Theory of Computation Seminar, Northeastern University

To subscribe to our mailing list, please visit https://lists.ccs.neu.edu/bin/listinfo/theory-talks.

### Fall 2015 Schedule

Tuesday, December 8, 12:30 PM, Elliot Anshelevich (RPI) Approximating Optimal Social Choice under Metric Preferences
[West Village H, room 366]

Thursday, November 12, 3:30 PM - 5 PM, Cris Moore (Santa Fe Institute) Community detection in networks
[177 Huntington Avenue, Floor 11, Boston, MA 02115]

Tuesday, November 10, 12:00 PM, Bobby Kleinberg (Cornell University and Microsoft Research New England) Inference-Based Privacy Guarantees for Differentially Private Mechanisms
[West Village H, room 366]

Thursday, November 5, 3:30 PM - 5 PM, Cris Moore (Santa Fe Institute) Phase transitions in random k-SAT
[177 Huntington Avenue, Floor 11, Boston, MA 02115]

Thursday, October 29, 3:30 PM - 5 PM, Cris Moore (Santa Fe Institute) Phase transitions in random graphs
[177 Huntington Avenue, Floor 11, Boston, MA 02115]

Thursday, October 22, 3:30 PM - 5 PM, Cris Moore (Santa Fe Institute) Computational complexity and landscapes
[177 Huntington Avenue, Floor 11, Boston, MA 02115]

### Spring 2015 Schedule

Thursday, February 19 and 26, 12 PM, Rajmohan Rajaraman (Northeastern University) Probabilistic Tree Embeddings
[In West Village H, room 166]

Thursday, February 12, 12 PM, Silas Richelson (UCLA) Topology-Hiding Computation
[In West Village H, room 166]

Thursday, January 29, 12 PM, Ravi Sundaram (Northeastern University) Secure and scalable match: overcoming the universal circuit bottleneck using group programs
[In West Village H, room 166]

Thursday, January 22, 12 PM, Vanishree Rao (UCLA) Adaptive Multiparty Non-Interactive Key Exchange without Setup in the Standard Model
[In West Village H, room 166]

Thursday, January 15, 12 PM, Marcos Kiwi (University of Chile) A Bound for the Diameter of Random Hyperbolic Graphs
[In West Village H, room 166]

### Fall 2014 Schedule

Thursday, December 11, 4 PM, Tim Smith (Northeastern University) Determination and Prediction of Infinite Words by Automata
[In West Village H, room 366]

Thursday, December 4, 4 PM, Elad Haramaty (Northeastern University) Linearity Testing
[In West Village H, room 366]

Thursday, November 20, 4 PM, Scott Roche (Northeastern University) Enabling Efficient and Robust Distributed Computation in Highly Dynamic Networks
[In West Village H, room 366]

Thursday, November 13, 4 PM, Zahra Jafargholi (Northeastern University) Tamper Detection and Continuous Non-Malleable Codes
[In West Village H, room 366]

Friday, November 7, 12 PM, Michael Kearns (University of Pennsylvania) Games, Networks, and People
[In Dockser Hall, room 240]

Thursday, November 6, 4 PM, Ryo Nishimaki (NTT Secure Platform Laboratories) How to Watermark Cryptographic Functions
[In West Village H, room 366]

Thursday, October 30, 4 PM, Hamid Jahanjou (Northeastern University) Bi-Lipschitz Bijection between the Boolean Cube and the Hamming Ball
[In West Village H, room 366]

Thursday, October 23, 4 PM, Pratyay Mukherjee (Aarhus University) Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based
[In West Village H, room 366]

### Spring 2014 Schedule

Wednesday, April 30, 2 PM, Eric Miles (Northeastern University): Towards Bridging Theory and Implementation of Cryptographic Primitives
[In West Village H, room 366.]

Wednesday, April 23, 12 PM, John Augustine (IIT Madras): Storage and Search in Dynamic Peer-to-Peer Networks
[In West Village H, room 366.]

Thursday, April 17, 11:30 AM, Cristopher Moore (Santa Fe Institute): Finding Communities in Networks: Message-passing, Phase Transitions, and New Spectral Methods
[In West Village H, room 366.]

Tuesday, April 15, 10:00 AM, Eric Blais (MIT): Sublinear-time Algorithms for Massive Datasets
[In West Village H, room 366.]

Thursday, March 27, 10:30 AM, Jonathan Ullman (Harvard University): Privacy and the Complexity of Simple Queries
[In West Village H, room 366.]

Monday, March 24, 10:30 AM, Lap Chi Lau (The Chinese University of Hong Kong): Algebraic Techniques in Fundamental Graph Problems
[In West Village H, room 366.]

### Fall 2013 Schedule

Wednesday, December 11, 10:30 AM, Edmund Yeh (Northeastern University): Polar Codes for Multiple Access Channels
[In West Village H, room 366.]

Wednesday, December 4, 10:30 AM, Karthikeyan Chandrasekaran (Harvard University): A Polynomial-Time Cutting Plane Algorithm for Matching
[In West Village H, room 366.]

Wednesday, November 20, 10:30 AM, Mahdi Cheraghchi (MIT): Non-Malleable Coding Against Bit-wise and Split-State Tampering
[In West Village H, room 166.]

Wednesday, October 30, 10:30 AM, Chen Avin (Ben Gurion University): From Caesar to Twitter - An Axiomatic Approach to Elites of Social Networks
[In West Village H, room 366.]

Wednesday, October 23, 10:30 AM, Chin Ho Lee (Northeastern University): Limits of provable security for homomorphic encryption
[In West Village H, room 166.]

Wednesday, October 16, 10:30 AM, Abhinav Kumar (MIT): Inverse problems in potential energy minimization, and algorithms for self-assembly
[In West Village H, room 366.]

Wednesday, October 9, 10:30 AM, Pavel Hubacek (Aarhus University): Limits on the Power of Cryptographic Cheap Talk
[In West Village H, room 166.]

Tuesday, September 17, 11:00am, Seny Kamara (Microsoft Research): How to Search on Encrypted Data
[In West Village H, room 366.]

### Spring 2013 Schedule

Wednesday, May 29, 11:45am, Eli Ben-Sasson (Technion): Universal and Affordable Computational Integrity, or, Succinctly, from C to PCP
[In West Village H, room 366.]

Wednesday, May 8, 2:00pm, Yevgeniy Dodis (New York University): Key Derivation Without Entropy Waste
[In West Village H, room 366.]

Thursday, April 25, 10:30am, Eric Miles (Northeastern University): Foundations of cryptography: Towards bridging theory and implementation [thesis proposal]
[In West Village F, room 10.]

Tuesday, April 9, 2:00pm, Daniel Wichs (Northeastern University): Cryptography with imperfect secrets
[In West Village H, room 366.]

Tuesday, March 26, 2:00PM, Yevgeniy Dodis (New York University): Overcoming Weak Expectations
[In West Village H, room 366.]

Monday, Feb 4, 1:30PM, Eric Miles (Northeastern University): Shielding circuits with groups
[In West Village H, room 166.]

### Fall 2012 Schedule

Monday, Oct 1, 10:00AM, Eric Miles (Northeastern University): Substitution-permutation networks, pseudorandom functions, and natural proofs
[Boston University Computer Science Department: MCS137, 111 Cummington Street.]

### Spring 2012 Schedule

Monday, Mar 19, 1:30PM, Thomas Vidick (MIT): Noncommutative Grothendieck inequalities and application to quantum two-player games
[In West Village H, room 366.]

Tuesday, Mar 13, 11:00AM, Eric Miles (Northeastern University): Leakage-resistant cryptography
[In West Village H, room 166.]

Tuesday, Feb 28, 11:30AM, Adam Smith (Pennsylvania State University): Pinning Down "Privacy" in Statistical Databases
[In West Village H, room 366.]

Friday, Feb 10, 2:00PM, Chinmoy Dutta (Northeastern University): Strong Partitions and Universal Steiner Trees for Graphs
[Boston University Computer Science Department: MCS148, 111 Cummington Street.]

Wednesday, Feb 8, 10:00AM, Nishanth Chandran (Microsoft Research, Redmond): Cryptographic protocols in the era of cloud computing
[In West Village H, room 366.]

Monday, Jan 30, 11:30AM, Daniel Wichs (IBM Research, T.J. Watson Center): Maintaining Security Under Imperfect Secrecy
[In West Village H, room 366.]

Monday, Jan 23, 1:30PM, Scott Roche (Northeastern University): The spread of diseases on networks
[In West Village H, room 166.]

Tuesday, Jan 17, 10:30AM, Dov Gordon (Columbia University): Secure Computation: From Theory Towards Practice
[In West Village H, room 366.]

### Fall 2011 Schedule

Thursday, Dec 15, 11:00 AM, Chinmoy Dutta (Northeastern University): More on a problem of Zarankiewicz
[In West Village H, room 166.]

Thursday, Dec 1, 3:00 PM, Eric Miles (Northeastern University): Pseudorandom generators, cryptography, and P vs. NP
[In West Village H, room 366.]

Wednesday, Nov 30, 11:30 AM, Rajmohan Rajaraman (Northeastern University): Information Spreading in Dynamic Networks: On the Power of Forwarding Algorithms
[In West Village H, room 166.]

Wednesday, Nov 16, 11:30 AM, Emanuele Viola (Northeastern University): The communication complexity of addition
[In West Village H, room 166.]

Tuesday, Nov 8, Abhinav Kumar (MIT):
4:30 PM: Inverse energy minimization problems
3:00 PM: Potential energy minimization and universal optimality
1:30 PM: Linear programming bounds in geometry and coding theory
[Each talk takes place in 511 Lake Hall (Northeastern Mathematics Department). Tea will be served at 2:30 and 4:00 PM in 544 Nightingale Hall.]

### Spring 2011 Schedule

Monday, May 2, 2:00 PM, David Karger (MIT): Random Sampling and Cuts in Graphs: Three Combinatorial Proofs

Friday, Apr 29, 11:00 AM, Boaz Patt-Shamir (Tel Aviv University): Algorithmic Recommender Systems

Friday, Apr 22, 2:00 PM, Emanuele Viola (Northeastern University): Extractors for Circuit Sources
[Boston University Computer Science Department: MCS135, 111 Cummington Street.]

Tuesday, Mar 8, 1:30 PM, Yang Cai (MIT): On Minmax Theorems for Multiplayer Games

Thursday, Feb 24, 4:30 PM, Joseph Mitchell (Stony Brook): Approximation Algorithms for Traveling Salesman Problem with Neighborhoods and Related Geometric Network Optimization Problems
[Northeastern Mathematics Department: 509 Lake Hall. Tea at 4:00 p.m. in 544 Nightingale Hall.]

Tuesday, Feb 15, 1:30 PM, Ravi Sundaram (Northeastern University): A presentation of the Moser-Scheder derandomization of Schoning's k-SAT algorithm

Friday, Feb 4, 2:00 PM, Amir Shpilka (Technion and Microsoft Research): Recent Results on Polynomial Identity Testing

Tuesday, Jan 25, 1:30 PM, Eric Miles (Northeastern University): Increasing the Stretch of Pseudorandom Generators

Tuesday, Jan 18, 1:30 PM, Chinmoy Dutta (Northeastern University): Constructive Proofs of Concentration Bounds

### Fall 2010 Schedule

Tuesday, Nov 16, 1:30 PM, Emanuele Viola (Northeastern University): Williams' Breakthrough

Friday, Nov 12, 3:00 PM, Scott Aaronson (MIT): The Computational Complexity of Linear Optics
[Boston University Computer Science Department: MCS135, 111 Cummington Street.]

Tuesday, Oct 19, 1:30 PM, Bernhard Haeupler (MIT): New Constructive Aspects of the Lovasz Local Lemma

Friday, Oct 1, 2:00 PM, Constantinos Daskalakis (MIT): Geometrically Embedded Local Search

### Spring 2010 Schedule

Monday, May 5, 10:30 AM, Chinmoy Dutta (Max Planck Institute for Informatics): Lower Bounds for Noisy Distributed Computation

Friday, April 30, 3:00 PM, Ravi Sundaram (Northeastern University): Reducibility among Fractional Stability Problems

Monday, April 26, 4:30 PM, Homin K. Lee (University of Texas at Austin): On the Learnability of Monotone Functions

Thursday, April 23, 10:30 AM, Shiva Kasiviswanathan (Los Alamos National Laboratory): A Rigorous Approach to Statistical Database Privacy

Thursday, April 15, 4:30 PM, Henry Cohn (Microsoft Research New England): Finding small solutions of equations: from cryptanalysis to error-correcting codes
[Northeastern Mathematics Department: 509 Lake Hall. Tea at 4:00 p.m. in 544 Nightingale Hall.]

Wednesday, Apr 14, 11:00 AM, Elena Grigorescu (MIT): Symmetries in Algebraic Property Testing

Thursday, Apr 8, 4:30 PM, Assaf Naor (NYU): L_1 embeddings of the Heisenberg group and fast estimation of graph isoperimetry
[Harvard Science Center: Hall D. Tea at 4:00 p.m. in the 4th floor common lounge.]

Thursday, Apr 8, 3:00 PM, Georgios Zervas (Boston University): Information Asymmetries in Pay-Per-Bid Auctions: How Swoopo Makes Bank

Thursday, Feb 1, 11:30 AM, Alex Samorodnitsky (The Hebrew University of Jerusalem): From local to global, property testing and some related topics

Thursday, Jan 28, 11:00 AM, Alexander Gamburd (University of California, Santa Cruz): Expander Graphs: Constructions and Applications

Thursday, Jan 21, 3:30 PM, Katya Scheinberg (Columbia University): First-order methods with complexity bounds for optimization problems arising in statistical machine learning

Tuesday, Jan 12, 10:30 AM, Prasad Tetali (Georgia Tech): Markov Chain Mixing with Applications

### Fall 2009 Schedule

Tuesday, December 15, 11:00 AM, Gopal Pandurangan (Nanyang Technological University and Brown University): Walking Faster in a Distributed Network

Friday, December 11, 2:00 PM, Christopher King (Northeastern University, Department of Mathematics): Superadditivity of quantum channel capacity via random matrix methods

Friday, October 2, 2:00 PM, Brent Heeringa (Williams College): Data Structures for Dynamic Partial-Orders
[Boston University Computer Science Department: MCS135, 111 Cummington Street.]

Thursday, September 17, 4:30 PM, Alexander Gamburd (UC Santa Cruz): Expanders: from arithmetic to combinatorics and back
[Northeastern University Mathematics Department: 509 Lake Hall. Tea at 4:00 p.m. in 544 Nightingale Hall.]

Friday, August 21, 11:00 AM, Adam Meyerson (UCLA): On the Price of Mediation

### Spring 2009 Schedule

May 8, 1 PM, Emanuele Viola (Northeastern University): Bits vs. Trits

Apr 10, 4 PM, Lance Fortnow (Northwestern University): Some Recent Results on Structural Complexity
[Boston University Computer Science Department: MCS135, 111 Cummington Street.]

Mar 25, 11:30 AM, Moses Charikar (Princeton University): New Insights into Semidefinite Programming for Combinatorial Optimization

Feb 20, 4 PM, Joshua Brody (Dartmouth College): Some Applications of Communication Complexity

Jan 30, 4 PM, Arkadev Chattopadhyay (IAS): Multiparty Communication Lower Bounds by the Generalized Discrepancy Method
[Boston University Computer Science Department: MCS135, 111 Cummington Street.]

### Fall 2008 Schedule

Dec 12, 4 PM, Benjamin Rossman (MIT): The Constant-Depth Complexity of k-Clique

Nov 21, 4 PM, Alex Samorodnitsky (Hebrew University & Microsoft Research): Edge-isoperimetry and influences

Oct 24, 4 PM, Emanuele Viola (Northeastern University): Polynomials over {0,1}
[Boston University Computer Science Department: MCS135, 111 Cummington Street.]

### Fall 2015 Talk Abstracts

December 8, Elliot Anshelevich (RPI) Approximating Optimal Social Choice under Metric Preferences

We examine the quality of social choice mechanisms using a utilitarian view, in which all of the agents have costs for each of the possible alternatives. While these underlying costs determine what the optimal alternative is, they may be unknown to the social choice mechanism; instead the mechanism must decide on a good alternative based only on the ordinal preferences of the agents which are induced by the underlying costs. Due to its limited information, such a social choice mechanism cannot simply select the alternative that minimizes the total social cost (or minimizes some other objective function). Thus, we seek to bound the distortion: the worst-case ratio between the social cost of the alternative selected and the optimal alternative. Distortion measures how good a mechanism is at approximating the alternative with minimum social cost, while using only ordinal preference information. The underlying costs can be arbitrary, implicit, and unknown; our only assumption is that the agent costs form a metric space, which is a natural assumption in many settings. We quantify the distortion of many well-known social choice mechanisms. We show that for both total social cost and median agent cost, many positional scoring rules have large distortion, while on the other hand Copeland and similar mechanisms perform optimally or near-optimally, always obtaining a distortion of at most 5. We also give lower bounds on the distortion that could be obtained by *any* deterministic social choice mechanism, and extend our results on median agent cost to more general objective functions.
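The distortion ratio is concrete enough to compute on toy instances. The following Python sketch (the function names and the one-dimensional metric are illustrative choices, not from the talk) measures the distortion of plurality on a line metric, where each agent's ordinal preference is the one induced by distance:

```python
def total_cost(agents, candidate):
    # Social cost of a candidate: sum of metric distances (here, the real line).
    return sum(abs(a - candidate) for a in agents)

def plurality_winner(agents, candidates):
    # Plurality sees only each agent's top choice, i.e. the nearest candidate.
    votes = {c: 0 for c in candidates}
    for a in agents:
        votes[min(candidates, key=lambda c: abs(a - c))] += 1
    return max(candidates, key=lambda c: votes[c])

def distortion(agents, candidates):
    # Ratio of the elected candidate's social cost to the optimum,
    # on this single instance (the mechanism's distortion is the worst case).
    winner = plurality_winner(agents, candidates)
    opt = min(total_cost(agents, c) for c in candidates)
    return total_cost(agents, winner) / opt

# 51 agents at position 0.9 (slightly preferring candidate 0.0) and 49 at 2.0:
# plurality elects 0.0, but candidate 2.0 has much lower total social cost.
print(distortion([0.9] * 51 + [2.0] * 49, [0.0, 2.0]))  # ratio > 2.5
```

On this instance plurality's distortion already exceeds 2.5; per the talk, mechanisms such as Copeland keep the worst case at most 5.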

November 12, Cris Moore (Santa Fe Institute) Community detection in networks

-- The stochastic block model
-- The analogy between statistical physics and statistical inference
-- Belief propagation and variational methods
-- The detectability transition
-- (If there's time) Spectral clustering

November 10, Bobby Kleinberg (Cornell University and Microsoft Research New England) Inference-Based Privacy Guarantees for Differentially Private Mechanisms

How much information can be learnt about an individual by observing the outcome of a computation? If the computation is differentially private, the answer is: "not much more than if the individual's data had been excluded from the input." A stronger notion of privacy, originally propounded by Dalenius in the 1970s, requires instead that it should not be possible to learn much more about an individual than if the outcome of the computation had never been revealed. Simple examples, as well as a general impossibility theorem of Dwork and Naor, preclude the possibility of supplying this stronger "inferential" guarantee against attackers with arbitrary auxiliary information, assuming the computation is at least minimally useful.

In this talk we revisit the notion of inferential privacy and ask: under what limitations on the adversary's side information can we deduce an inferential guarantee from a differential one? We model the adversary's side information as a prior distribution over datasets (or, more generally, a set of possible priors) and prove two main results. The first result pertains to distributions that satisfy a natural positive-affiliation condition, and gives an upper bound on the inferential privacy guarantee for any differentially private mechanism. This upper bound is matched by a simple mechanism that adds Laplace noise to the sum of the data. The second result pertains to distributions that have weak correlations, defined in terms of a suitable "influence matrix". The result provides an upper bound for inferential privacy in terms of the differential privacy parameter and the spectral norm of this matrix.
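The mechanism matching the first upper bound is the standard Laplace mechanism applied to the sum. A minimal sketch, assuming 0/1-valued entries (the function name is mine, not the paper's):

```python
import random

def laplace_sum(data, epsilon):
    """Release sum(data) for 0/1-valued entries with Laplace noise.
    Changing one individual's entry moves the sum by at most 1
    (sensitivity 1), so Laplace noise of scale 1/epsilon yields
    epsilon-differential privacy for the released value."""
    # A Laplace(0, b) sample is the difference of two i.i.d. Exp(mean b) samples.
    b = 1.0 / epsilon
    noise = random.expovariate(1.0 / b) - random.expovariate(1.0 / b)
    return sum(data) + noise
```

The released value is unbiased; averaging repeated releases concentrates around the true sum at the cost of the cumulative privacy budget.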

November 5, Cris Moore (Santa Fe Institute) Phase transitions in random k-SAT

-- Early history and phenomenology
-- First and second moment bounds
-- Algorithms, clustering, and frozen variables
-- Why we believe in a regime where solutions exist but are hard to find

October 29, Cris Moore (Santa Fe Institute) Phase transitions in random graphs

-- The emergence and size of the giant component
-- Branching processes and differential equations
-- Power laws at criticality
-- The k-core and discontinuous transition

October 22, Cris Moore (Santa Fe Institute) Computational complexity and landscapes

-- Two puzzles: Euler vs. Hamilton
-- P vs. NP and NP-completeness: building computers
-- When greed works: Minimum Spanning Tree and Max Flow
-- Bumpy energy landscapes and local optima

### Spring 2015 Talk Abstracts

February 19 and 26, Rajmohan Rajaraman (Northeastern University) Probabilistic Tree Embeddings

I will review a set of classic results in graph embeddings that have been very influential in approximation algorithms for graph optimization problems. The two central results in this space are due to [Fakcharoenphol-Rao-Talwar 2004] and [Racke 2008]. FRT shows that there exists a probabilistic embedding of any n-point metric into n-point tree metrics in which the expected distortion in distance between any pair of points is O(log(n)), and such an embedding can be obtained in polynomial time. Racke shows that there exists a probabilistic embedding of any n-vertex graph into n-leaf trees such that the expected distortion in the size of any cut is O(log(n)), and such an embedding can be obtained in polynomial time. Both these results enable the reduction of many optimization problems on graphs to optimization problems on trees (which are often much easier to solve) at the loss of an O(log(n)) factor in the approximation. For many problems, this methodology has yielded asymptotically the best possible polynomial-time approximations. I will define and motivate the embedding problems, state the main results known, and present a proof that the two notions of embeddings are equivalent in a sense. Time permitting, I will sketch the FRT and Racke algorithms. Most of the presentation will be based on a manuscript of [Andersen-Feige 2009] and slides of Feige.
[Fakcharoenphol-Rao-Talwar 2004]
A tight bound on approximating arbitrary metrics by tree metrics
research.microsoft.com/pubs/74347/06-journal.pdf
[Racke 2008]
Optimal hierarchical decompositions for congestion minimization in networks
http://dl.acm.org/citation.cfm?id=1374415
[Andersen-Feige 2009]
Interchanging distance and capacity in probabilistic mappings
http://arxiv.org/abs/0907.3631
www.wisdom.weizmann.ac.il/~feige/Slides/CapacityMapping.ppt

February 12, Silas Richelson (UCLA) Topology-Hiding Computation

Secure Multi-party Computation (MPC) is one of the foundational achievements of modern cryptography, allowing multiple, distrusting parties to jointly compute a function of their inputs while revealing nothing but the output of the function. Following the seminal works of Yao, of Goldreich, Micali, and Wigderson, and of Ben-Or, Goldwasser, and Wigderson, the study of MPC has expanded to consider a wide variety of questions, including variants in the attack model, underlying assumptions, and the complexity and composability of the resulting protocols. One question that appears to have received very little attention, however, is that of MPC over an underlying communication network whose structure is, in itself, sensitive information. This question, in addition to being of pure theoretical interest, arises naturally in many contexts: designing privacy-preserving social networks, private peer-to-peer computations, vehicle-to-vehicle networks, and the "Internet of Things" are some examples. In this paper, we initiate the study of "topology-hiding computation" in the computational setting. We give formal definitions in both simulation-based and indistinguishability-based flavors. We show that, even for fail-stop adversaries, there are some strong impossibility results. Despite this, we show that protocols for topology-hiding computation can be constructed in the semi-honest and fail-stop models, if we somewhat restrict the set of nodes the adversary may corrupt. Full version: http://eprint.iacr.org/2014/1022.pdf

January 29, Ravi Sundaram (Northeastern University) Secure and scalable match: overcoming the universal circuit bottleneck using group programs

Confidential Content-Based Publish/Subscribe (C-CBPS) is an interaction model that allows parties to exchange content while protecting their security and privacy interests. In this paper we advance the state of the art in C-CBPS by showing how all predicate circuits in NC1 (logarithmic-depth, bounded fan-in) can be confidentially computed by a broker while guaranteeing perfect information-theoretic security. Previous work could handle only strictly shallower circuits (e.g. those with depth $O(\sqrt{\lg n})$) [SYY99, V76]. We present three protocols---UGP-Match, FSGP-Match and OFSGP-Match---based on 2-decomposable randomized encodings of group programs for circuits in NC1. UGP-Match is conceptually simple and has a clean proof of correctness, but its running time is a polynomial with a high exponent and hence impractical. FSGP-Match uses a "fixed structure" construction that reduces the exponent drastically and achieves efficiency and scalability. OFSGP-Match optimizes the group programs further to shave off a linear factor. Joint work with Rajesh Krishnan of Cosocket LLC.

January 22, Vanishree Rao (UCLA) Adaptive Multiparty Non-Interactive Key Exchange without Setup in the Standard Model

Non-interactive key exchange (NIKE) is a fundamental notion in Cryptography. This notion was introduced by Diffie and Hellman in 1976. They proposed the celebrated 2-party NIKE protocol and left open as a fascinating question, whether NIKE could be realized in the multiparty setting. NIKE has since then been an active area of research with an ultimate goal of obtaining best possible security in the multiparty setting. Although this has evaded researchers for many decades, advancements have been made through relaxations in multiple directions such as restricting to 3-parties, static/semi-static model (where the adversary needs to commit to the set of parties he wishes to be challenged upon ahead of time), random-oracle model, allowing initial setup, etc.

In this talk, we shall see the first multiparty NIKE protocol that is adaptively secure and in the standard model. (Time permitting, we shall also see how to remove the setup.)

Our construction is based on indistinguishability obfuscation and obliviously-patchable puncturable pseudorandom functions, a new notion that we introduce.

We shall see a novel technique for employing indistinguishability obfuscation, which is interesting in its own right and which I believe will find wider applications in other settings. This technique pertains to overcoming the somewhat inherent drawback of non-adaptivity in the puncturing technique introduced by Sahai and Waters [STOC '14]. Central to this technique is our new notion of obliviously-patchable puncturable pseudorandom functions.

Paper minus talk: In my paper, you will also find a concrete construction of these special pseudorandom functions using multilinear maps and their recent approximations -- the leveled-graded encoding schemes. Note that these pseudorandom functions amount to an interactive assumption. The paper also establishes, via a meta-reduction technique, that in natural settings an interactive assumption is necessary (even with setup).

January 15, Marcos Kiwi (University of Chile) A Bound for the Diameter of Random Hyperbolic Graphs

Random hyperbolic graphs were recently introduced as a model for large networks. Their rigorous study was initiated using the following model: for $\alpha>1/2$, $C>0$, set $R=2\ln n+C$ and define the graph $G=(V,E)$ on $n$ nodes as follows: For each $v\in V$, generate i.i.d. polar coordinates $(r_v,\theta_v)$ using the joint density function $f(r,\theta)$, with $\theta_v$ chosen uniformly from $[0,2\pi)$ and $r_v$ with density $f(r)=\frac{\alpha\sinh(\alpha r)}{\cosh(\alpha R)-1}$ for $0\leq r< R$. Then join two vertices by an edge if their hyperbolic distance is at most $R$. Our main result establishes that for the relevant range of $\alpha$, a.a.s., for any two vertices of the same component, their graph distance is $O(\log^{C_0+1+o(1)} n)$, where $C_0=2/(\frac{1}{2}-\frac{3}{4}\alpha+\frac{\alpha^2}{4})$, thus answering a question raised by Gugelmann et al. (2012) concerning the diameter of such random graphs.
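The model is straightforward to simulate. A Python sketch under the stated parameterization (function names are illustrative): the radius is sampled by inverting the CDF $F(r)=(\cosh(\alpha r)-1)/(\cosh(\alpha R)-1)$ of the density $f(r)$, and edges use the hyperbolic law of cosines for the distance between polar points.

```python
import math
import random

def sample_point(alpha, R):
    """Sample (r, theta) as in the model: theta uniform on [0, 2*pi),
    r on [0, R) by inverting F(r) = (cosh(alpha*r) - 1)/(cosh(alpha*R) - 1)."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    u = random.random()
    r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
    return r, theta

def hyperbolic_distance(p, q):
    """Hyperbolic law of cosines for points in polar coordinates."""
    r1, t1 = p
    r2, t2 = q
    dt = math.pi - abs(math.pi - abs(t1 - t2))  # angle difference in [0, pi]
    x = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dt)
    return math.acosh(max(1.0, x))  # clamp guards against float round-off

def random_hyperbolic_graph(n, alpha, C):
    """Sample the graph G = (V, E): n i.i.d. points, edges at distance <= R."""
    R = 2.0 * math.log(n) + C
    pts = [sample_point(alpha, R) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if hyperbolic_distance(pts[i], pts[j]) <= R]
    return pts, edges
```

The brute-force edge test is quadratic in $n$; it is only meant to make the definition concrete, not to scale.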

### Fall 2014 Talk Abstracts

December 11, Tim Smith (Northeastern University) Determination and Prediction of Infinite Words by Automata

The talk will have two parts. The first part is drawn from my paper "On Infinite Words Determined by Stack Automata" which appeared at FSTTCS 2013. In this paper we characterize the infinite words determined by various types of automata. We say that an infinite language L (recognized by an automaton) determines an infinite word α if every string in L is a prefix of α. We consider the infinite words determined by finite automata, pushdown automata, stack automata (a generalization of pushdown automata in which the stack head can move up and down the stack in read-only mode), and multihead finite automata.

In the second part of the talk, I will discuss unpublished work on prediction of infinite words. Here we consider an “emitter” (generator) and a “predictor” (guesser). The emitter takes no input, but just outputs an infinite word one symbol at a time. The predictor receives each symbol output by the emitter, and tries to guess the next symbol before it appears. We say that the predictor “masters” the emitter if there is a point after which all of the predictor’s guesses are correct. We consider various types of automata as predictors and see which classes of infinite words they can predict.
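As a toy illustration of the emitter/predictor game (this predictor uses unbounded memory, so it is not one of the automaton classes from the talk), here is a predictor that masters every eventually periodic word by hypothesizing the smallest period consistent with the recent history:

```python
def period_predictor(history):
    """Guess the next symbol: find the smallest period p that the second
    half of the history obeys, and copy the symbol p positions back.
    On an eventually periodic word, all guesses are eventually correct."""
    n = len(history)
    for p in range(1, n // 2 + 1):
        if all(history[i] == history[i - p] for i in range(n // 2, n)):
            return history[n - p]
    return history[-1] if history else 0  # arbitrary guess early on

# Emitter: preperiod 0, 1, 2, then the tail 0, 1, 0, 1, ... forever.
def emitter(i):
    return (0, 1, 2)[i] if i < 3 else (i - 3) % 2

history, wrong = [], []
for i in range(100):
    if period_predictor(history) != emitter(i):
        wrong.append(i)          # record each incorrect guess
    history.append(emitter(i))
# `wrong` is a finite prefix: after it, every guess is correct,
# i.e. the predictor masters this emitter.
```

The predictor is wrong only finitely often on any eventually periodic word; on aperiodic words (which stronger automata in the talk can handle) it need not master the emitter.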

December 4, Elad Haramaty (Northeastern University) Linearity Testing

We talk about a "classical" result on linearity testing (Blum-Luby-Rubinfeld) which has many applications throughout theoretical CS.

A function f is linear if f(x)+f(y)=f(x+y) for all inputs x and y. In this talk, we will show how one can verify (with high probability) that a function is linear using only 3 queries to the function. This is the most fundamental result in the field of "algebraic property testing". In the talk we will possibly show more than one proof, and maybe some applications and generalizations of this result.
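The Blum-Luby-Rubinfeld test itself fits in a few lines. A sketch over GF(2)^n with inputs packed into Python ints (helper names and the example functions are mine):

```python
import random

def blr_test(f, n, trials=100):
    """BLR linearity test: each trial makes exactly 3 queries to f.
    Pick random x, y in {0,1}^n and check f(x) + f(y) == f(x + y),
    where + is bitwise XOR over GF(2)."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False   # found a witness of non-linearity
    return True            # consistent with linearity on all sampled triples

# A linear function: parity of the bits selected by a fixed mask.
def parity(x):
    return bin(x & 0b1011).count("1") % 2

# A non-linear function: majority of the low 3 bits.
def majority3(x):
    return int(bin(x & 0b111).count("1") >= 2)
```

The BLR theorem guarantees that a function far from every linear function fails each trial with probability proportional to its distance, so repeating the 3-query trial drives the error down exponentially; majority3, for instance, is caught with constant probability per trial.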

November 20, Scott Roche (Northeastern University) Enabling Efficient and Robust Distributed Computation in Highly Dynamic Networks

Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm.

Our main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a constant degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to O(n/ polylog(n)) per round (where n is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only O(polylog(n)) overhead for topology maintenance. As shown by previous work, the given protocol is a fundamental ingredient that is needed for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic networks and enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.

This is joint work with John Augustine, Gopal Pandurangan, Peter Robinson, Scott Roche, and Eli Upfal.

November 13, Zahra Jafargholi (Northeastern University) Tamper Detection and Continuous Non-Malleable Codes

We consider a public and keyless code $(\Enc,\Dec)$ which is used to encode a message $m$ and derive a codeword $c = \Enc(m)$. The codeword can be adversarially tampered via a function $f \in \F$ from some "tampering function family" $\F$, resulting in a tampered value $c' = f(c)$. We study the different types of security guarantees that can be achieved in this scenario for different families $\F$ of tampering attacks.

Firstly, we initiate the general study of tamper-detection codes, which must detect that tampering occurred and output $\Dec(c') = \bot$. We show that such codes exist for any family of functions $\F$ over $n$ bit codewords, as long as $|\F| < 2^{2^n}$ is sufficiently smaller than the set of all possible functions, and the functions $f \in \F$ are further \emph{restricted} in two ways: (1) they can only have a few fixed points $x$ such that $f(x)=x$, (2) they must have high entropy of $f(x)$ over a random $x$. Such codes can also be made efficient when $|\F| = 2^{\poly(n)}$. For example, $\F$ can be the family of all low-degree polynomials excluding constant and identity polynomials. Such tamper-detection codes generalize the algebraic manipulation detection (AMD) codes of Cramer et al. (EUROCRYPT '08).

Next, we revisit non-malleable codes, which were introduced by Dziembowski, Pietrzak and Wichs (ICS '10) and require that $\Dec(c')$ either decodes to the original message $m$, or to some unrelated value (possibly $\bot$) that doesn't provide any information about $m$. We give a modular construction of non-malleable codes by combining tamper-detection codes and leakage-resilient codes. This gives an alternate proof of the existence of non-malleable codes with optimal rate for any family $\F$ of size $|\F| < 2^{2^n}$, as well as efficient constructions for families of size $|\F| = 2^{\poly(n)}$.

Finally, we initiate the general study of continuous non-malleable codes, which provide a non-malleability guarantee against an attacker that can tamper a codeword multiple times. We define several variants of the problem depending on: (I) whether tampering is persistent, so that each successive attack modifies the codeword produced by previous attacks, or non-persistent, so that tampering is always applied to the original codeword; (II) whether we can "self-destruct" and stop the experiment if a tampered codeword is ever detected to be invalid, or whether the attacker can always tamper more. In the case of persistent tampering with self-destruct (the weakest case), we get broad existence results, essentially matching what's known for standard non-malleable codes. In the case of non-persistent tampering without self-destruct (the strongest case), we must further restrict the tampering functions to have few fixed points and high entropy. The two intermediate cases correspond to requiring only one of the above two restrictions.

These results have applications in cryptography to related-key attack (RKA) security and to protecting devices against tampering attacks without requiring state or randomness.

Joint work with Daniel Wichs.

November 7, Michael Kearns (University of Pennsylvania) : Games, Networks, and People

Beginning with the introduction of graphical games and related models, there is now a rich body of algorithmic connections between probabilistic inference, game theory and microeconomics. Strategic analogues of belief propagation and other inference techniques have been developed for the computation of Nash, correlated and market equilibria, and have played a significant role in the evolution of algorithmic game theory over the past decade.

There are also important points of departure between probabilistic and strategic graphical models - perhaps most notably that in the latter, vertices are not random variables, but self-interested humans or organizations. It is thus natural to wonder how social network structures might influence equilibrium outcomes such as social welfare or the relative wealth and power of individuals. Such questions lead naturally to human-subject experiments on strategic interaction in social networks.

November 6, Ryo Nishimaki (NTT Secure Platform Laboratories) : How to Watermark Cryptographic Functions

We study the problem of watermarking cryptographic functions. Informally speaking, a digital watermarking scheme for cryptographic functions embeds information, called a mark, into functions such as one-way functions and decryption functions of public-key encryption.

There are two basic requirements for watermarking schemes.
(1) A mark embedded function must be functionally equivalent to the original function.
(2) It must be difficult for adversaries to remove the embedded mark without damaging the original functionality.
In spite of its importance and usefulness, there have been only a few theoretical works on watermarking for functions (or programs), and we lack rigorous and meaningful definitions of watermarking for cryptographic functions as well as concrete constructions.
To address this, we introduce a notion of watermarking for cryptographic functions and define its security. Furthermore, we present a cryptographic function (concretely, a lossy trapdoor function) based on a standard number-theoretic assumption (the decisional linear assumption) and a watermarking scheme for that function. Our watermarking scheme is secure under the same assumption in the standard model.

October 30, Hamid Jahanjou (Northeastern University) : Bi-Lipschitz Bijection between the Boolean Cube and the Hamming Ball

By Itai Benjamini, Gil Cohen, Igor Shinkar

We construct a bi-Lipschitz bijection from the Boolean cube to the Hamming ball of equal volume. More precisely, we show that for all even n there exists an explicit bijection f from the n-dimensional Boolean cube to the Hamming ball of equal volume embedded in the (n+1)-dimensional Boolean cube, such that for all x and y it holds that distance(x,y)/5 <= distance(f(x),f(y)) <= 4·distance(x,y), where distance(·,·) denotes the Hamming distance. In particular, this implies that the Hamming ball is bi-Lipschitz transitive.

This result gives a strong negative answer to an open problem of Lovett and Viola [CC 2012], who raised the question in the context of sampling distributions in low-level complexity classes. The conceptual implication is that the problem of proving lower bounds in the context of sampling distributions will require some new ideas beyond the sensitivity-based structural results of Boppana [IPL 97]. We study the mapping f further and show that it (and its inverse) are computable in DLOGTIME-uniform TC0, but not in AC0. Moreover, we prove that f is "approximately local" in the sense that all but the last output bit of f are essentially determined by a single input bit.

October 23, Pratyay Mukherjee (Aarhus University) : Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based

By Craig Gentry, Amit Sahai and Brent Waters

We describe a comparatively simple fully homomorphic encryption (FHE) scheme based on the learning with errors (LWE) problem. In previous LWE-based FHE schemes, multiplication is a complicated and expensive step involving “relinearization”. In this work, we propose a new technique for building FHE schemes that we call the approximate eigenvector method. In our scheme, for the most part, homomorphic addition and multiplication are just matrix addition and multiplication. This makes our scheme both asymptotically faster and (we believe) easier to understand. In previous schemes, the homomorphic evaluator needs to obtain the user’s “evaluation key”, which consists of a chain of encrypted secret keys. Our scheme has no evaluation key. The evaluator can do homomorphic operations without knowing the user’s public key at all, except for some basic parameters. This fact helps us construct the first identity-based FHE scheme. Using similar techniques, we show how to compile a recent attribute-based encryption scheme for circuits by Gorbunov et al. into an attribute-based FHE scheme that permits data encrypted under the same index to be processed homomorphically.
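The approximate eigenvector idea can be sketched in a few lines. This is a toy, completely insecure illustration of my own (no modulus, nothing hides the message, no gadget decomposition as in the real scheme); it only shows why matrix addition and multiplication track plaintext addition and multiplication:

```python
import numpy as np

# Toy illustration: a "ciphertext" is a matrix C acting on the secret
# vector s as C @ s = m*s + e for small noise e, i.e. s is an approximate
# eigenvector of C with eigenvalue m. All parameters are illustrative.

rng = np.random.default_rng(1)
n, B = 4, 10**6
s = np.array([3, 7, 5, B])              # secret vector with one large coordinate

def enc(m):
    E = rng.integers(-1, 2, size=(n, n))
    E[:, -1] = 0                        # keep the noise E @ s small
    return m * np.eye(n, dtype=np.int64) + E

def dec(C):
    return round(int((C @ s)[-1]) / B)  # (C @ s)[-1] = m*B + small noise

C1, C2 = enc(3), enc(5)
print(dec(C1 + C2))   # -> 8  (homomorphic addition)
print(dec(C1 @ C2))   # -> 15 (homomorphic multiplication)
```

Multiplication grows the noise (C1 @ C2 has noise m2·e1 + C1·e2), which is one reason the real scheme works modulo q and uses bit decomposition to keep matrix entries small.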

### Spring 2014 Talk Abstracts

April 30, Eric Miles (Northeastern University) : Towards Bridging Theory and Implementation of Cryptographic Primitives

It has been widely observed that there is a significant gap between the way that many cryptographic primitives are implemented and attacked in practice, and the corresponding theoretical constructions and analyses. In this dissertation we study the construction of cryptographic primitives, with an eye towards bridging this gap.

We first study the fundamental task of generating large amounts of random data from a short initial random seed. Theoretical constructions in this area are known as pseudorandom functions (PRFs), and despite their importance there is a gap in both efficiency and methodology when compared to practical implementations. We construct several new candidate PRFs inspired by the substitution-permutation network paradigm, which is widely used in practice but has not previously been used to construct asymptotically-secure candidate PRFs. We show that our candidates are computable more efficiently than previous candidates in a variety of computational models.
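The SPN pattern itself is easy to sketch: key mixing, a layer of local nonlinear S-boxes, then a global bit permutation. The S-box and permutation below are arbitrary illustrative choices, not the dissertation's candidate PRFs:

```python
# Minimal sketch of one substitution-permutation network (SPN) round on a
# 16-bit state. Both SBOX (a permutation of the 16 nibble values) and PERM
# (a permutation of the 16 bit positions) are illustrative choices.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
PERM = [(4 * i) % 15 if i != 15 else 15 for i in range(16)]  # bit i -> PERM[i]

def spn_round(state, key):
    state ^= key                                           # key mixing
    state = sum(SBOX[(state >> (4 * j)) & 0xF] << (4 * j)  # S-box layer
                for j in range(4))
    out = 0
    for i in range(16):                                    # bit permutation layer
        out |= ((state >> i) & 1) << PERM[i]
    return out
```

A real construction iterates many such rounds with independent round keys; since each layer is invertible, every round is a permutation of the state space.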

We next study the construction of arbitrary cryptographic primitives when the adversary can obtain more information than what is afforded by the traditional "black box" model. This line of research, known as leakage-resilient cryptography, is motivated by the many so-called "side-channel attacks" that exploit implementation properties rather than the algorithm alone. As a general result, we show how to efficiently compile any algorithm into a leakage-resilient algorithm that computes the same function and is secure even in this stronger model. The security of our construction is derived from new lower bounds for computing iterated group products over the alternating group. Moreover, our construction has the potential to unify previously disjoint lines of work on this problem.

April 23, John Augustine (IIT Madras) : Storage and Search in Dynamic Peer-to-Peer Networks

Peer-to-Peer (P2P) networks are highly dynamic decentralized networks that experience heavy node churn, i.e., nodes join and leave the network continuously over time. We model such P2P systems as synchronous dynamic networks. In each round, an adversary can add and remove a large number of nodes, and also rewire the network subject to some connectivity constraints. We are interested in solving the problem of storing and searching data items despite such high churn rate and network dynamism. In the course of solving this problem, we develop a random walks based sampling technique to sample nodes uniformly at random from the network. While it is well known that random walks are useful for sampling, their application in our context is nontrivial because the churn and network dynamism can potentially bias them or even destroy them. Furthermore, we believe that this sampling technique may prove to be useful in a variety of other applications.
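To illustrate just the sampling idea on a static graph (my own minimal sketch, which ignores the churn and dynamism that are the hard part of the paper): a plain random walk converges to a degree-biased stationary distribution, so a Metropolis acceptance step is the standard way to make it uniform:

```python
import random
from collections import Counter

# Sketch: random-walk sampling made uniform via a Metropolis filter.
# A plain walk has stationary probability proportional to deg(v); accepting
# a proposed move v -> u with probability min(1, deg(v)/deg(u)) corrects
# this to the uniform distribution. Toy static graph, no churn.

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}

def uniform_sample(start, steps, rng):
    v = start
    for _ in range(steps):
        u = rng.choice(graph[v])
        if rng.random() < min(1.0, len(graph[v]) / len(graph[u])):
            v = u
    return v

rng = random.Random(0)
counts = Counter(uniform_sample(0, 200, rng) for _ in range(5000))
print(counts)   # all five nodes appear roughly 1000 times each
```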

More details can be found in our paper "Storage and Search in Dynamic Peer-to-Peer Networks," joint with Anisur Molla, Ehab Morsy, Gopal Pandurangan, Peter Robinson, and Eli Upfal. This paper was presented at SPAA 2013.

April 17, Cristopher Moore (Santa Fe Institute) : Finding Communities in Networks: Message-passing, Phase Transitions, and New Spectral Methods

Detecting communities is an important problem in the study of networks. Recently, we developed scalable Belief Propagation algorithms that update probability distributions of node labels until they reach a fixed point. In addition to being of practical use, these algorithms can be studied analytically, revealing phase transitions in the ability of any algorithm to solve this problem. Specifically, there is a "detectability transition" in the stochastic block model, below which no algorithm can label nodes better than chance, or even tell with high probability whether or not communities exist.

I'll explain this transition, and give an accessible introduction to Belief Propagation and the analogy with "free energy" and the cavity method of statistical physics. We'll see that the consensus of many good solutions is a better labeling than the "best" solution --- something that is true for many real-world optimization problems.

I'll then turn to spectral methods. It is popular to classify nodes according to the first few eigenvectors of the adjacency matrix or the graph Laplacian. However, in the sparse case these operators get confused by localized eigenvectors, focusing on high-degree nodes or dangling trees rather than large-scale communities. As a result, they fail significantly above the detectability transition. I will describe a new spectral algorithm based on the "non-backtracking matrix," which avoids these localized eigenvectors: it is optimal in the sense that it succeeds all the way down to the transition.
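For concreteness, here is the non-backtracking matrix of a small graph (an illustrative construction; the algorithm in the talk then classifies nodes using the leading eigenvectors of this matrix). B is indexed by directed edges, with B[(u,v),(x,w)] = 1 exactly when x = v and w != u, i.e. the second edge continues from v without stepping back to u:

```python
import numpy as np

# Illustrative construction of the non-backtracking matrix B: rows and
# columns are indexed by directed edges ("darts"); an entry is 1 iff the
# column edge continues the row edge without immediate backtracking.

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]        # a triangle plus a pendant edge
darts = edges + [(v, u) for u, v in edges]      # both orientations of each edge
idx = {d: i for i, d in enumerate(darts)}

B = np.zeros((len(darts), len(darts)), dtype=int)
for u, v in darts:
    for x, w in darts:
        if x == v and w != u:
            B[idx[(u, v)], idx[(x, w)]] = 1
```

Each row (u,v) contains deg(v) - 1 ones; note that B, unlike the adjacency matrix, is not symmetric.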

This is joint work with Aurelien Decelle, Florent Krzakala, Elchanan Mossel, Joe Neeman, Allan Sly, Lenka Zdeborova, and Pan Zhang.

April 15, Eric Blais (MIT) : Sublinear-time Algorithms for Massive Datasets

Massive datasets are becoming pervasive in science and in industry. Designing algorithms and computational systems that can deal with these datasets is one of the great challenges of computer science over the coming decades.

One particularly promising tool for meeting this challenge is the rapidly developing field of sublinear-time algorithms--algorithms whose running times are asymptotically smaller than the size of their inputs. The construction of such algorithms, however, poses a unique challenge: to run in sublinear time, an algorithm must only inspect a tiny fraction of its input data before producing its output.

In this talk, we will introduce new sublinear-time algorithms for fundamental problems on massive datasets, we will present new techniques for establishing the limits of such algorithms, and we will explore connections between this field of research and other areas of computer science.

March 27, Jonathan Ullman (Harvard University) : Privacy and the Complexity of Simple Queries

The goal of differentially private data analysis is to design algorithms for analyzing datasets while ensuring that sensitive information about individuals is not revealed. In this talk I will present both new lower bounds and new algorithms for differentially private data analysis. On the negative side, I will present some new, nearly-optimal lower bounds on the amount of data required to release differentially private statistics on high-dimensional datasets. These results show that there is a significant "price of differential privacy" in high-dimensional datasets. We prove these lower bounds using a cryptographic primitive called a fingerprinting code that we show is closely connected to differentially private data analysis. On the positive side, I will present efficient algorithms for computing differentially private contingency tables, using techniques from computational learning theory.

March 24, Lap Chi Lau (The Chinese University of Hong Kong) : Algebraic Techniques in Fundamental Graph Problems

Graph partitioning, graph connectivity, and network design are fundamental graph problems with applications in various areas of computer science and engineering. Designing good algorithms for these problems has received considerable attention in theoretical computer science in the past decades. In this talk, I will show that some simple algebraic ideas are surprisingly powerful in studying these combinatorial problems, from new analyses of existing algorithms to the design of faster algorithms and better approximation algorithms.

- In practice, the spectral method is one of the most popular heuristics for graph partitioning with good empirical performance in applications. In theory, however, its worst case performance is poor, and it has been an open question to explain this discrepancy. We answer this question by giving a new analysis of the spectral method using higher eigenvalues of graphs.

- Network coding is a novel method for information transmission in computer networks. We show that this method can be used to design faster algorithms for computing graph connectivity. Interestingly, the ideas can also be used to design faster algorithms for computing matrix rank, showing some interactions between ideas in graph theory and linear algebra.

- We present a general iterative method to design near-optimal approximation algorithms for many network design problems and beyond.

The key idea is to use linear independence to analyze the extreme point solutions of the linear programming relaxations.

### Fall 2013 Talk Abstracts

December 11, Edmund Yeh (Northeastern University) : Polar Codes for Multiple Access Channels

Achieving the fundamental capacity limits of communication channels with low complexity coding schemes has been a major challenge for over 60 years. Recently, a revolutionary coding construction, called Polar coding, has been shown to provably achieve the "symmetric capacity" of binary-input, memoryless single-user channels. The underlying principle of the technique is to convert repeated uses of a given single-user channel to single uses of a set of extremal channels, whereby almost every channel in the set is either almost perfect, or almost useless. The latter phenomenon is referred to as polarization.

Whereas a number of practical coding constructions (e.g. Turbo codes and Low Density Parity Check codes) can empirically approach the capacity of single-user communication channels, there is still a lack of good practical coding schemes for multi-user communication channels. In this talk, we extend the polar coding method to two-user multiple-access communication channels. We have shown that if the two users use the channel combining and splitting construction, the resulting multiple-access channels will polarize to one of five possible extremals, on each of which uncoded transmission is optimal. Our coding technique can achieve some of the optimal transmission rate pairs obtained with uniformly distributed inputs. The encoding and decoding complexity of the code is O(n log n) with n being the block length, and the block error probability is roughly O(2^{-\sqrt{n}}). Our construction is one of the first low-complexity coding schemes which have been proved to achieve capacity in multi-user communication networks.
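For the single-user binary erasure channel BEC(p), the combining/splitting step has a particularly clean form: one use splits into a worse channel BEC(2p - p^2) and a better channel BEC(p^2). This makes polarization easy to see numerically (a toy single-user illustration, not the multiple-access construction of the talk):

```python
# Toy illustration of polarization for the binary erasure channel: iterating
# the split p -> (2p - p^2, p^2) drives almost every synthesized channel's
# erasure probability toward 0 or 1.

def polarize(p, levels):
    chans = [p]
    for _ in range(levels):
        chans = [q for e in chans for q in (2 * e - e * e, e * e)]
    return chans

chans = polarize(0.5, 14)                  # 2^14 synthesized channels from BEC(0.5)
extreme = sum(1 for q in chans if q < 0.01 or q > 0.99)
print(extreme / len(chans))                # most channels are near-perfect or near-useless
```

The average erasure probability is preserved exactly at every level, since (2p - p^2 + p^2)/2 = p; that is why the fraction of near-perfect channels approaches the symmetric capacity 1 - p.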

Joint work with Eren Sasoglu (UC Berkeley) and Emre Telatar (EPFL).

December 4, Karthikeyan Chandrasekaran (Harvard University) : A Polynomial-Time Cutting Plane Algorithm for Matching

The cutting plane approach to optimal matchings has been discussed by several authors over the past decades [Padberg-Rao, Grotschel-Holland, Lovasz-Plummer, Trick, Fischetti-Lodi], and its convergence has been an open question. We give a cutting plane algorithm for the minimum-cost perfect matching problem using Edmonds' blossom inequalities as cuts and prove polynomial convergence of this algorithm.

Our main insight is an LP-based method to retain/drop candidate cutting planes. Our cut-addition is based on maintaining laminarity. This leads to a sequence of intermediate linear programs with a linear number of constraints whose optima are half-integral and supported by a disjoint union of odd cycles and edges. This structural property of the optima is instrumental in finding violated blossom inequalities (cuts) in linear time. The number of cycles in the support of the half-integral optima acts as a potential function to show efficient convergence to an integral solution.
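For context, the LP being strengthened is the standard relaxation of minimum-cost perfect matching, with Edmonds' blossom inequalities over odd sets serving as the cuts (textbook formulation, not notation from the talk):

```latex
\min \sum_{e \in E} c_e x_e
\quad \text{s.t.} \quad
\sum_{e \in \delta(v)} x_e = 1 \;\; \forall v \in V, \qquad
\sum_{e \in \delta(S)} x_e \ge 1 \;\; \forall S \subseteq V,\ |S| \text{ odd},\ |S| \ge 3, \qquad
x \ge 0.
```

Without the blossom constraints, basic optimal solutions are half-integral and supported on disjoint odd cycles and edges, which is exactly the structure the algorithm exploits.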

Joint work with Laszlo Vegh and Santosh Vempala.

November 20, Mahdi Cheraghchi (MIT) : Non-Malleable Coding Against Bit-wise and Split-State Tampering

Non-malleable coding, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010), aims for protecting the integrity of information against tampering attacks in situations where error-detection is impossible. Intuitively, information encoded by a non-malleable code either decodes to the original message or, in presence of any tampering, to an unrelated message. Non-malleable coding is possible against any class of adversaries of bounded size. In particular, Dziembowski et al. show that such codes exist and may achieve positive rates for any class of tampering functions of size at most exp(2^(c n)), for any constant c < 1. However, this result is existential and has thus attracted a great deal of subsequent research on explicit constructions of non-malleable codes against natural classes of adversaries.

In this work, we consider constructions of coding schemes against two well-studied classes of tampering functions; namely, bit-wise tampering functions (where the adversary tampers each bit of the encoding independently) and the much more general class of split-state adversaries (where two independent adversaries arbitrarily tamper each half of the encoded sequence). We obtain the following results for these models.

1. For bit-tampering adversaries, we obtain explicit and efficiently encodable and decodable non-malleable codes of length n achieving rate 1-o(1) and negligible error (also known as "exact security"). Alternatively, it is possible to improve the error to nearly exponentially small at the cost of making the construction Monte Carlo with exponentially small failure probability (while still allowing a compact description of the code). Previously, the best known construction of bit-tampering coding schemes was due to Dziembowski et al. (ICS 2010), which is a Monte Carlo construction achieving rate close to 0.1887.

2. We initiate the study of seedless non-malleable extractors as a natural variation of the notion of non-malleable extractors introduced by Dodis and Wichs (STOC 2009). We show that construction of non-malleable codes for the split-state model reduces to construction of non-malleable two-source extractors. We prove a general result on existence of seedless non-malleable extractors, which implies that codes obtained from our reduction can achieve rates arbitrarily close to 1/5 and exponentially small error. In a separate recent work, the authors show that the optimal rate in this model is 1/2. Currently, the best known explicit construction of split-state coding schemes is due to Aggarwal, Dodis and Lovett (ECCC TR13-081) which only achieves vanishing (polynomially small) rate.

Based on a joint work with Venkatesan Guruswami (arXiv:1309.1151).

October 30, Chen Avin (Ben Gurion University): From Caesar to Twitter - An Axiomatic Approach to Elites of Social Networks

In many societies there is an elite, a relatively small group of powerful individuals that is well connected and highly influential. From the ancient days of Julius Caesar’s senate to the recent days of celebrities on Twitter, the size of the elite is a result of conflicting social forces competing to increase or decrease it.

The main contribution of this work is an answer to the novel question of the size of the elite in equilibrium. We take an axiomatic approach: assuming that an elite exists and that it is influential, stable, and either minimal or dense, we prove that its size must be Θ(√m) (where m is the number of edges in the network).

As an approximation for the elite we then present an empirical study of the sub-graph formed by the highest-degree nodes, also known as the rich-club. Our empirical findings indicate that elite properties such as a size of Θ(√m), disproportionate influence, stability and density are universal, and should join an increasing list of common phenomena that complex systems share, such as the “small world” property, power-law degree distributions, high clustering, etc.
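The rich-club proxy is straightforward to compute (a toy sketch on an illustrative graph, not the paper's datasets): take the k ≈ √m highest-degree nodes and inspect the subgraph they induce:

```python
import math

# Toy sketch of the rich-club proxy for the elite: the subgraph induced by
# the k highest-degree nodes with k ~ sqrt(m). The graph is illustrative.

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),   # a dense core
         (0, 4), (1, 5), (2, 6), (3, 7), (4, 5), (6, 7)]   # a sparse periphery
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

k = math.isqrt(len(edges))                         # elite size ~ sqrt(m) = 3
elite = sorted(deg, key=deg.get, reverse=True)[:k]
internal = [(u, v) for u, v in edges if u in elite and v in elite]
density = len(internal) / math.comb(k, 2)          # fraction of possible elite edges
print(density)   # -> 1.0: here the top-degree nodes form a clique
```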

Joint work with Zvi Lotker, Yvonne-Anne Pignolet and Itzik Turkel.

October 23, Chin Ho Lee (Northeastern University): Limits of provable security for homomorphic encryption

We show that public-key bit encryption schemes which support weak (i.e., compact) homomorphic evaluation of any sufficiently "sensitive" collection of functions cannot be proved message indistinguishable beyond AM intersect coAM via general (adaptive) reductions, and beyond statistical zero-knowledge via reductions of constant query complexity. Examples of sensitive collections include parities, majorities, and the class consisting of all AND and OR functions.

Our techniques also give a method for converting a strong (i.e., distribution-preserving) homomorphic evaluator for essentially any boolean function (except the trivial ones, the NOT function, and the AND and OR functions) into a rerandomization algorithm: This is a procedure that converts a ciphertext into another ciphertext which is statistically close to being independent and identically distributed with the original one. Our transformation preserves negligible statistical error.

This is a joint work with Andrej Bogdanov.

October 16, Abhinav Kumar (MIT): Inverse problems in potential energy minimization, and algorithms for self-assembly

The question of energy minimization is widely studied in physics and mathematics: to find a configuration of points which minimizes a given form of potential energy. I will describe some progress on the inverse problem of constructing a potential function which has a given target configuration as its (provable) global minimum.

For simplicity, consider a finite collection of points on a sphere in n dimensions, though some of the techniques apply in broader generality. I will first describe a necessary and sufficient condition for the inverse question to have a solution, and then an algorithm which attempts to construct a solution as a linear combination of a finite set of specified potential functions.

Finally, I will illustrate the technique in the case of several symmetrical examples, such as the cube, the dodecahedron and the 120-cell. In these cases one can in fact display solutions which are decreasing and convex as a function of distance, and use linear programming bounds and spherical design properties to give simple proofs of global minimality.

This is joint work with Henry Cohn from Microsoft Research.

October 9, Pavel Hubacek (Aarhus University): Limits on the Power of Cryptographic Cheap Talk

We revisit the question of whether cryptographic protocols can replace correlated equilibria mediators in two-player strategic games. This problem was first addressed by Dodis, Halevi and Rabin (CRYPTO 2000), who suggested replacing the mediator with a secure protocol and proved that their solution is stable in the Nash equilibrium (NE) sense, provided that the players are computationally bounded.

We show that there exist two-player games for which no cryptographic protocol can implement the mediator in a sequentially rational way; that is, without introducing empty threats. This explains why all solutions so far were either sequentially unstable, or were restricted to a limited class of correlated equilibria (specifically, those that do not dominate any NE, and hence playing them does not offer a clear advantage over playing any NE).

In the context of computational NE, we classify necessary and sufficient cryptographic assumptions for implementing a mediator that allows the players to achieve a given utility profile of a correlated equilibrium. The picture that emerges is somewhat different from the one arising in semi-honest secure two-party computation. Specifically, while in the latter case every functionality is either "complete" (i.e., implies Oblivious Transfer) or "trivial" (i.e., can be securely computed unconditionally), in the former there exist some "intermediate" utility profiles whose implementation is equivalent to the existence of one-way functions.

This is a joint work with Jesper Buus Nielsen and Alon Rosen.

September 17, Seny Kamara (Microsoft Research): How to Search on Encrypted Data

The problem of searching over encrypted data arises often and, most notably, in the design of secure database systems, file systems, cloud storage systems and in the design of cryptographic protocols. Many solutions to this problem have been proposed in the past, including searchable encryption, deterministic encryption, order preserving encryption, functional encryption, oblivious RAMs, secure two-party computation and fully-homomorphic encryption.

In this talk, I will survey the different solutions to the encrypted search problem and discuss their various strengths and limitations, paying particularly close attention to the tradeoffs made between security, efficiency and functionality. I will then present recent advances in the area of searchable encryption and give an overview of the state-of-the art constructions and implementations.

### Spring 2013 Talk Abstracts

May 29, Eli Ben-Sasson (Technion): Universal and Affordable Computational Integrity, or, Succinctly, from C to PCP

Public key cryptography, invented in the 1970s, revolutionized the world of computation by displaying a tool-box that builds trust in authenticity and integrity of data, even when it is transmitted from unknown computers and relayed via untrusted and possibly corrupt intermediaries. A celebrated line of theoretical works dating back to the 1980s envisions a similar revolution, offering a level of trust in the integrity of arbitrary computations even when executed on untrusted and possibly malicious computers. A particularly surprising aspect of these results is succinctness which means that the running-time needed to verify a computation is only as costly as reading the program that describes it, even when this program is exponentially shorter in length than the computation’s running time!

Common belief has been that it is impractical to build a truly succinct computational-integrity protocol. We challenge this conception by describing the first full-scale implementation of it. Our system compiles programs in standard (ANSI) C into succinctly-verifiable programs, with asymptotic parameters that match the best-possible bounds predicted by theory. Joint work with Alessandro Chiesa, Daniel Genkin, Eran Tromer and Madars Virza.

May 8, Yevgeniy Dodis (New York University): Key Derivation Without Entropy Waste

We revisit the classical question of converting an imperfect source X of min-entropy k into a usable m-bit cryptographic key for some underlying application P. If P has security delta (against some class of attackers) with a uniformly random m-bit key, we seek to design a key derivation function (KDF) h that allows us to use R=h(X) as the key for P and results in comparable security delta' close to delta.

Seeded randomness extractors provide a generic way to solve this problem provided that k > m + 2*log(1/delta), and this lower bound on k (called the "RT-bound") is known to be tight in general. Unfortunately, in many situations the "waste" of 2*log(1/delta) bits of entropy is significant, motivating the question of designing KDFs with less waste for important special classes of sources X or applications P. I will discuss several positive and negative results in this regard. The most surprising of them will be a positive result for all unpredictability applications P, yielding a provably secure KDF with entropy "waste" of only loglog(1/delta) - an exponential improvement over the RT-bound.
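As a back-of-the-envelope comparison of the two regimes (my example numbers, with delta = 2^-80 as an illustrative security level):

```python
import math

# Compare the entropy "waste" of the two bounds above for delta = 2^-80:
# generic extractors lose 2*log2(1/delta) bits (the RT-bound), while the
# talk's KDF for unpredictability applications loses only ~ loglog(1/delta).

delta = 2.0 ** -80
rt_waste = 2 * math.log2(1 / delta)               # 160 bits of entropy lost
unpred_waste = math.log2(math.log2(1 / delta))    # about 6.3 bits
print(rt_waste, unpred_waste)
```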

April 25, Eric Miles (Northeastern University): Foundations of cryptography: Towards bridging theory and implementation

It has been widely observed that there is a significant gap between the way that many cryptographic primitives are constructed and analyzed by the theory/foundations community, and the way that they are implemented, used, and attacked in practice. In this work we study from two different perspectives the complexity of constructing cryptographic primitives, with an eye towards bridging this gap.

We first study the complexity of constructing pseudorandom functions using a certain paradigm known as the substitution-permutation network (SPN). Informally, a pseudorandom function family (PRF) is a small set of functions F such that a function chosen at random from F is indistinguishable from a truly random function by a computationally-bounded adversary. The SPN paradigm is widely used in practical cryptographic constructions, such as the Advanced Encryption Standard [Daemen & Rijmen 2000], but has not previously been used to construct candidate PRFs. We construct several new candidate PRFs inspired by the SPN paradigm, and show that they are computable more efficiently than previous candidates in a variety of computational models.

We then study the complexity of constructing arbitrary cryptographic primitives in an attack model in which the adversary obtains more information than just the input/output behavior afforded by oracle access. This line of research is motivated by the many so-called side-channel attacks that exploit implementation properties, i.e. properties of the hardware on which an algorithm is run, rather than the algorithm alone. As a general result, we show how to efficiently compile any given circuit C into a so-called "leakage resistant" circuit C' that computes the same function and has the additional property that any function on the wires of C' that leaks information during a computation C'(x) yields advantage in computing products over the alternating group A_u. In combination with new compression bounds for A_u products which we also obtain, C' withstands leakage from virtually any class of functions against which average-case lower bounds are known.

April 9, Daniel Wichs (Northeastern University): Cryptography with imperfect secrets

Most cryptographic primitives require secret keys, and we usually assume that these can be generated uniformly at random and kept perfectly secret from an attacker. Nevertheless, there are many scenarios where we need to rely on "imperfect" secret keys for cryptography. This includes passwords and biometrics, which are not uniformly random, but come from some unspecified distribution about which we know very little. It also includes scenarios where partial information about the secret key can leak to an attacker, for example, through various "side-channel attacks". Such attacks are being used to break real-world implementations of current cryptosystems, and are the major source of difficulty when implementing cryptography in practice. My talk will explore new methods for constructing cryptosystems that remain provably secure without requiring the perfect secrecy of their secret keys.

March 1, Yevgeniy Dodis (New York University): Overcoming Weak Expectations

Recently, there has been renewed interest in basing cryptographic primitives on weak secrets, where the only information about the secret is some non-trivial amount of (min-)entropy. From a formal point of view, such results require upper-bounding the expectation of some function f(X), where X is the weak source in question. We show an elementary inequality which essentially upper bounds such a "weak expectation" by two terms, the first of which is *independent* of f, while the second depends only on the variance of f under the *uniform* distribution. Quite remarkably, as relatively simple corollaries of this elementary inequality we obtain some "unexpected" results, in several cases noticeably simplifying or improving prior techniques for the same problem. Examples include non-malleable extractors, leakage-resilient symmetric encryption, seed-dependent condensers, improved entropy loss for the leftover hash lemma, and an alternative to the dense model theorem.

Feb 4, Eric Miles (Northeastern University): Shielding circuits with groups

Traditionally, cryptography models an adversary as having only input/output access to a given algorithm. A recent line of work known as leakage-resistant cryptography additionally gives the adversary the output of a computationally limited leakage function applied to the algorithm's internal state (e.g. to the wires of a circuit computing the function). A general goal in this area is to compile any circuit into a new "shielded" circuit that remains secure under these attacks. In this work we give a new such compiler, producing shielded circuits that withstand leakage from virtually any class of functions against which average-case lower bounds are known, recovering and extending previous results. We also conjecture that it withstands NC^1-leakage if NC^1 is not equal to L.
We build on previous constructions by Ishai et al. [Crypto ’03] and Faust et al. [Eurocrypt ’10]. We use and extend the relationship between group theory and computation first established by Barrington [STOC '86]. In particular we exploit properties of the alternating group beyond what is sufficient for Barrington's theorem.
This is joint work with Emanuele Viola.

### Fall 2012 Talk Abstracts

Oct 1, Eric Miles (Northeastern University): Substitution-permutation networks, pseudorandom functions, and natural proofs

There currently exists a troubling gap between constructions of pseudorandom functions (PRFs) and those of their popular, bounded-input-length counterparts, including block ciphers and hash functions. This gap is both quantitative, because these counterparts are more efficient than PRFs in various ways, and methodological, because these counterparts are often constructed via the substitution-permutation network (SPN) paradigm, which has not been used to construct PRFs. We take a step towards closing this gap by giving new candidate PRFs inspired by the SPN paradigm. This paradigm involves a "substitution function" (S-box). Our main candidates are:
1. An SPN whose S-box is a random function on b bits, given as part of the seed. We prove unconditionally that this candidate resists attacks that run in time at most 2^(Omega(b)). Setting b = omega(log n) we obtain an inefficient PRF, which seems to be the first such construction using the SPN paradigm.
2. An SPN whose S-box is field inversion, a common choice in practical constructions. This candidate is computable with Boolean circuits of size n * polylog(n), and we prove that it has exponential security against linear and differential cryptanalysis.
3. A candidate using an extreme setting of the SPN parameters (one round, one S-box), where the S-box is again field inversion. This candidate is also computable by circuits of size n * polylog(n), and we prove that it has exponential security against parity tests that look at 2^(0.9n) outputs.
Assuming the security of our candidates, our work narrows the gap between the "Natural Proofs barrier" of Razborov and Rudich and existing lower bounds, in three models: unbounded-depth circuits, TC0 circuits, and Turing machines.
This is joint work with Emanuele Viola.
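As a toy illustration of the SPN paradigm (not any specific candidate from the talk; the block sizes, key schedule, and round count below are placeholders), one round alternates key mixing, S-box substitution, and a permutation of the state words:

```python
import random

def make_sbox(b, rng):
    """A uniformly random S-box (a permutation on b-bit words) as a lookup table."""
    table = list(range(1 << b))
    rng.shuffle(table)
    return table

def spn(x_words, sbox, round_keys, perm):
    """Toy SPN round structure: key mixing, substitution, permutation.

    x_words: list of b-bit integers; perm: a fixed permutation of word positions.
    """
    state = list(x_words)
    for key in round_keys:
        state = [w ^ k for w, k in zip(state, key)]          # key mixing
        state = [sbox[w] for w in state]                     # substitution (S-box)
        state = [state[perm[i]] for i in range(len(state))]  # permutation
    return state

rng = random.Random(0)
b, words, rounds = 4, 8, 5
sbox = make_sbox(b, rng)
keys = [[rng.randrange(1 << b) for _ in range(words)] for _ in range(rounds)]
perm = list(range(words))
rng.shuffle(perm)
out = spn(list(range(words)), sbox, keys, perm)  # 8 output words of 4 bits each
```

In candidate 1 above, the random S-box table itself is part of the secret seed; practical ciphers such as AES instead fix a public S-box based on field inversion, as in candidates 2 and 3.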

### Spring 2012 Talk Abstracts

Mar 19, Thomas Vidick (MIT): Noncommutative Grothendieck inequalities and application to quantum two-player games

Grothendieck's celebrated inequality is a fundamental result in Banach space theory that has recently found surprising applications in different areas, ranging from theoretical computer science (e.g. work of Alon and Naor showing how it can be used to derive a constant-factor approximation algorithm for the cut-norm) to quantum information theory (e.g. work of Tsirelson using it to derive upper bounds on the maximal violation of certain Bell inequalities by quantum mechanics).
In this talk I will briefly survey these results, before discussing a generalization of Grothendieck's inequality to non-commutative variables due to Pisier and Haagerup. I will motivate this inequality by showing that, just like its commutative counterpart, it leads to an efficient constant-factor approximation algorithm for a certain hard optimization problem. I will also describe a new application of the non-commutative Grothendieck's inequality to the computational complexity of quantum two-player games.
Based on joint work with Oded Regev.

Mar 13, Eric Miles (Northeastern University): Leakage-resistant cryptography

The standard framework in cryptography generally assumes that an adversary A attacks a cryptographic device D by having access *only* to D's input/output behavior. For concreteness, one can think of D as performing encryptions and decryptions under some unknown secret key K; then A is allowed to obtain encryptions and decryptions under K of messages of its choice, but may not ask for things like the Hamming weight of K, or the parity of K, etc. However, there have been successful real-world attacks that deviate from this framework, such as the much-publicized timing attack on RSA (Kocher, 1996). In the last 10 years, researchers have begun formally addressing the problem of constructing devices D that are secure even against adversaries that may obtain some additional "leaked" information about D's internal computation. In this talk I will go over the way(s) in which this problem is formalized, and present two constructions (one classic, one very new) that achieve varying degrees of leakage-resistance. The talk should be accessible even with little to no background in cryptography.

Feb 28, Adam Smith (Pennsylvania State University): Pinning Down "Privacy" in Statistical Databases

Consider an agency holding a large database of sensitive personal information -- medical records, census survey answers, web search records, or genetic data, for example. The agency would like to discover and publicly release global characteristics of the data (say, to inform policy and business decisions) while protecting the privacy of individuals' records. This problem is known variously as "statistical disclosure control", "privacy-preserving data mining" or simply "database privacy".
In this talk, I will describe "differential privacy", a notion which emerged from a recent line of work in theoretical computer science that seeks to formulate and satisfy rigorous definitions of privacy for such statistical databases. Satisfactory definitions had previously proved elusive largely because of the difficulty of reasoning about "side information" -- knowledge available to an attacker through other channels. Differential privacy provides a meaningful notion of privacy in the presence of arbitrary side information. After explaining some attacks that motivate our approach, I will sketch some of the basic techniques for achieving differential privacy as well as recent results on differentially private statistical analysis and learning.

Feb 10, Chinmoy Dutta (Northeastern University): Strong Partitions and Universal Steiner Trees for Graphs

Given a graph G and a root node r, the Universal Steiner Tree (UST) problem seeks a single spanning tree $T$ of minimum stretch, where the stretch of T is defined as the maximum, over all subsets of terminals X, of the ratio between the cost of the sub-tree $T_X$ of T connecting X to r and the cost of an optimal Steiner tree connecting X to r. USTs are important for data aggregation problems where computing a Steiner tree from scratch for every input instance of terminals is costly, as for example in low-energy sensor network applications.
We provide polynomial-time UST constructions with $2^{O(\sqrt{\log n})}$-stretch for general graphs, and polylogarithmic stretch for minor-free graphs. Our constructions are based on a notion of strong hierarchical graph partitions.
I will outline our construction and show the connections of this problem with graph partitions. This is based on joint work with Costas Busch, Jaikumar Radhakrishnan, Rajmohan Rajaraman and Srivathsan Srinivasagopalan.

Feb 8, Nishanth Chandran (Microsoft Research, Redmond): Cryptographic protocols in the era of cloud computing

With the advent of cloud computing, our view of cryptographic protocols has changed dramatically. In this talk, I will give an overview of some of the newer challenges that we face in cloud cryptography and outline some of the techniques used to solve these problems. In particular, a few questions that I will address are:
1) How can we store sensitive data in the cloud, in an encrypted manner, and yet allow controlled access to certain portions of this data?
2) How can we ensure reliability of data across cloud servers that may be connected by only a low-degree communication network, even when some of the servers may become corrupted?
3) How can users authenticate themselves to the cloud in a user-friendly way?
This talk will assume no prior knowledge of cryptography and is based on works that appear at TCC 2012, ICALP 2010 and STOC 2010.

Jan 30, Daniel Wichs (IBM Research, T.J. Watson Center): Maintaining Security Under Imperfect Secrecy

Cryptography requires secret keys, and we usually assume that these can be generated uniformly at random and kept perfectly secret from an attacker. Nevertheless, there are many scenarios where we need to rely on "imperfect" secret keys for cryptography. This includes passwords and biometrics, which are not uniformly random, but come from some unspecified distribution about which we know very little. It also includes scenarios where partial information about the secret key can leak to an attacker, for example, through various "side-channel attacks". Such attacks are being used to break real-world implementations of current cryptosystems, and are the major source of difficulty when implementing cryptography in practice. My talk will explore new methods for constructing cryptosystems that remain provably secure without requiring the perfect secrecy of their secret keys.

Jan 23, Scott Roche (Northeastern University): The spread of diseases on networks

Consider the following model. An infectious disease (or some other item) is to be transmitted on a graph G(V, E), starting from a single vertex. Once a vertex is infected, it stays infected for some time t. An uninfected vertex can be infected by each of its infected neighbors with rate B. Once a vertex is no longer infected, it can be infected again. It is known that the epidemic will die out in finite time, and that the value of the ratio B/t determines many of the properties of the epidemic, such as whether it dies out quickly or in exponential time, as well as the fraction of vertices infected at any given time. However, to date there have been no rigorous results for the following questions: 1) How long will it take for the epidemic to infect a constant fraction, ε|V|, of the vertices? 2) Assuming the epidemic persists in a meta-stable state, will every node be infected at some point, and how long will this take? We introduce a related process, in which an infected vertex infects anywhere from 1 to k vertices by sampling from its neighbors without replacement, and show cover time results for this process. We then attempt to extend the results for the related process to the original SIS epidemic model.
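A minimal discrete-time caricature of the SIS dynamics described above (not from the talk; it collapses the infection duration t to a single step and turns the rate B into a per-edge probability beta) can be simulated as follows:

```python
import random

def sis_step(graph, infected, beta, rng):
    """One step: each infected vertex infects each susceptible neighbor
    independently with probability beta; all currently infected vertices
    then recover (and may be reinfected later)."""
    newly = set()
    for v in infected:
        for u in graph[v]:
            if u not in infected and rng.random() < beta:
                newly.add(u)
    return newly

rng = random.Random(1)
n = 20
cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # cycle graph C_n
infected = {0}
history = []  # number of infected vertices over time
for _ in range(50):
    history.append(len(infected))
    infected = sis_step(cycle, infected, beta=0.9, rng=rng)
    if not infected:  # the epidemic has died out
        break
```

Varying beta in this toy model already exhibits the qualitative threshold behavior the abstract mentions: small beta makes the infection die out quickly, large beta keeps it alive for a long time.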

Jan 17, Dov Gordon (Columbia University): Secure Computation: From Theory Towards Practice

In 1982, Yao introduced the field of "secure computation", in which n parties, holding private inputs x_1,...,x_n, engage in a protocol to compute some function f(x_1,...,x_n), while revealing nothing more than the output. In the decade that followed, the topic of secure computation was thoroughly explored, and almost every theoretical question was answered. In the past decade, the focus has shifted towards improving efficiency, and building implementations. Today, secure computation is poised to become an extremely powerful tool with wide-ranging application. Both bodies of research were essential in bringing this about.
In this talk, we review several recent results. We first provide insight into one of the few remaining theoretical questions in secure computation. We then demonstrate improved efficiency for secure computation in several particular settings of interest. On the theoretical side, we discuss the problem of "fairness" in secure computation, which is a security property guaranteeing simultaneous output delivery to both parties. Until recently, this was widely believed to be impossible to achieve; we demonstrate that some interesting functions can in fact be computed with complete fairness. In the second half of the talk, we will focus on several settings that arise in more modern, real-world environments, and show how we can tailor the theoretical results to greatly improve efficiency. In presenting this selection of research, we hope to demonstrate the importance of both foundational and applied cryptographic theory.

### Fall 2011 Talk Abstracts

Dec 15, Chinmoy Dutta (Northeastern University): More on a problem of Zarankiewicz

We show tight necessary and sufficient conditions on the sizes of small bipartite graphs whose union is a larger bipartite graph that has no large bipartite independent set. Our main result is a common generalization of two classical results in graph theory: the theorem of Kővári, Sós, and Turán on the minimum number of edges in a bipartite graph that has no large independent set, and the theorem of Hansel (also Katona and Szemerédi, and Krichevskii) on the sum of the sizes of bipartite graphs that can be used to construct a graph (not necessarily bipartite) that has no large independent set. Joint work with Jaikumar Radhakrishnan (Tata Institute of Fundamental Research, Mumbai).

Dec 1, Eric Miles (Northeastern University): Pseudorandom generators, cryptography, and P vs. NP

A pseudorandom generator is an algorithm that outputs a large amount of "random-looking" data, while taking as input only a small amount of truly random data. In this talk I will explain how this idea is formalized, and highlight some applications of PRGs. Specifically, I will mention one way in which PRGs are used in cryptography, and discuss the implications that PRGs have for resolving the P vs. NP question. Along the way I'll touch on some of my past and ongoing research in this area (joint with Emanuele Viola).

Nov 30, Rajmohan Rajaraman (Northeastern University): Information Spreading in Dynamic Networks: On the Power of Forwarding Algorithms

We consider information spreading (also known as gossip) in dynamic networks. In the k-gossip problem, there are k tokens that are initially present in some nodes of a network and the goal is to disseminate the k tokens to all nodes in as few rounds of distributed computation as possible. The problem is especially challenging in dynamic networks where the network topology can change from round to round and can be controlled by an adaptive adversary.
The focus of this talk is on the power of token-forwarding algorithms, which do not manipulate tokens in any way other than storing, copying, and forwarding them. We first consider a worst-case adversarial model introduced by Kuhn, Lynch, and Oshman in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. Our main result is an Omega(nk/log(n)) lower bound on the number of rounds needed for any token-forwarding algorithm to solve k-gossip. This resolves an open problem posed by Kuhn-Lynch-Oshman and, taken together with a recent work of Haeupler and Karger, establishes a nearly linear-factor worst-case gap between token-forwarding algorithms and those based on network coding.
We next show that the above "quadratic" barrier can be overcome in an offline version of the problem, where the adversary has to commit all the topology changes in advance at the beginning of the computation. We show that token-forwarding algorithms can achieve O(min{nk, n*sqrt(k log n)}) time in the offline model. Time permitting, we will also present a bicriteria approximation algorithm for the offline problem.
Joint work with Chinmoy Dutta (Northeastern), Gopal Pandurangan (Nanyang Technological University, Singapore), and Zhifeng Sun (Northeastern).

Nov 16, Emanuele Viola (Northeastern University): The communication complexity of addition

Each of three players receives an integer of absolute value at most 2^n. The players want to know whether the sum of their numbers is greater than 0. We prove that the minimal amount of randomized communication needed is Theta(log n). Both the upper and the lower bounds were open for more than 15 years.

Nov 8, Abhinav Kumar (MIT): Inverse energy minimization problems

The inverse optimization problem for potential energy of spherical codes asks: given a spherical code C, does there exist a radial pair potential function f for which C minimizes f-potential energy? I will describe necessary and sufficient conditions, a heuristic algorithm to find the potential function in a fixed space of functions, and also give examples of some familiar configurations for which we can find natural potentials which are provably minimized at these codes. This is joint work with Henry Cohn.

Nov 8, Abhinav Kumar (MIT): Potential energy minimization and universal optimality

Yudin applied linear programming bounds to the question of energy minimization for spherical codes. I will describe an application which proves the existence of some universally optimal spherical codes (which minimize a large class of potential energy functions). The proof involves Hermite interpolation, as well as spherical design properties of these codes. I'll also illustrate a generalization of Yudin's theorem to Euclidean space and some natural conjectures. This is joint work with Henry Cohn.

Nov 8, Abhinav Kumar (MIT): Linear programming bounds in geometry and coding theory

How many points can be placed on a sphere in n dimensions, which are all separated by at least some given angular distance? This classical problem in discrete geometry is quite hard in general, although it has some remarkable answers for some special values of the dimension and angular distance. I'll talk about one of the best techniques for upper bounds, namely linear programming bounds developed by Delsarte, Levenshtein and others. These exploit techniques from representation theory/harmonic analysis. In particular, for some of the these remarkable configurations, the upper bounds are sharp, which enable one to prove uniqueness, not just optimality. I'll also indicate how linear programming bounds are used in coding theory and in the sphere packing problem, for instance, in the proof that the Leech lattice is the densest lattice packing in 24 dimensions. This talk will be fairly accessible, assuming minimal background.

### Spring 2011 Talk Abstracts

May 2, David Karger (MIT): Random Sampling and Cuts in Graphs: Three Combinatorial Proofs

I'll synthesize work showing that randomly sampling the edges of an undirected graph preserves cut values. I'll give three distinct proofs, based on random contraction, tree packing, and network reliability, that the values of cuts are tightly concentrated around their expectations when those expectations are sufficiently large. I'll give a combinatorial construction for sparsifying any n-vertex undirected graph down to O(n log n) edges with only small perturbations in cut value and show how it can be used in fast algorithms for approximate and exact maximum flow computations. While one can achieve slightly better sparsification using algebraic techniques, I believe these combinatorial methods offer useful insights and may yield more in the future. The talk is self-contained, requiring only elementary knowledge of combinatorics and probability.
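The core sampling idea can be illustrated in a few lines: keep each edge independently with probability p and reweight the survivors by 1/p, so every cut's expected value is unchanged; the concentration results then say each cut stays close to that expectation. A toy check on the complete graph (parameters chosen for illustration only):

```python
import itertools
import random

def cut_value(edges, side):
    """Total weight of edges crossing the bipartition (side vs. the rest)."""
    return sum(w for (u, v), w in edges.items() if (u in side) != (v in side))

rng = random.Random(2)
n, p = 40, 0.5
edges = {(u, v): 1.0 for u, v in itertools.combinations(range(n), 2)}  # K_n
# Keep each edge independently with probability p, reweighted by 1/p,
# so the expected value of every cut is preserved.
sampled = {e: w / p for e, w in edges.items() if rng.random() < p}
side = set(rng.sample(range(n), n // 2))     # a random bipartition
orig = cut_value(edges, side)                # exactly 20 * 20 = 400 in K_40
approx = cut_value(sampled, side)            # concentrated around 400
```

Note this only spot-checks one cut; the theorems in the talk control all exponentially many cuts simultaneously, which is the hard part.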

Apr 29, Boaz Patt-Shamir (Tel Aviv University): Algorithmic Recommender Systems

Recommender systems help users identify objects they may find interesting, where objects may be books to read, films to watch, web pages to browse, and even other users to contact. Formally, the input to the system is the known preferences of the users, as deduced somehow from their past choices. While many interesting ideas have been developed to analyze characteristics of users (or objects) based on past choices, this approach suffers from a foundational theoretical flaw: feedback is ignored. Put simply, the setting is such that choices determine recommendations, but recommendations are supposed to influence choices. In a recent line of work this gap was bridged by a simple algorithmic model that assumes the system may ask the user's opinion on any object, and not only make recommendations about supposedly 'nice' objects. Typically, algorithms in this model ask users for their opinions on controversial objects, and in return the output is an almost complete reconstruction of user preferences. In this talk we discuss this model and survey some basic and recent results. Surprisingly, it turns out that there are algorithms that can reconstruct user preferences (with high probability) using only a polylogarithmic factor more questions than the minimum possible.

Apr 22, Emanuele Viola (Northeastern University): Extractors for Circuit Sources

We obtain the first deterministic extractors for sources generated (or sampled) by small circuits of bounded depth. Our main results are:

(1) We extract k * poly(k/(nd)) bits with exponentially small error from n-bit sources of min-entropy k that are generated by functions that are d-local, i.e., each output bit depends on at most d input bits. In particular, we extract from NC^0 sources, corresponding to d = O(1).

(2) We extract k * poly(k/n^(1.001)) bits with super-polynomially small error from n-bit sources of min-entropy k that are generated by poly(n)-size AC^0 circuits.

As our starting point, we revisit the connection by Trevisan and Vadhan (FOCS 2000) between circuit lower bounds and extractors for sources generated by circuits. We note that such extractors (with very weak parameters) are equivalent to lower bounds for generating distributions (FOCS 2010; with Lovett, CCC 2011). Building on those bounds, we prove that the sources in (1) and (2) are (close to) a convex combination of high-entropy "bit-block" sources. Introduced here, such sources are a special case of affine ones. As extractors for (1) and (2) one can use the extractor for low-weight affine sources by Rao (CCC 2009).

Along the way, we exhibit an explicit n-bit Boolean function b such that poly(n)-size AC^0 circuits cannot generate the distribution (X,b(X)), solving a problem about the complexity of distributions.

Independently, De and Watson (ECCC TR11-037) obtain a result similar to (1) in the special case d = o(log n).

Mar 8, Yang Cai (MIT): On Minmax Theorems for Multiplayer Games

We provide a generalization of von Neumann's minmax theorem to zero-sum multi-player games. The games we consider are polymatrix - that is, graphical games in which every edge is a two-player game between its endpoints - in which every outcome has zero total sum of players' payoffs. Given that three-player zero-sum games are already PPAD-complete, the class of games we consider here, i.e. those with pairwise separable utility functions, defines essentially the broadest expanse of multi-player zero-sum games to which we can hope to push tractability results. Our generalization of the minmax theorem implies convexity of equilibria, polynomial-time tractability, and convergence of no-regret learning algorithms to Nash equilibria.

We also explore generalizations of our result to classes of non-zero-sum multi-player games. A natural candidate is polymatrix games with strictly competitive games on their edges. In the two-player setting, such games are minmax solvable. Surprisingly, we show that a polymatrix game comprising strictly competitive games on its edges is PPAD-complete to solve, proving a striking difference in the complexity of networks of zero-sum and strictly competitive games. Finally, we look at the role of coordination in networked interactions, studying the complexity of polymatrix games with a mixture of coordination and zero-sum games on their edges. We show that finding a pure Nash equilibrium in coordination-only polymatrix games is PLS-complete; hence, computing a mixed Nash equilibrium is in PLS $\cap$ PPAD, but it remains open whether the problem is in P. If, on the other hand, coordination and zero-sum games are combined, we show that the problem becomes PPAD-complete, establishing that coordination and zero-sum games achieve the full generality of PPAD.

Feb 24, Joseph Mitchell (Stony Brook): Approximation Algorithms for Traveling Salesman Problem with Neighborhoods and Related Geometric Network Optimization Problems

The Euclidean traveling salesman problem with neighborhoods (TSPN) seeks a shortest path or cycle that visits a given collection of $n$ regions (neighborhoods), $R_1, R_2, ..., R_n$. The TSPN is a generalization of the classic TSP (in which the $R_i$ are points) that arises in several other related geometric optimization problems. We present methods that yield provable approximation guarantees for the TSPN in the plane. We also discuss some problems related to the TSPN, including the watchman route problem (compute a shortest path/cycle that allows a robot to see all of a multiply connected domain) and the relay placement problem for building a connected communication network. Key to some of the results is the $m$-guillotine technique, which gives a tool for obtaining approximation algorithms in many network optimization problems.

Feb 15, Ravi Sundaram (Northeastern University): A presentation of the Moser-Scheder derandomization of Schöning's k-SAT algorithm

We will endeavor to give a complete account of a recent result:

A Full Derandomization of Schöning's k-SAT Algorithm, by Robin A. Moser and Dominik Scheder.

At the end of the talk you will have a simple (4/3)^n deterministic algorithm and proof for 3-SAT with which to impress friends and family.
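For reference, a minimal sketch of Schöning's randomized algorithm, the one being derandomized: repeatedly pick a uniformly random assignment and perform up to 3n steps of flipping a random variable in an unsatisfied clause. The signed-integer encoding of literals below is a common convention, not from the talk:

```python
import random

def schoening_3sat(clauses, n, tries, rng):
    """Schoening's randomized local search for 3-SAT. Literals are signed
    integers: literal l is satisfied when variable |l| has value (l > 0)."""
    for _ in range(tries):
        a = [rng.random() < 0.5 for _ in range(n)]  # uniform random assignment
        for _ in range(3 * n):                      # walk for at most 3n steps
            unsat = [c for c in clauses
                     if not any(a[abs(l) - 1] == (l > 0) for l in c)]
            if not unsat:
                return a
            l = rng.choice(rng.choice(unsat))       # random literal of a random unsatisfied clause
            a[abs(l) - 1] = not a[abs(l) - 1]       # flip that variable
    return None

# A small satisfiable instance:
# (x1 or x2 or x3) and (not x1 or x2 or not x3) and (x1 or not x2 or x3)
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3)]
rng = random.Random(3)
assignment = schoening_3sat(clauses, n=3, tries=100, rng=rng)
```

Schöning's analysis shows each try succeeds with probability roughly (3/4)^n on a satisfiable formula, so about (4/3)^n tries suffice in expectation; Moser and Scheder achieve the same bound deterministically.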

Feb 4, Amir Shpilka (Technion and Microsoft Research): Recent Results on Polynomial Identity Testing

Polynomial Identity Testing (PIT for short) is the problem of efficiently and deterministically deciding whether a given (explicitly or via black-box access) arithmetic circuit computes the zero polynomial. This is one of the most fundamental problems in computational complexity and it is strongly related to the problem of proving lower bounds for arithmetic circuits.

In this talk I will survey several results on the PIT problem. Specifically, I will discuss the relation between derandomization of PIT and circuit lower bounds, explain the importance of derandomizing PIT in the bounded depth model, and give an overview of the ideas behind several recent deterministic PIT algorithms designed for restricted circuit classes. At the end of the talk, time permitting, I will present what I find to be the most accessible important open questions in the field.

Partially based on joint works with Zeev Dvir, Zohar Karnin, Partha Mukhopadhyay, Ran Raz, Ilya Volkovich and Amir Yehudayoff.

Jan 25, Eric Miles (Northeastern University): Increasing the Stretch of Pseudorandom Generators

We examine the complexity of increasing the stretch of pseudorandom generators (PRGs). We will introduce the concept of a PRG, revisit the classic construction of "high-stretch" PRGs from "low-stretch" PRGs (which is highly sequential in nature), and discuss the (im)possibility of creating a more parallelizable construction.

The talk is based on a forthcoming paper by me and Emanuele Viola at the Theory of Cryptography Conference 2011. (http://www.ccs.neu.edu/home/enmiles/stre.pdf)

Jan 18, Chinmoy Dutta (Northeastern University): Constructive Proofs of Concentration Bounds

We discuss a very simple and elegant combinatorial proof of the Chernoff-Hoeffding bound due to Impagliazzo and Kabanets. Unlike previous proofs of such bounds, which used the moment generating function, this proof uses a simple counting argument. Moreover, the proof is constructive in the sense that if the sum of the given random variables is not concentrated around its expectation, then with high probability one can efficiently find a subset of the random variables that are statistically dependent. We also discuss how this proof technique yields threshold direct product theorems from direct product theorems in computational complexity theory.
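The bound in question can be sanity-checked numerically; here is a quick Monte Carlo comparison of the Hoeffding tail bound exp(-2t^2/n) for sums of fair coin flips against the empirical tail frequency (the parameters n, t, and the number of trials are arbitrary choices for illustration):

```python
import math
import random

# Hoeffding's inequality for the sum S of n independent fair coin flips:
#   Pr[ S >= n/2 + t ] <= exp(-2 t^2 / n).
rng = random.Random(4)
n, t, trials = 100, 15, 20000
exceed = sum(
    sum(rng.random() < 0.5 for _ in range(n)) >= n / 2 + t
    for _ in range(trials)
)
empirical = exceed / trials           # empirical tail frequency
bound = math.exp(-2 * t ** 2 / n)     # the Chernoff-Hoeffding guarantee
```

The empirical tail sits well below the bound, as it should: Hoeffding's inequality is not tight for this parameter range.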

### Fall 2010 Talk Abstracts

Nov 16, Emanuele Viola (Northeastern University): Williams' Breakthrough

Last week, on November 8, 2010, Ryan Williams announced the solution of an embarrassing, 20-year-old open problem in Complexity Theory: NEXP is not in ACC0 (http://www.cs.cmu.edu/~ryanw/acc-lbs.pdf). We discuss the background and his proof, which is a masterpiece of complexity-theoretic reasoning.

Nov 12, Scott Aaronson (MIT): The Computational Complexity of Linear Optics

We propose an optical experiment that might be accessible with current technology, and argue that, if the experiment succeeded, it could provide strong evidence that superpolynomial speedups are achievable via quantum computation. The experiment involves generating single-photon states, sending the photons through a linear-optical network, and then measuring the number of photons in each location. The resources we consider are not known or believed to be universal for quantum computation; nevertheless, we give evidence that they would allow the solution of classically-intractable sampling and search problems.

Oct 19, Bernhard Haeupler (MIT): New Constructive Aspects of the Lovasz Local Lemma

TBA

Oct 1, Constantinos Daskalakis (MIT): Geometrically Embedded Local Search

Recent work has shown that finding a mixed Nash equilibrium in normal form games is PPAD-complete, while the pure Nash equilibrium problem in selfish routing games is PLS-complete. On the other hand, there are important classes of games, e.g. simple stochastic games, and fixed point problems, e.g. fixed points of contraction maps, that appear easier. For these problems the PPAD and PLS machinery seems inadequate to provide an accurate complexity theoretic characterization; at the same time, polynomial-time algorithms have been evading researchers for decades. We provide complexity theoretic machinery that could lead to completeness results at the interface of PPAD and PLS, in the form of geometrically embedded local search. We also survey background results on the complexity of Nash equilibria and Brouwer fixed points.

### Spring 2010 Talk Abstracts

May 5, Chinmoy Dutta (Max Planck Institute for Informatics): Lower Bounds for Noisy Distributed Computation

We consider noisy distributed computation in the setting of wireless sensor networks, where processors are placed randomly on a unit square. They communicate with each other by broadcasting messages in order to compute some function whose input is distributed among them. Constraints on power usage limit the range and the number of transmissions. Furthermore, messages often get corrupted during transmission. Recently, protocols have been proposed to compute efficiently in this model. We show the first lower bounds in this model: in order to compute the PARITY or MAJORITY of N bits reliably, at least N log log N transmissions are necessary. This shows that the currently known protocols for these functions are optimal. The techniques developed in this work can also be used to prove lower bounds for noisy decision trees.

Apr 30, Ravi Sundaram (Northeastern): Reducibility among Fractional Stability Problems

Inspired by the recent "2-NASH is PPAD-complete" breakthrough, we show that a number of other problems are hard as well. Here is a list of the problems along with their areas of motivation/significance:

1. Fractional Stable Paths Problem - Internet Routing
2. Core of Balanced Games - Economics and Game Theory
3. Scarf's Lemma - Combinatorics
4. Hypergraph Matching - Social Choice and Preference Systems
5. Fractional Bounded Budget Connection Games - Social Networks
6. Strong Fractional Kernel - Graph Theory

We will define these problems, state our results, give a flavor of our reductions and conclude with some open problems.

Joint work with S. Kintali, L. Poplawski, R. Rajaraman and S. Teng.

Apr 26, Homin K. Lee (University of Texas at Austin): On the Learnability of Monotone Functions

A long-standing lacuna in the field of computational learning theory is the learnability of succinctly representable monotone Boolean functions, i.e., functions whose output can only increase when input bits flip from 0 to 1. We show that Boolean functions computed by polynomial-size monotone circuits are hard to learn assuming the existence of one-way functions. Having shown the hardness of learning general polynomial-size monotone circuits, we show that the class of Boolean functions computed by polynomial-size depth-3 monotone circuits is hard to learn using statistical queries. As a counterpoint, we give a statistical query learning algorithm that can learn random polynomial-size depth-2 monotone circuits (i.e., monotone DNF formulas).

Apr 23, Shiva Kasiviswanathan (Los Alamos National Laboratory): A Rigorous Approach to Statistical Database Privacy

Privacy is a fundamental problem in modern data analysis. We describe "differential privacy", a mathematically rigorous and comprehensive notion of privacy tailored to data analysis. Differential privacy requires, roughly, that any single individual's data have little effect on the outcome of the analysis. Given this definition, it is natural to ask: what computational tasks can be performed while maintaining privacy? In this talk, we focus on the tasks of machine learning and releasing contingency tables.

Learning problems form an important category of computational tasks that generalizes many of the computations applied to large real-life datasets. We examine what concept classes can be learned by an algorithm that preserves differential privacy. Our main result shows that it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the hypothesis class. This is a private analogue of the classical Occam's razor result.

Contingency tables are the method of choice of government agencies for releasing statistical summaries of categorical data. We provide tight bounds on how much distortion (noise) is necessary in these tables to provide privacy guarantees when the data being summarized is sensitive. Our investigation also leads to new results on the spectra of random matrices with correlated rows.
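For a single counting query, the definition above is met by the standard Laplace mechanism: one individual changes a count by at most 1, so Laplace noise of scale 1/epsilon suffices. A minimal illustrative sketch (not code from the talk):

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample is the difference of two
    # independent Exponential(1/scale) samples.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(data, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one
    # individual's record changes the count by at most 1, so
    # Laplace(1/epsilon) noise gives epsilon-differential privacy.
    true_count = sum(1 for record in data if predicate(record))
    return true_count + laplace_noise(1.0 / epsilon)
```

A single release hides any individual's contribution, while averaging over many independent releases (which spends more privacy budget) recovers the true count.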

Apr 15, Henry Cohn (Microsoft Research New England): Finding small solutions of equations: from cryptanalysis to error-correcting codes

Many important problems in discrete mathematics and theoretical computer science, such as attacking the RSA cryptosystem or decoding error-correcting codes, can be reduced to finding small solutions of polynomial equations. In this talk, I will survey a general approach that has been independently developed in several different areas, and I will discuss various applications of these techniques, including recent joint work with Nadia Heninger on a number-theoretic analogue of the problem of decoding algebraic-geometric error-correcting codes. No special background in cryptography or coding theory will be assumed.

Apr 14, Elena Grigorescu (MIT): Symmetries in Algebraic Property Testing

In Property Testing the goal is to quickly distinguish objects that satisfy a property from objects that need to be changed in a large fraction of places in order to satisfy the property. In this context, Locally Testable Codes (LTCs) are error-correcting codes for which membership can be decided from reading only a small section of the input. Recent studies of LTCs focus on identifying what attributes of codes allow them to be testable by local inspection. In particular, the "symmetries" of a code seem to play an important role in providing sufficient conditions for testing. Kaufman and Sudan supported this view by showing that the duals of linear codes that have the "single local orbit property" under the affine symmetry group are locally testable. A code has the single local orbit property if it is specified by a single local constraint and its translations under the symmetry group of the code.

In this talk I will present a result motivated by questions arising in previous studies of LTCs regarding error-correcting codes that have the single local orbit property. We show that the dual of every "sparse" binary code whose coordinates are indexed by elements of F_{2^n} for prime n, and whose symmetry group includes the group of non-singular affine transformations of F_{2^n}, has the single local orbit property. (A code is said to be sparse if it contains polynomially many codewords in its block length.) In particular, this class includes the dual-BCH codes, for whose duals (i.e., for BCH codes) simple bases were not known. Our result gives the first short (O(n)-bit, as opposed to the natural exp(n)-bit) description of a low-weight basis for BCH codes.

Our techniques involve the use of recent results from additive number theory to prove that the codes we consider, and related codes emerging from our proofs, have high distance. We then combine these with the MacWilliams identities and some careful analysis of the invariance properties to derive our results.

Joint work with Tali Kaufman and Madhu Sudan.

Apr 8, Assaf Naor (NYU): L_1 embeddings of the Heisenberg group and fast estimation of graph isoperimetry

We will show that any L_1-valued mapping of an epsilon net in the unit ball of the Heisenberg group incurs bi-Lipschitz distortion (log(1/epsilon))^c, where c is a universal constant. We will also explain how this result implies an exponential improvement to the best known integrality gap for the Goemans-Linial semidefinite relaxation of the Sparsest Cut problem.

Joint work with Jeff Cheeger and Bruce Kleiner.

Apr 8, Georgios Zervas (Boston University): Information Asymmetries in Pay-Per-Bid Auctions: How Swoopo Makes Bank

Recently, some mainstream e-commerce web sites have begun using "pay-per-bid" auctions to sell items, from video games to bars of gold. In these auctions, bidders incur a cost for placing each bid in addition to (or sometimes in lieu of) the winner's final purchase cost. Thus even when a winner's purchase cost is a small fraction of the item's intrinsic value, the auctioneer can still profit handsomely from the bid fees. Our work provides novel analyses for these auctions, based on both modeling and datasets derived from auctions at Swoopo.com, the leading pay-per-bid auction site. While previous modeling work predicts profit-free equilibria, we analyze the impact of information asymmetry broadly, as well as Swoopo features such as bidpacks and the Swoop It Now option specifically. We find that even small asymmetries across players (cheaper bids, better estimates of other players' intent, different valuations of items, committed players willing to play "chicken") can increase the auction duration significantly and thus skew the auctioneer's profit disproportionately. We discuss our findings in the context of a dataset of thousands of live auctions we observed on Swoopo, which also enables us to examine behavioral factors, such as the power of aggressive bidding. Ultimately, our findings show that even with fully rational players, if players overlook or are unaware of any of these factors, the result is outsized profits for pay-per-bid auctioneers.

Feb 1, Alex Samorodnitsky (The Hebrew University of Jerusalem): From local to global, property testing and some related topics

We are given a large mathematical object, such as an output of a computer program purported to compute a 'nice' function of its inputs. We need to decide whether this object indeed has the stipulated properties. However, it is too large to read in its entirety. We are only allowed to read several small (local) fragments. Can we decide, with any degree of certainty, what it looks like globally?

This is a very general question, and the answer depends on many things, such as the structure of the function, the access we are allowed, the degree of certainty we are willing to live with, etc.

We will describe one research strand, originally motivated by program-checking and program verification, which, along the way, became entangled with several other major research threads. One such thread in complexity theory develops methods to verify whether a given proof of a mathematical claim is correct, while looking at only a few small fragments of the proof. Another, important both to complexity and to additive number theory, attempts to decide which multivariate functions over a finite group locally look like low-degree polynomials.

Jan 28, Alexander Gamburd (University of California, Santa Cruz): Expander Graphs: Constructions and Applications

Expanders are highly-connected sparse graphs with wide-ranging applications in computer science and mathematics. After explaining what expanders are and describing their basic properties, I will focus on recent robust constructions which use crucially tools from arithmetic combinatorics, and discuss some of the applications.

Jan 21, Katya Scheinberg (Columbia University): First-order methods with complexity bounds for optimization problems arising in statistical machine learning

Statistical machine learning has recently presented a very interesting collection of challenging optimization problems. Many of these problems belong to well studied classes of convex optimization problems, such as linear and quadratic optimization problems or semidefinite programming problems. It is well known that these classes of problems can be solved (up to \epsilon accuracy) by interior point methods in O(log(1/\epsilon)) iterations (hence in polynomial time). However, unlike traditional applications of these convex problems, the ML applications are very large scale with completely dense data matrices. This property makes the use of classical second-order optimization methods, such as interior point methods, prohibitive. Recently fast first-order methods have been suggested by Nesterov and others, which obtain an \epsilon-optimal solution in O(1/\sqrt{\epsilon}) iterations (vs. the O(1/\epsilon) complexity of gradient descent). We will discuss extending the theoretical guarantees of existing fast first-order methods to alternating minimization methods and other first-order methods for which complexity results did not exist before. Alternating minimization methods are particularly suitable for many ML applications, which have the form \min f(x)+g(x), where f(x) is a convex smooth function and g(x) is convex, possibly nonsmooth, but has a simple form. We will discuss examples of such problems, the related methods, complexity results, and computational experiments.
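The \min f(x)+g(x) template above is the setting of proximal-gradient (ISTA-type) methods: alternate a gradient step on the smooth f with the closed-form proximal step of the simple g. A toy sketch for the lasso instance f(x) = 0.5*||Ax-b||^2, g(x) = lam*||x||_1 (an illustration of the template, not the speaker's algorithms; A, b, lam are made-up inputs):

```python
def soft_threshold(v, t):
    # prox of t*||.||_1: shrink each coordinate toward zero by t
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def ista(A, b, lam, step, iters):
    # minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a
    # gradient step on the smooth part with the prox of the
    # nonsmooth part; step should be at most 1 / ||A^T A||.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]  # Ax - b
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]      # A^T (Ax - b)
        x = soft_threshold([x[j] - step * grad[j] for j in range(n)], step * lam)
    return x
```

The nonsmooth term never needs a gradient: its prox is the cheap soft-thresholding step, which is what makes these methods attractive at ML scale.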

Jan 12, Prasad Tetali (Georgia Tech): Markov Chain Mixing with Applications

Sampling from and approximately counting the size of a large set of combinatorial structures has contributed to a renaissance in research in finite Markov chains in the last two decades. Applications are wide-ranging: sophisticated card shuffles, deciphering simple substitution ciphers (of prison inmates in the California state prison), estimating the volume of a high-dimensional convex body, and understanding the speed of Gibbs sampling heuristics in statistical physics.

After a brief mathematical introduction to the Markov Chain Monte Carlo (MCMC) methodology, the speaker will focus on some recent rigorous results on J.M. Pollard's classical Rho and Kangaroo algorithms (from 1979) for the discrete logarithm problem in finite cyclic groups; a birthday theorem for finite Markov chains is a somewhat surprising by-product.

The lecture will be self-contained to a large extent and will include some open problems.
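For concreteness, Pollard's rho for discrete logarithms is short enough to sketch: a pseudorandom walk over group elements of the form g^a * h^b must collide after roughly sqrt(n) steps, and a collision yields a linear equation for the logarithm. A rough illustration (details such as the 3-way partition of the group vary across presentations):

```python
import math
import random

def pollard_rho_dlog(g, h, p, n):
    # Solve g^x = h (mod p), where g has prime order n, via a
    # pseudorandom walk over elements g^a * h^b with Floyd cycle
    # detection; a collision gives a + x*b = A + x*B (mod n).
    while True:
        a = random.randrange(n)
        b = random.randrange(n)
        x = (pow(g, a, p) * pow(h, b, p)) % p

        def step(x, a, b):
            s = x % 3                       # crude 3-way partition of the group
            if s == 0:
                return (x * h) % p, a, (b + 1) % n
            if s == 1:
                return (x * x) % p, (2 * a) % n, (2 * b) % n
            return (x * g) % p, (a + 1) % n, b

        X, A, B = x, a, b
        while True:
            x, a, b = step(x, a, b)         # tortoise: one step
            X, A, B = step(*step(X, A, B))  # hare: two steps
            if x == X:
                break
        db = (B - b) % n
        if math.gcd(db, n) == 1:            # degenerate collision: retry fresh
            return ((a - A) * pow(db, -1, n)) % n
```

The birthday-theorem flavor mentioned above is exactly why the walk collides in about sqrt(n) steps.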

### Fall 2009 Talk Abstracts

December 15, Gopal Pandurangan (Nanyang Technological University and Brown University): Walking Faster in a Distributed Network

Performing random walks in networks is a fundamental primitive that has found applications in many areas of computer science, including distributed computing. In this talk, we focus on the problem of performing random walks efficiently in a distributed network. Given bandwidth constraints, the goal is to minimize the number of rounds required to obtain a random walk sample.

All previous algorithms that compute a random walk sample of length L as a subroutine always do so naively, i.e., in O(L) rounds. We present a fast distributed algorithm for performing random walks. We show that a random walk sample of length L can be computed in O(L^{1/2}D^{1/2}) rounds on an undirected unweighted network, where D is the diameter of the network. For small diameter graphs, this is a significant improvement over the naive O(L) bound. We show that our algorithm is near-optimal by giving an almost matching lower bound. We also show that our algorithm can be applied to speed up the more general Metropolis-Hastings sampling.
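The speedup comes from stitching: nodes prepare short walks of length eta in parallel, after which a length-L walk needs only L/eta sequential jumps. A centralized toy of the idea (ignoring the bookkeeping the distributed algorithm needs to keep stitched walks independent; `adj`, `eta` are illustrative names):

```python
import random

def short_walk(adj, start, eta, rng):
    # one short walk of length eta; in the distributed setting every
    # node precomputes such walks in parallel in O(eta) rounds
    u = start
    for _ in range(eta):
        u = rng.choice(adj[u])
    return u

def stitched_walk(adj, start, L, eta, rng):
    # a length-L walk assembled from L/eta jumps, each jump consuming
    # one precomputed short walk from the current endpoint
    u = start
    for _ in range(L // eta):
        u = short_walk(adj, u, eta, rng)
    return u
```

Choosing eta to balance the O(eta) precomputation phase against the roughly D rounds per stitching jump is what yields the O(L^{1/2}D^{1/2}) bound.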

We extend our algorithms to show how k independent walks can be performed in O((kLD)^{1/2} + k) rounds. Our techniques can be useful in speeding up distributed algorithms for a variety of applications that use random walks as a subroutine. We mention one such application involving the decentralized computation of spectral gap and mixing time, subroutines that can be useful in the design of self-regulating and self-aware networks.

Joint work with Atish Das Sarma, Danupon Nanongkai, and Prasad Tetali (Georgia Tech.).

December 11, Christopher King (Northeastern University, Department of Mathematics): Superadditivity of quantum channel capacity via random matrix methods

Random matrix methods have been recently used to prove the existence of quantum channels whose capacity is superadditive over multiple uses. This phenomenon relies on the use of entangled signal states and has no analog for classical channels. In addition to reviewing the background for these ideas, this talk will explain some of the random matrix methods used in the proofs of existence of such channels, including the recent work of Hastings. Some open questions will also be discussed.

October 2, Brent Heeringa (Williams College): Data Structures for Dynamic Partial-Orders

We define and study the problem of maintaining a dynamic set of elements drawn from a partially ordered universe. For tree-like partial orders we give a linear-sized data structure that supports the operations: insert; delete; test membership; and predecessor. The performance of our data structure is within an O(log w)-factor of optimal. Here w <= n is the width of the partial-order---a natural obstacle in searching a partial order. Although recent results have shown that optimal search trees can be built in linear time for tree-like partial orders, they do not appear to extend easily to the dynamic setting. Thus, our work gives, to the best of our knowledge, the first fully dynamic data structure for this problem.

Our results address a statement of Daskalakis et al. from the full version of their SODA'09 paper where they seek data structures analogous to balanced binary search trees and heaps for partial orders.

Joint work with M. Catalin Iordan (Stanford) and Louis Theran (Temple).

September 17, Alexander Gamburd (UC Santa Cruz): Expanders: from arithmetic to combinatorics and back

Expanders are highly-connected sparse graphs widely used in computer science. The optimal expanders -- Ramanujan graphs -- were constructed using deep results from the theory of automorphic forms. In recent joint work with Bourgain and Sarnak tools from additive combinatorics were used to prove that a wide class of "congruence graphs" are expanders; this expansion property plays crucial role in establishing novel sieving results pertaining to primes in orbits of linear groups.

August 21, Adam Meyerson (UCLA): On the Price of Mediation

This talk explores the relationship between the social cost of correlated equilibrium and Nash equilibrium. In contrast to previous work, which models correlated equilibrium as a benevolent mediator, we will define and bound the Price of Mediation: the ratio of the cost of the worst correlated equilibrium to the cost of the worst Nash. In practice, the heuristics used for mediation are frequently non-optimal, and mediators may be inept or self-interested.

The results in this talk focus on symmetric congestion games (also known as load balancing games). For convex cost functions, the Price of Mediation can grow exponentially in the number of players. We provide better bounds for linear, concave, and polynomially-bounded cost functions.

### Spring 2009 Talk Abstracts

May 8, Emanuele Viola: Bits vs. Trits

It's the end of the semester, and you need to store on a computer your N students' grades from the 3-element universe {pass, fail, borderline}. Via so-called arithmetic coding you can store them using the minimum number of bits, N(log_2 3) + O(1), but then to retrieve a grade you need to read all the bits. Or you can use two bits per grade, but then you waste a linear amount of space.

A breakthrough by Patrascu (FOCS 2008), later refined with Thorup, shows how to store the grades using the minimum number of bits so that each grade can be retrieved by reading just O(log N) bits.

We prove a matching lower bound: To retrieve the grades by reading b bits, space N(log_2 3) + N/2^b is necessary.
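The two extremes of the tradeoff are easy to write down. A hypothetical sketch of the arithmetic-coding end, packing N trits into one base-3 integer: space is optimal, but retrieving one trit touches the whole number.

```python
import math

def pack_trits(trits):
    # one integer whose base-3 digits are the trits:
    # optimal ceil(N * log2(3)) bits of space
    value = 0
    for t in reversed(trits):
        value = value * 3 + t
    return value

def get_trit(value, i):
    # decoding digit i divides the *entire* number: this is the
    # "read all the bits" extreme of the tradeoff
    return (value // 3 ** i) % 3
```

Two bits per trit is the other extreme: O(1) bits read per query, but about 0.415 wasted bits per grade; the Patrascu-Thorup structure and the matching lower bound above pin down everything in between.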

Apr 10, Lance Fortnow: Some Recent Results on Structural Complexity

We will talk about some recent work on complexity classes.

1. For any constant c, nondeterministic exponential time (NEXP) cannot be computed in polynomial time with n^c queries to SAT and n^c bits of advice. No assumptions are needed for this theorem.
2. There is a relativized world where NEXP is contained in NP for infinitely many input lengths.
3. If the polynomial-time hierarchy is infinite, one cannot compress the problem of determining whether a clique of size k exists in a graph of n vertices to solving clique problems of size polynomial in k.
4. If the polynomial-time hierarchy is infinite there are no subexponential-size NP-complete sets (Buhrman-Hitchcock).

1&2 are from an upcoming ICALP paper by Buhrman, Fortnow and Santhanam.
3 is from a STOC '08 paper by Fortnow and Santhanam answering open questions of Bodlaender-Downey-Fellows-Hermelin and Harnik-Naor.
4 is from a CCC '08 paper by Buhrman and Hitchcock building on 3.

Mar 25, Moses Charikar: New Insights into Semidefinite Programming for Combinatorial Optimization

Beginning with the seminal work of Goemans and Williamson on Max-Cut, semidefinite programming (SDP) has firmly established itself as an important ingredient in the toolkit for designing approximation algorithms for NP-hard problems. Algorithms designed using this approach produce configurations of vectors in high dimensions which are then converted into actual solutions.

In recent years, we have made several strides in understanding the power as well as the limitations of such SDP approaches. New insights into the geometry of these vector configurations have led to algorithmic breakthroughs for several basic optimization problems. At the same time, a sequence of recent results seems to suggest the tantalizing possibility that, for several optimization problems including Max-Cut, SDP approaches may indeed be the best possible. In this talk, I will present a glimpse of some of this recent excitement around SDP-based methods and explain some of the new insights we have developed about the strengths and weaknesses of this sophisticated tool.

Feb 20, Joshua Brody: Some Applications of Communication Complexity

Communication Complexity has been used to prove lower bounds in a wide variety of domains, from the theoretical (circuit lower bounds) to the practical (streaming algorithms, computer security).

In this talk, we present new results for two communication problems. We begin with a new lower bound for the communication complexity of multiparty pointer jumping. In the second half of the talk, we give several new results for distributed functional monitoring problems.

Part of this talk is joint work with Amit Chakrabarti and Chrisil Arackaparambil.

Jan 30, Arkadev Chattopadhyay: Multiparty Communication Lower Bounds by the Generalized Discrepancy Method

Obtaining strong lower bounds for the "Number on the Forehead" model of multiparty communication has many applications. For instance, it yields lower bounds on the size of constant-depth circuits and the size of proofs in important proof systems.

Recently, Shi and Zhu (2008), and Sherstov (2008) independently introduced two closely related techniques for proving lower bounds on 2-party communication complexity of certain functions. We extend both techniques to the multiparty setting. Each of them yields a lower bound of n^{\Omega(1)} for the k-party communication complexity of Disjointness as long as k is a constant. No superlogarithmic lower bounds were known previously on the three-party communication complexity of Disjointness.

Part of this is joint work with Anil Ada.

### Fall 2008 Talk Abstracts

Dec 12, Benjamin Rossman: The Constant-Depth Complexity of k-Clique

I will discuss a recent AC0 lower bound: the k-clique problem on graphs of size n cannot be solved by constant-depth circuits of size O(n^(k/4)). This result has an interesting corollary in logic (finite model theory): the first-order "variable hierarchy" is strict in terms of expressive power on ordered graphs (it was previously open whether 3 variables suffice to define every first-order property of ordered graphs).
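For scale, the trivial upper bound is exhaustive search over all ~n^k vertex subsets, which a bounded-depth circuit can mimic with one AND of edge variables per subset; the theorem says constant depth cannot beat a fixed power (~k/4) of that. A brute-force sketch of the baseline:

```python
from itertools import combinations

def has_k_clique(adj, k):
    # check all C(n, k) subsets of vertices: the ~n^k baseline that
    # the lower bound says constant-depth circuits cannot improve on
    # by more than roughly a k/4 power
    n = len(adj)
    return any(all(adj[u][v] for u, v in combinations(subset, 2))
               for subset in combinations(range(n), k))
```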

Nov 21, Alex Samorodnitsky: Edge-isoperimetry and influences

We will give alternative proofs of two known results on influences of boolean functions, deriving them from a variant of the edge-isoperimetric inequality on the boolean cube. The first result is a theorem of Kahn, Kalai, and Linial stating that every boolean function has an influential variable. The second result is a theorem of Friedgut which states that a function with a small sum of influences essentially depends on only a few of its variables.

Joint work with Dvir Falik.
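The central quantity here, Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)], is easy to compute exactly for small n. A hypothetical helper (not from the paper):

```python
from itertools import product

def influence(f, n, i):
    # Inf_i(f): probability over uniform x in {0,1}^n that flipping
    # bit i changes f's value
    flips = 0
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1
        if f(x) != f(tuple(y)):
            flips += 1
    return flips / 2 ** n
```

For example, a dictator function has one variable of influence 1 and the rest 0, while for 3-bit majority every variable has influence exactly 1/2 (flipping a bit matters iff the other two bits disagree).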

Oct 24, Emanuele Viola: Polynomials over {0,1}

Polynomials are fundamental objects in computer science that arise in a variety of contexts, such as error-correcting codes and circuit lower bounds. Despite intense research, many basic computational aspects of polynomials remain poorly understood. For example, although it is known that most functions cannot be approximated by low-degree multivariate polynomials, an explicit construction of such a function is still unknown.

In this talk we discuss some of the progress we have made towards understanding computational aspects of polynomials. In particular, we present our recent result that the sum of d pseudorandom generators for degree-1 polynomials is a pseudorandom generator for polynomials of degree d.
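The construction is simple to state: take any generator that fools degree-1 polynomials (e.g. a small-bias generator such as the Alon-Goldreich-Hastad-Peralta "powering" construction) and XOR d independent copies. A sketch with an AGHP-style base generator over GF(2^8) (the field size and output length are illustrative choices):

```python
def gf_mul(a, b, m=8, poly=0x11B):
    # multiplication in GF(2^m), reducing by an irreducible polynomial
    r = 0
    for i in range(m):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * m - 2, m - 1, -1):
        if (r >> i) & 1:
            r ^= poly << (i - m)
    return r

def eps_biased(x, y, n_bits, m=8):
    # AGHP powering generator: bit i is the GF(2) inner product
    # <x^i, y>; every parity (degree-1 polynomial) is fooled up to
    # bias n_bits / 2^m
    out, xp = [], 1
    for _ in range(n_bits):
        out.append(bin(xp & y).count("1") & 1)
        xp = gf_mul(xp, x, m)
    return out

def sum_of_generators(seeds, n_bits):
    # the construction from the talk: XOR d independent outputs of a
    # degree-1-fooling generator to fool degree-d polynomials
    out = [0] * n_bits
    for x, y in seeds:
        out = [a ^ b for a, b in zip(out, eps_biased(x, y, n_bits))]
    return out
```

Each XORed copy costs only its own short seed, so the combined generator still has seed length O(d * log n) while fooling all degree-d polynomials.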