# Computer Science Theory at Northeastern

## Theory Seminar

This semester the theory seminar will meet every Wednesday, 1:00-2:30pm, in West Village H 366. If we do not have an invited speaker, we will have presentations from students or faculty. Write to Jonathan Ullman or Lydia Zakynthinou with suggestions for speakers!

## Spring Semester, 2019

### January 30, 1:00PM, WVH 366

**Hugo Alves Akitaya, Tufts University**
*Recognizing Weak Embeddings*

**Abstract.** An embedding of a graph $G$ is a drawing $\varphi:G\rightarrow M$ of $G$ on a surface $M$ such that distinct vertices in $V(G)$ are mapped to distinct points and each edge in $E(G)$ is mapped to an interior-disjoint Jordan arc between the corresponding vertices. A weak embedding is a map $\varphi:G\rightarrow M$ such that, for every $\epsilon>0$, there exists an embedding $\psi_\epsilon:G\rightarrow M$ with $\|\varphi-\psi_\epsilon\|<\epsilon$, where $\|\cdot\|$ is the uniform norm (i.e., the $\sup$ norm). We say that the embedding $\psi_\epsilon$ approximates $\varphi$. The study of weak embeddings lies at the interface of several independent lines of research in mathematics and computer science. In particular, it generalizes various graph visualization models and is a special case of the notoriously difficult cluster-planarity problem. It is also related to the foldability problem in computational origami. This talk presents several results on the algorithmic complexity of recognizing weak embeddings.

### February 13, 1:00PM, WVH 366

**Paul Hand, Northeastern University**
*Inverse Problems under a Learned Generative Prior*

**Abstract.** Deep generative modeling has led to new, state-of-the-art approaches for enforcing structural priors in a variety of inverse problems. In contrast to priors given by sparsity, deep models can provide direct low-dimensional parameterizations of the manifold of images or signals belonging to a particular natural class, allowing recovery algorithms to be posed in a low-dimensional space. This dimensionality may even be lower than the sparsity level of the same signals when viewed in a fixed basis. In this talk, we will show rigorous recovery guarantees for solving inverse problems under a learned generative prior. First, we will discuss convergence guarantees for compressive sensing under random neural network priors.
Then, we will show that generative priors allow for a significant advance in the problem of compressive phase retrieval. To date, no computationally efficient algorithm is known for solving phase retrieval under a sparsity prior at sample complexity proportional to the signal complexity. With generative priors, we establish a new approach to compressive phase retrieval and prove rigorous guarantees with sample complexity proportional to the signal complexity.

### February 20, 1:00PM, WVH 366

**Ram Ramanathan, Chief Scientist, goTenna**
*Decentralized Off-grid Wireless Mesh Networking: Tradeoffs and Protocols*

**Abstract.** Decentralized, infrastructure-less wireless networks can provide instantly deployable and robust connectivity in the aftermath of natural disasters, in remote areas, and in other contexts such as the Internet of Things. After briefly introducing goTenna and multi-hop wireless networking, I will discuss the tradeoffs involved in designing such networks, focusing in particular on connectivity properties using insights from percolation theory. I will then consider one particular problem, namely network-wide packet broadcasting, which is closely related to the Connected Dominating Set problem in graphs. I will present a fully distributed, zero-control protocol for broadcasting and discuss its correctness and performance. Finally, I will touch upon some research challenges in the area, such as incentivizing network growth using cryptocurrencies.

### February 27, 1:00PM, WVH 366

**Jarosław Błasiok, Harvard University**
*Optimal Streaming and Tracking Distinct Elements with High Probability*

**Abstract.** The distinct elements problem is one of the fundamental problems in streaming algorithms: given a stream of integers in the range $\{1,\ldots,n\}$, we wish to provide a $(1+\varepsilon)$ approximation to the number of distinct elements in the input.
After a long line of research, an optimal solution to this problem with constant probability of success, using $\mathcal{O}(\frac{1}{\varepsilon^2}+\log n)$ bits of space, was given by Kane, Nelson and Woodruff in 2010. The standard approach to achieving low failure probability $\delta$ is to run $\log \delta^{-1}$ parallel repetitions of the original algorithm and report the median of the computed answers. We show that such a multiplicative space blow-up is unnecessary: we provide an optimal algorithm using $\mathcal{O}(\frac{\log \delta^{-1}}{\varepsilon^2} + \log n)$ bits of space, matching known lower bounds for this problem. That is, the $\log\delta^{-1}$ factor does not multiply the $\log n$ term. This completely settles the space complexity of the distinct elements problem with respect to all standard parameters. We also consider the *strong tracking* (or *continuous monitoring*) variant of the distinct elements problem, where we want an algorithm that provides an approximation to the number of distinct elements seen so far, at all times during the stream. We show that this variant can be solved using $\mathcal{O}(\frac{\log \log n + \log \delta^{-1}}{\varepsilon^2} + \log n)$ bits of space, which we show to be optimal.

### March 27, 1:00PM, WVH 366

**Christina Ilvento, Harvard University**
*Learning Metrics for Individual Fairness*

**Abstract.** There has been much recent discussion about how fairness should be measured or enforced in classification. Individual Fairness [Dwork et al. 2012], which requires that similar individuals be treated similarly, is a highly appealing definition because it gives very strong guarantees on the treatment of individuals. Unfortunately, the need for a task-specific similarity metric has prevented its use in practice. In this work, we set out a framework for approximating a metric for Individual Fairness based on human judgements.
Our model assumes that we have access to a human fairness arbiter, who can answer a limited set of queries concerning the similarity of individuals for a particular task, is free of explicit biases, and possesses sufficient domain knowledge to evaluate similarity. Our contributions include useful definitions for metric approximation relevant to Individual Fairness, constructions of approximations from a limited number of realistic queries to the arbiter on a fixed sample of individuals, and learning procedures for constructing hypotheses for metric approximations that generalize to unseen samples, under certain assumptions on the learnability of distance threshold functions.

### April 3, 1:00PM, WVH 366

TBD

### April 10, 1:00PM, WVH 366

**Oxana Poburinnaya, Boston University**
*TBA*

### April 17, 1:00PM, WVH 366

**Dimitris Tsipras, MIT**
*TBA*

### April 24, 1:00PM, WVH 366

**Mitali Bafna, Harvard University**
*TBA*