**Usual time and location:** **Wednesday** at **12:00pm EST** unless noted otherwise. The seminar format is hybrid (Zoom/in person). Zoom link here.

**Mailing List:** theory-cs-seminar@lists.purdue.edu (join here)

**Fall 2022:**

**Dec 7 [12PM EST]: Wesley Pegden** (CMU).

**Nov 30 [12PM EST, Zoom]:** **Gabriel Istrate** (University of Bucharest). **Title:** Mechanism Design With Predictions for Obnoxious Facility Location

**Abstract:** We study *mechanism design with predictions* for the obnoxious facility location problem. We present deterministic strategyproof mechanisms that display a common tradeoff between robustness and consistency. The common tradeoff is valid for obnoxious facility location on segments, squares, circles and trees. All these mechanisms are actually group strategyproof, with the exception of the case of squares, where manipulations from coalitions of two agents exist. We prove that the uncovered tradeoff is optimal in the 1-dimensional case.

We also deal with the case of agents with dual preferences, showing that the same tradeoff applies; in this case, however, the condition of strategyproofness has to be relaxed.

We will discuss further issues and open problems raised by our work.

**Nov 23 [12PM EST]:** No seminar (Thanksgiving break).

**Nov 16 [12PM EST]:** **Maryam Aliakbarpour** (Boston University and Northeastern University). **Title:** Estimation of Entropy in Constant Space.

**Abstract:** Recent work of Acharya et al. (NeurIPS 2019) showed how to estimate the entropy of a distribution D over an alphabet of size k up to ε additive error by streaming over (k/ε^3)⋅polylog(1/ε) i.i.d. samples and using only O(1) words of memory. In this work, we give a new constant-memory scheme that reduces the sample complexity to (k/ε^2)⋅polylog(1/ε). We conjecture that this is optimal up to polylog(1/ε) factors.

Joint work with Andrew McGregor, Jelani Nelson, and Erik Waingarten.

This talk is based on the following paper: https://arxiv.org/abs/2205.09804.
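The constant-memory streaming estimator itself is intricate; as a point of reference, here is the naive plug-in estimator of the quantity being estimated (a baseline sketch, not the paper's algorithm — it stores full counts and so uses O(k) words of memory):

```python
import math
from collections import Counter

def plugin_entropy(samples):
    """Empirical ("plug-in") estimate of Shannon entropy, in nats.

    Baseline for comparison only: it keeps a full histogram, i.e. O(k)
    words of memory, whereas the streaming schemes in the talk make do
    with O(1) words.
    """
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in Counter(samples).values())

est = plugin_entropy([0, 1] * 5000)   # fair coin: exactly ln 2 nats
```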

**Nov 9 [12PM EST]:** **Aadirupa Saha** (TTIC). **Title:** Dueling-Opt: Convex Optimization with Relative Feedback.

**Abstract:** In this talk, we will discuss an unconventional version of the standard bandit convex optimization (BCO) problem with preference (dueling) feedback. As in the traditional optimization objective, the goal is to find the minimizer of a convex function with the least possible query complexity, but here the learner lacks even zeroth-order feedback: at each round, it can only observe a single noisy 0/1 win-loss feedback for a pair of queried points (based on their underlying function values). The problem is of great practical relevance in any real-world system with potentially large (or even infinite) decision spaces.

The main difficulty in this framework is that any ‘gradient-descent’-based technique is bound to fail: it is impossible to estimate gradient information at any point from a single-bit comparison feedback, which does not reveal any magnitude information about the underlying cost function at the queried points. We overcome this challenge by designing normalized gradient descent-based algorithms and show near-optimal convergence rates for smooth and strongly convex functions. Towards the end, we will consider this problem for a very general class of preference functions, which includes all monotonic functions that can be approximated by finite-degree polynomials. We will conclude the talk with a plethora of open questions raised by this general direction of optimization with ‘unconventional feedback’, which can help bridge the gaps between theory and practice in many real-world applications.

Based on joint works with Tomer Koren (Tel Aviv Univ and Google) and Yishay Mansour (Tel Aviv Univ and Google).
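For intuition about optimizing from 1-bit comparisons, here is a toy sketch in the spirit of the normalized-gradient approach: query a pair of nearby points each round and step toward the winner. Everything here (noiseless comparisons, fixed step sizes, the function `dueling_minimize`) is an illustrative assumption, not the paper's algorithm:

```python
import math
import random

def dueling_minimize(f, x0, rounds=2000, delta=1e-3, eta=0.05, seed=0):
    """Each round queries the pair x ± delta*u for a random unit
    direction u, observes only the 1-bit outcome of comparing f at the
    two points, and takes a fixed-size step toward the winner."""
    rng = random.Random(seed)
    x = list(x0)
    d = len(x)
    for _ in range(rounds):
        u = [rng.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u)) or 1.0
        u = [c / norm for c in u]
        plus = [xi + delta * ui for xi, ui in zip(x, u)]
        minus = [xi - delta * ui for xi, ui in zip(x, u)]
        direction = 1.0 if f(plus) < f(minus) else -1.0   # the only feedback used
        x = [xi + direction * eta * ui for xi, ui in zip(x, u)]
    return x

# Strongly convex test function with minimizer (1, -2).
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
xhat = dueling_minimize(f, [5.0, 5.0])
```

With a constant step size the iterate hovers in a small neighborhood of the minimizer rather than converging exactly; decaying steps would refine it further.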

**Nov 2 [12PM EST]:** **Raphael Meyer** (New York University).

**Oct 26 [12PM EST, LWSN 3162]:** **Hemanta Maji** (Purdue). **Title:** Round and Communication Complexity of Secure Computation.

**Abstract:** Reducing the overhead of security solutions increases their adoption. Motivated by such efficiency considerations, it is natural to characterize secure computation’s round and communication complexity. The seminal results of Chor-Kushilevitz-Beaver (STOC-1989, FOCS-1989, DIMACS-1989) determine these complexities for deterministic computations. This talk presents our recent work that proves the first such results for the randomized-output case. We study the problem of realizing a two-party secure function evaluation under a given round or communication constraint, and we prove the decidability of this problem. We construct one such protocol if it exists; otherwise, we present an obstruction to achieving security. Our technical innovation is a geometric framework for this research — opening a wormhole connecting it to geometry research. This framework encodes all candidate secure protocols as points; studying the mathematical properties of novel generalizations of their convex hull yields round and communication complexity results. In this talk, I will also discuss connections to problems in communication and cryptographic complexity.

**Oct 19 [12PM EST]: Divyarthi Mohan** (Tel-Aviv University).

**Title:** Simple Mechanisms for Welfare Maximization in Rich Advertising Auctions.

**Abstract:** Internet ad auctions have evolved from a few lines of text to richer informational layouts that include images, sitelinks, videos, etc. Ads in these new formats occupy varying amounts of space, and an advertiser can provide multiple formats, only one of which can be shown. The seller is now faced with a multi-parameter mechanism design problem and has to select a set of ads that fit within a given total space. Computing an efficient allocation is computationally intractable, and therefore the standard Vickrey-Clarke-Groves (VCG) auction, while truthful and welfare-optimal, is impractical.

In our work, we adopt a “Myersonian” approach and study allocation rules that are monotone both in the bid and set of rich ads. We show that such rules can be paired with a payment function to give a truthful auction. Our main result is a new, simple, greedy and monotone allocation rule that guarantees a 3-approximation. In ad auctions in practice, monotone allocation rules are often paired with the so-called Generalized Second Price (GSP) payment rule. We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded.

Joint work with Gagan Aggarwal, Kshipra Bhawalkar, Aranyak Mehta, Alex Psomas.
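As generic background on greedy monotone allocation rules (not the paper's rule, which handles the richer multi-format setting and achieves a 3-approximation), the textbook greedy-by-density rule with a best-single-item fallback gives a monotone 1/2-approximation for packing (value, size) ads into a space budget:

```python
def greedy_knapsack(items, capacity):
    """Greedy-by-density plus a best-single-item fallback: a textbook
    monotone 1/2-approximation for selecting (value, size) items under
    a total-space budget. Generic illustration, not the paper's rule."""
    chosen, used, value = [], 0, 0
    for v, s in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if used + s <= capacity:
            chosen.append((v, s)); used += s; value += v
    # Guard against the classic bad case where one large item beats
    # everything the density order managed to pack.
    singles = [(v, s) for v, s in items if s <= capacity]
    if singles and max(singles)[0] > value:
        return [max(singles)], max(singles)[0]
    return chosen, value

# A tiny (1,1) item can block a big (10,10) item in pure density order;
# the fallback repairs exactly this case.
_, v_bad = greedy_knapsack([(1, 1), (10, 10)], 10)
_, v_ok = greedy_knapsack([(10, 10), (7, 5), (6, 5)], 10)
```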

**Oct 17 [12PM EST]:** **Jessica Sorrell** (UCSD). **Title:** Reproducibility in Learning.

**Abstract:** Reproducibility is vital to ensuring scientific conclusions are reliable, but failures of reproducibility have been a major issue in nearly all scientific areas of study in recent decades. A key issue underlying the reproducibility crisis is the explosion of methods for data generation, screening, testing, and analysis, where, crucially, only the combinations producing the most significant results are reported. Such practices (also known as p-hacking, data dredging, and researcher degrees of freedom) can lead to erroneous findings that appear to be significant, but that don’t hold up when other researchers attempt to replicate them.

In this talk, we introduce a new notion of reproducibility for randomized algorithms. This notion ensures that with high probability, an algorithm returns exactly the same output when run with two samples from the same distribution. Despite the exceedingly strong demand of reproducibility, there are efficient reproducible algorithms for several fundamental problems in statistics and learning. We show that any statistical query algorithm can be made reproducible with a modest increase in sample complexity, and we use this to construct reproducible algorithms for finding approximate heavy-hitters and medians. Using these ideas, we give the first reproducible algorithm for learning halfspaces via a reproducible weak learner and a reproducible boosting algorithm. We initiate the study of lower bounds and inherent tradeoffs for reproducible algorithms, giving nearly tight sample complexity upper and lower bounds for reproducible versus non-reproducible SQ algorithms. Finally, we discuss connections to other well-studied notions of algorithmic stability, such as differential privacy.

Joint work with Russell Impagliazzo (UCSD), Rex Lei (UCSD), and Toniann Pitassi (Columbia University), in STOC 2022.
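One way to picture the reproducibility guarantee is randomized rounding with shared randomness: round the empirical median to a grid whose offset comes from a shared seed, so two runs on fresh samples usually return the identical value. An illustrative sketch of this rounding idea, not the paper's algorithm:

```python
import random
import statistics

def reproducible_median(samples, tau, shared_seed):
    """Round the empirical median to a grid of width tau whose offset
    comes from shared randomness. Two runs with the same seed on fresh
    samples from the same distribution return the *identical* value
    whenever both empirical medians land in the same grid cell."""
    offset = random.Random(shared_seed).uniform(0, tau)
    m = statistics.median(samples)
    return offset + tau * round((m - offset) / tau)

rng = random.Random(1)
run1 = reproducible_median([rng.gauss(0, 1) for _ in range(4000)], 0.5, shared_seed=7)
run2 = reproducible_median([rng.gauss(0, 1) for _ in range(4000)], 0.5, shared_seed=7)
# run1 == run2 with high probability; both always lie on the same shared grid.
```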

**Oct 12 [12PM EST]:** No seminar (October break).

**Oct 5 [12PM EST]:** **Panayotis Mertikopoulos** (CNRS). **Title:** On the limits – and limitations – of online learning in games.

**Abstract:** Does learning from empirical observations converge to a Nash equilibrium in a game-theoretic setting? If yes, at what rate? And, if not, what are the possible limiting behaviors that emerge otherwise?

A series of well-known impossibility results precludes a “universally positive” answer to the first question – i.e., the existence of a dynamical process which, based on player-specific information alone, converges to Nash equilibrium in all games.

In view of this, we will instead examine the convergence properties of a class of widely used online learning algorithms which includes the exponential weights algorithm, its bandit/optimistic variants, and many others. In this general context, we establish a comprehensive equivalence between the stability of a Nash equilibrium and its support: a Nash equilibrium is stable and attracting if and only if it is *strict* (i.e., each equilibrium strategy has a unique best response). We will also discuss the robustness of this equivalence to different feedback models – from full information to bandit, payoff-based feedback – as well as the methods’ rate of convergence, and what other limiting behaviors may arise in the long run.
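The stable-iff-strict phenomenon is easy to see in the simplest case. Below, both players run exponential weights (a member of the family discussed in the talk) in a prisoner's dilemma, whose unique equilibrium is strict; the mixed strategies concentrate on it. A minimal full-information sketch with hypothetical payoff matrices:

```python
import math

def softmax(v):
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def hedge_play(payoff_A, payoff_B, eta=0.1, rounds=500):
    """Both players run exponential weights (Hedge) against each other
    with full-information feedback on expected payoffs."""
    nA, nB = len(payoff_A), len(payoff_B[0])
    SA, SB = [0.0] * nA, [0.0] * nB        # cumulative payoffs per action
    for _ in range(rounds):
        xA = softmax([eta * s for s in SA])
        xB = softmax([eta * s for s in SB])
        for a in range(nA):
            SA[a] += sum(payoff_A[a][b] * xB[b] for b in range(nB))
        for b in range(nB):
            SB[b] += sum(payoff_B[a][b] * xA[a] for a in range(nA))
    return softmax([eta * s for s in SA]), softmax([eta * s for s in SB])

# Hypothetical prisoner's dilemma: action 1 ("defect") is strictly dominant
# for both players, so (1, 1) is the unique -- and strict -- equilibrium.
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
xA, xB = hedge_play(A, B)
```

In games without a strict equilibrium (e.g. matching pennies), the same dynamics cycle instead of converging, which is the other half of the talk's picture.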

**Sep 28 [12PM EST]:** **Brian Bullins** (Purdue University). **Title:** Optimal Methods for Higher-Order Smooth Monotone Variational Inequalities

**Abstract:** In this talk, we consider new simple and optimal algorithms for solving the variational inequality (VI) problem for pth-order smooth, monotone operators — a problem that generalizes convex optimization and saddle-point problems. Recent works have presented methods that achieve a rate of \tilde{O}(ε^{-2/(p+1)}) for p >= 1. A drawback to these approaches, however, is their reliance on a line search scheme. We provide the first pth-order method that achieves a rate of O(ε^{-2/(p+1)}); our method does not rely on a line search routine, thereby improving upon previous rates by a logarithmic factor. Building on the Mirror Prox method of Nemirovski (2004), our algorithm works even in the constrained, non-Euclidean setting. Furthermore, we prove the optimality of our algorithm by constructing matching lower bounds. This establishes a separation between solving smooth MVIs and smooth convex optimization, in addition to settling the oracle complexity of solving pth-order smooth MVIs.

Based on joint work with Deeksha Adil, Arun Jambulapati, and Sushant Sachdeva.
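The Euclidean special case of Mirror Prox is the extragradient method: evaluate the operator at a half step, then update the original point using that evaluation. A minimal first-order, unconstrained sketch on a bilinear saddle point (the talk's method is the higher-order generalization):

```python
def extragradient(F, z0, eta=0.1, iters=2000):
    """Extragradient (Euclidean Mirror Prox): evaluate the monotone
    operator F at a half step, then step from the original point using
    that half-step evaluation."""
    z = list(z0)
    for _ in range(iters):
        g = F(z)
        z_half = [zi - eta * gi for zi, gi in zip(z, g)]
        g_half = F(z_half)
        z = [zi - eta * gi for zi, gi in zip(z, g_half)]
    return z

# Bilinear saddle point min_x max_y xy: the operator F(x, y) = (y, -x) is
# monotone. Plain simultaneous gradient descent-ascent spirals outward on
# this example, while extragradient contracts to the solution (0, 0).
F = lambda z: [z[1], -z[0]]
z = extragradient(F, [1.0, 1.0])
```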

**Sep 21 [12PM EST]:**

**Talk 1: Elena Grigorescu** (Purdue University). **Title**: Hardness of Maximum Likelihood Learning of DPPs.

**Abstract**: Determinantal Point Processes (DPPs) are a widely used probabilistic model for negatively correlated sets. DPPs have been successfully employed in Machine Learning applications to select a diverse, yet representative subset of data. In seminal work on DPPs in Machine Learning, Kulesza conjectured in his PhD Thesis (2011) that the problem of finding a maximum likelihood DPP model for a given data set is NP-complete. In this work we prove Kulesza’s conjecture. In fact we prove a slightly more general result on the hardness of approximation of the problem. We also give a simple efficient approximation algorithm. The talk will be self-contained.

Joint work with Brendan Juba, Karl Wimmer, and Ning Xie (COLT 2022).
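For concreteness, the likelihood being maximized assigns a set S probability det(L_S)/det(L+I) for a kernel matrix L; maximum likelihood learning asks for the L maximizing the product of these probabilities over the observed sets. An illustrative helper (not from the paper):

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n = len(M)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= factor * M[i][c]
    return d

def dpp_likelihood(L, S):
    """P[S] under the DPP with kernel L, namely det(L_S) / det(L + I).
    The hard learning problem is to find the L maximizing the product
    of these probabilities over observed sets."""
    n = len(L)
    sub = [[L[i][j] for j in S] for i in S]
    L_plus_I = [[L[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
                for i in range(n)]
    return det(sub) / det(L_plus_I)
```

With the identity kernel every subset of {0, 1} is equally likely, and the probabilities over all subsets sum to one, as they must.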

**Talk 2: Simina Brânzei** (Purdue University).

**Title**: The Query Complexity of Local Search and Brouwer in Rounds.

**Abstract**: We consider the query complexity of finding a local minimum of a function defined on a graph, where at most k rounds of interaction (aka adaptivity) with the oracle are allowed. Adaptivity is a fundamental concept studied due to the need to parallelize computation and understand the speedups attainable. The query complexity of local search is tightly related to the complexity of computing stationary points of a function, thus bounds for local search can give insights into the performance of algorithms such as gradient descent. We focus on the d-dimensional grid {1,…,n}^d, where the dimension d≥2 is a constant. Our main contribution is to give algorithms and lower bounds that characterize the trade-off between the number of rounds of adaptivity and the query complexity of local search, when the number of rounds is constant and polynomial in n, respectively. The local search analysis also enables us to characterize the query complexity of computing a Brouwer fixed point in rounds.

Joint work with Jiawei Li (COLT 2022).
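The basic object of study can be sketched in a few lines: greedy descent on the grid, counting oracle queries. This fully sequential version uses as many rounds of interaction as iterations; the talk's question is how the query complexity must grow when only k rounds of parallel queries are allowed:

```python
def local_search(f, n, d, start):
    """Greedy descent on the grid {0, ..., n-1}^d: repeatedly move to a
    strictly better neighbor until none exists, counting oracle
    queries. The returned point is a local minimum by construction."""
    x = tuple(start)
    fx = f(x)
    queries = 1
    while True:
        best, fbest = None, fx
        for i in range(d):
            for step in (-1, 1):
                if 0 <= x[i] + step < n:
                    y = x[:i] + (x[i] + step,) + x[i + 1:]
                    fy = f(y)
                    queries += 1
                    if fy < fbest:
                        best, fbest = y, fy
        if best is None:
            return x, queries
        x, fx = best, fbest

# L1 distance to a hypothetical target: its unique local minimum is the target.
target = (3, 7)
xmin, queries = local_search(lambda x: abs(x[0] - target[0]) + abs(x[1] - target[1]),
                             n=10, d=2, start=(0, 0))
```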

**Sep 7 [12PM EST]:** **Rameesh Paul** (IISc Bangalore). **Title**: Recovery Algorithms for Planted Semi-Random Graph and Hypergraph Problems.

**Abstract**: Many NP-hard problems with abysmal worst-case guarantees turn out to be “easier” on real-life instances. This motivates studying more realistic models of instances for such problems. Studying *planted semi-random* models is one such systematic approach along these lines.

An instance with a *planted solution* is constructed by starting with a set of vertices V and imposing a subgraph of interest on a subset S ⊆ V with |S|=k. The edges in S × (V\S) and (V\S) × (V\S) are added independently with probability p. For d=0 this reduces to the famous planted clique problem, where we do not expect efficient recovery algorithms for k=o(√n). In the *planted semi-random model*, we further allow an adversary (with moderated ability) to interact with the graph, which makes recovery challenging even in the k=o(n) regime.

In this talk, we look at the *planted semi-random* model for the Odd Cycle Transversal problem and for the problem of finding the largest independent set in an r-uniform hypergraph. We give polynomial-time recovery algorithms for these problems in the *planted semi-random model* in the k=o(n) regime.

Based on joint works with Yash Khanna, Akash Kumar and Anand Louis.
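For a feel of the fully random (non-adversarial) baseline, here is a planted clique generator together with the simple degree-counting recovery that works once k is well above √(n log n). The parameters are illustrative; the semi-random model additionally lets an adversary modify the instance:

```python
import random

def planted_clique(n, k, p=0.5, seed=0):
    """Erdos-Renyi G(n, p) with a clique planted on vertices 0..k-1
    (the fully random baseline; the semi-random model would let an
    adversary further modify the instance)."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if (u < k and v < k) or rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def recover_by_degree(adj, k):
    """Return the k highest-degree vertices: succeeds w.h.p. once
    k >> sqrt(n log n), well above the sqrt(n) conjectured threshold."""
    order = sorted(range(len(adj)), key=lambda v: len(adj[v]), reverse=True)
    return set(order[:k])

adj = planted_clique(n=300, k=120, seed=1)
found = recover_by_degree(adj, 120)
```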

**Spring 2022:**

**Jan 19 [3PM EST]:** **Vishnu V. Narayan** (McGill University). *Title:* Fair Division with Payments.

*Abstract:* We study the fair division of a collection of items among a set of agents. The fair division problem has received considerable attention over several decades, and envy-freeness, where each agent (weakly) prefers its own bundle of items to the bundle allocated to any other agent, has emerged as the dominant fairness criterion in economics. However, when the items are indivisible, envy-free allocations are not guaranteed to exist. A recent line of research shows that envy-freeness can be recovered through the use of payments amongst the agents. The main goal of research in the area is to bound the total payments sufficient to achieve an envy-free allocation (and, optionally, achieve high welfare). We discuss some of our recent work in the area and some open problems. This talk is based on joint works with Johannes Brustle, Jack Dippel, Mashbat Suzuki and Adrian Vetta.

**February 2 [3PM EST]:** **Yuval Rabani** (Hebrew University of Jerusalem). *Title:* Identifying functional structure in data.

*Abstract:* We discuss algorithms for identifying discrete mixture models from a collection of samples. The main result is an algorithm for learning mixtures of Bayesian network distributions. The model to identify reflects causal relations among the observable variables, all of which also depend on one confounding variable that determines the mixture constituent. Along the way, we will also discuss simpler models: learning mixtures of product distributions, and learning mixtures of binomial distributions. We will mention some other applications, including learning topic models and learning population histories. Based on recent papers with Spencer Gordon, Bijan Mazaheri, and Leonard Schulman, and also on older work with Jian Li, Leonard Schulman, and Chaitanya Swamy.

**February 9 [10:30AM EST, crosslisted with CS colloquium]:** **Tal Wagner** (Microsoft Research). *Title:* On the Role of Data in Algorithm Design.

*Abstract:* Recently, there has been a growing interest in harnessing the power of big datasets and modern machine learning for designing new scalable algorithms. This invites us to rethink the role of data in algorithm design: not just as the input to pre-designed algorithms, but also as a factor that enters the algorithm design process itself, driving it in a strong and possibly automated manner. This talk will show how to leverage data and learning for better algorithm design in two fundamental areas: high-dimensional similarity search and efficient linear algebra.

In particular, I will show the following:

1. Using data-dependent compression, we obtain optimal compressed representations of high-dimensional Euclidean distances. This result marks the first improvement over classical data-oblivious compression schemes, which provably cannot match its performance.

2. Using neural networks, we show how to learn to construct better data structures for high-dimensional similarity search.

3. We then show how those techniques also give rise to fast algorithms for low rank approximation of matrices.

Our algorithms are both proven analytically and implemented and validated empirically, showing improvements over previous methods on standard benchmark datasets.

**February 15 [2PM EST]:** **Saugata Basu** (Purdue University). *Title:* Why do I like the symmetric group so much?

*Abstract:* There are one thousand and one reasons to like the symmetric group. In the talk I will explain one of them. I will talk about some recent work on studying the homology groups of real algebraic varieties (some symmetric, some not) and its connection with a homological stability conjecture appearing in a paper with Cordian Riener. Some algorithmic consequences will also be mentioned. Joint work (separately) with Daniel Perrucci and Cordian Riener.

**February 23 [3PM EST]:** **Paul Valiant** (Purdue University). *Title:* Mean estimation in low and high dimensions.

*Abstract:* This talk will discuss the fundamental statistical problem of estimating the mean of a distribution, as accurately as possible given samples from it. This problem arises both as a subcomponent of many algorithms, and also in practice as one of the most important data primitives when dealing with real-world data. While many variants and extensions of this problem have been proposed and analyzed, in this talk I will discuss two of the most iconic: 1) when the data comes from a real-valued distribution, and 2) when the data comes from a high-dimensional vector-valued distribution. In both cases, we achieve the first estimators whose accuracy is optimal to 1+o(1) factors, optimal in its dependence on the unknown (co-) variance of the underlying distribution, the number of samples n, and the desired confidence delta. I will highlight some of the crucial and novel analytical tools used in the analysis, and in particular, draw attention to a new “vector Bernstein inequality” which makes precise the intuition that sums of bounded independent random variables in increasingly high dimensions increasingly “adhere to a spherical shell”. These results suggest several possible extensions in this large and active area of statistical estimation research. Based on joint work with Jasper Lee in FOCS’21 and ITCS’22.
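A classical reference point for the one-dimensional problem is the median-of-means estimator, which this line of work refines to optimal 1+o(1) constants. A quick sketch of the classical estimator (not the talk's):

```python
import random
import statistics

def median_of_means(samples, num_blocks):
    """Split the data into equal blocks, average each block, and return
    the median of the block means -- the classical robust mean
    estimator that the talk's results sharpen to optimal constants."""
    m = len(samples) // num_blocks
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(num_blocks)]
    return statistics.median(means)

rng = random.Random(0)
# Mostly N(5, 1) samples plus a few gross outliers.
data = [rng.gauss(5, 1) for _ in range(10000)] + [1000.0] * 5
rng.shuffle(data)
robust = median_of_means(data, 20)
naive = sum(data) / len(data)   # dragged away from 5 by the outliers
```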

**March 2 [3PM EST]:** **Ioannis Panageas** (UC Irvine). *Title:* Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games.

*Abstract:* Potential games are arguably one of the most important and widely studied classes of normal form games. They define the archetypal setting of multi-agent coordination as all agent utilities are perfectly aligned with each other via a common potential function. Can this intuitive framework be transplanted in the setting of Markov Games? What are the similarities and differences between multi-agent coordination with and without state dependence? We present a novel definition of Markov Potential Games (MPG) that generalizes prior attempts at capturing complex stateful multi-agent coordination. Counter-intuitively, insights from normal-form potential games do not carry over as MPGs can consist of settings where state-games can be zero-sum games. In the opposite direction, Markov games where every state-game is a potential game are not necessarily MPGs. Nevertheless, MPGs showcase standard desirable properties such as the existence of deterministic Nash policies. In our main technical result, we prove fast convergence of independent policy gradient (and its stochastic variant) to Nash policies by adapting recent gradient dominance property arguments developed for single agent MDPs to multi-agent learning settings.

**March 9 [3PM EST]:** **Rahul Santhanam** (University of Oxford).

**March 16:** *spring break.*

**March 23 [3PM EST]:** **Steve Hanneke** (Purdue University).

**March 30 [3PM EST]:** **Kasper Green Larsen** (Aarhus University).

**April 6 [3PM EST]:** **Inbal Talgam-Cohen** (Technion).

**April 13 [3PM EST]:** **Gesualdo Scutari** (Purdue University).

**April 20 [3PM EST]:** **Manolis Vlatakis** (Columbia University). *Title:* Building Optimization beyond Minimization: A Journey in Game Dynamics.

*Abstract:* Motivated by recent advances in both theoretical and applied aspects of multiplayer games, spanning from e-sports to multi-agent generative adversarial networks, a surge of studies has focused on the core problem of understanding the behavior of game dynamics in general N-player games. Beyond the seminal setting of two competitive players and min-max optimization, our understanding of how the day-to-day behavior of the dynamics correlates with the game’s different notions of equilibria is much more limited, and only partial results are known for certain classes of games (such as zero-sum or congestion games). In this talk, we study from two different perspectives arguably the most well-studied classes of no-regret dynamics: “follow-the-regularized-leader” (FTRL) and discretizations of gradient flow (GDA/OGDA/EG).

We establish a sweeping negative result showing that the notion of mixed Nash equilibrium is antithetical to no-regret learning. Specifically, we show that any Nash equilibrium which is not strict (in that every player has a unique best response) cannot be stable and attracting under the dynamics of FTRL. This result has significant implications for predicting the outcome of a learning process, as it shows unequivocally that only strict (and hence pure) Nash equilibria can emerge as stable limit points. To end on a positive note, we present structural families of games where convergence is possible, providing last-iterate convergence rates, as well as new methods inspired by other areas such as control theory and planning.

**April 27 [3PM EST]:** **Themis Gouleakis** (National University of Singapore).

**Fall 2021:**

**Sep 3 [10AM EST]:** **Brian Bullins** (Toyota Technological Institute at Chicago). *Title:* A Stochastic Newton Algorithm for Distributed Convex Optimization.

*Abstract:* We propose and analyze a stochastic Newton algorithm for distributed convex optimization. At the heart of our approach is recent work showing that quadratic objectives can be optimized to high accuracy using a parallel algorithm with only a single round of communication. Our algorithm expresses the Newton update as the solution to a quadratic problem which we optimize using stochastic gradients and stochastic Hessian-vector products for the objective, both of which can typically be computed efficiently. We analyze our method for quasi-self-concordant objectives (e.g., logistic regression), and demonstrate that it can in some instances achieve faster convergence rates than comparable first-order methods while requiring less communication and a similar amount of computation.
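The structural idea — touching the Hessian only through Hessian-vector products while solving the quadratic model — can be sketched serially and deterministically. The finite-difference oracles and all constants below are illustrative assumptions, not the paper's distributed algorithm:

```python
def grad(f, x, h=1e-6):
    """Central-difference gradient (an illustrative stand-in for the
    stochastic gradient oracle)."""
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hvp(f, x, v, h=1e-4):
    """Hessian-vector product via a directional difference of gradients:
    H v is approximately (grad f(x + h v) - grad f(x - h v)) / (2h)."""
    gp = grad(f, [xi + h * vi for xi, vi in zip(x, v)])
    gm = grad(f, [xi - h * vi for xi, vi in zip(x, v)])
    return [(a - b) / (2 * h) for a, b in zip(gp, gm)]

def newton_step(f, x, inner_iters=200, lr=0.1):
    """One (approximate) Newton update: minimize the quadratic model
    q(s) = s.g + 0.5 s.Hs by gradient descent, touching H only through
    Hessian-vector products."""
    g = grad(f, x)
    s = [0.0] * len(x)
    for _ in range(inner_iters):
        qgrad = [hv + gi for hv, gi in zip(hvp(f, x, s), g)]  # grad q(s) = Hs + g
        s = [si - lr * qi for si, qi in zip(s, qgrad)]
    return [xi + si for xi, si in zip(x, s)]

# On a quadratic, a single Newton step jumps (numerically) to the minimizer.
f = lambda z: (z[0] - 1) ** 2 + 2 * (z[1] + 3) ** 2
x_new = newton_step(f, [10.0, 10.0])
```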

**Sep 10 (3PM EST, joint with Michigan):** **Karthik C. S.** (Rutgers University). *Title:* Reversing Color Coding.

*Abstract:* In computational complexity it is often easier to prove hardness results for a colored version of a combinatorial or graph-theoretic problem than for its uncolored counterpart. Moreover, one can typically reduce from the uncolored version of a problem to its colored counterpart by a straightforward application of the celebrated color coding technique. Is the reduction in the reverse direction also possible? In some interesting cases, such as the parameterized set intersection problem, such a reduction is highly non-trivial, and this will be the focus of this talk.

Joint work with Boris Bukh and Bhargav Narayanan.
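For readers unfamiliar with the forward direction, here is the classic color coding algorithm for detecting a simple k-vertex path: randomly k-color the vertices and search for a *colorful* path by dynamic programming over color subsets. The talk is about when this kind of reduction can be reversed; the sketch below is the standard technique, with illustrative trial counts:

```python
import random

def has_k_path(adj, k, trials=300, seed=0):
    """Color coding (Alon-Yuster-Zwick) for detecting a simple path on
    k vertices: randomly k-color the vertices, then use dynamic
    programming over color subsets to look for a colorful path. Each
    trial succeeds with probability at least k!/k^k, so O(e^k log n)
    trials suffice with high probability; there are no false positives."""
    rng = random.Random(seed)
    n = len(adj)
    full = (1 << k) - 1
    for _ in range(trials):
        color = [rng.randrange(k) for _ in range(n)]
        # reach[v]: color sets (bitmasks) of colorful paths ending at v
        reach = [{1 << color[v]} for v in range(n)]
        for _ in range(k - 1):
            new = [set() for _ in range(n)]
            for v in range(n):
                bit = 1 << color[v]
                for u in adj[v]:
                    for mask in reach[u]:
                        if not mask & bit:
                            new[v].add(mask | bit)
            reach = new
        if any(full in r for r in reach):
            return True
    return False

# A 5-vertex path contains a 4-vertex path; a star does not.
path_graph = [[1], [0, 2], [1, 3], [2, 4], [3]]
star_graph = [[1, 2, 3], [0], [0], [0]]
```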

**Sep 17 (3PM EST):** **Ellen Vitercik** (UC Berkeley). **Title:** How much data is sufficient to learn high-performing algorithms?

*Abstract:* Algorithms often have tunable parameters that have a considerable impact on their runtime and solution quality. A growing body of research has demonstrated that *data-driven algorithm design* can lead to significant gains in runtime and solution quality. Data-driven algorithm design uses a training set of problem instances sampled from an unknown, application-specific distribution and returns a parameter setting with strong average performance on the training set. We provide a broadly applicable theory for deriving generalization guarantees for data-driven algorithm design, which bound the difference between the algorithm’s expected performance and its average performance over the training set. The challenge is that for many combinatorial algorithms, performance is a volatile function of the parameters: slightly perturbing the parameters can cause a cascade of changes in the algorithm’s behavior. Prior research has proved generalization bounds by employing case-by-case analyses of parameterized greedy algorithms, clustering algorithms, integer programming algorithms, and selling mechanisms. We formalize a unifying structure which we use to prove very general guarantees, yet we recover the bounds from prior research. Our guarantees apply whenever an algorithm’s performance is a piecewise-constant, -linear, or—more generally—piecewise-structured function of its parameters. As we demonstrate, our theory also implies novel bounds for dynamic programming algorithms used in computational biology and voting mechanisms from economics. This talk is based on joint work with Nina Balcan, Dan DeBlasio, Travis Dick, Carl Kingsford, and Tuomas Sandholm.

**Sep 24 (3PM EST, joint with Michigan):**

**Ray Li** (Stanford University). *Title:* The zero-rate threshold of adversarial bit-deletions is less than 1/2.

*Abstract:* Error correcting codes protect data from noise. Deletion errors are pervasive, yet codes correcting deletions are poorly understood. I will discuss recent work that answers a basic question about deletion codes: can binary codes of non-vanishing rate correct a worst-case deletion rate approaching 1/2? (A deletion rate of 1/2 is a natural limit because of a trivial argument.) We show that the answer is no: any binary code correcting a worst-case deletion rate of 0.49…9 must have rate asymptotically approaching zero. This is also a combinatorial result about longest common subsequences: we show that any set of at least 2^{polylog N} binary strings of length N must contain two strings c and c’ whose longest common subsequence has length at least 0.50…01N. I will also discuss our techniques, which include string regularity arguments and a structural lemma that classifies binary strings by their oscillation patterns. Based on joint work with Venkatesan Guruswami and Xiaoyu He.
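The combinatorial quantity at the heart of the result is the longest common subsequence, computable by the classic dynamic program (two strings sharing a long common subsequence cannot both survive the same adversarial deletions):

```python
def lcs_length(s, t):
    """Classic O(|s||t|) dynamic program for the length of the longest
    common subsequence, keeping only one row of the table at a time."""
    prev = [0] * (len(t) + 1)
    for a in s:
        cur = [0]
        for j, b in enumerate(t, 1):
            cur.append(prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]
```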

**Oct 1 [11AM EST]:** **Santhoshini Velusamy** (Harvard University). *Title:* Approximability of CSPs with linear sketches.

*Abstract:* The class of constraint satisfaction problems (CSPs) includes but is not limited to optimization problems such as Max-CUT, Max-DICUT, Max-k-SAT, Max-*q*-Coloring, and Unique Games. In this talk, I will describe our recent results giving new upper and lower bounds on the approximability of every CSP in the single-pass streaming setting. In particular, for sketching algorithms, we prove the following dichotomy theorem: Any CSP is either solvable in polylogarithmic space or requires at least sqrt(n) space. We also prove stronger streaming lower bounds for some CSPs, significantly generalizing the previous works on Max-CUT. I will conclude with some interesting open problems in this direction. No background in streaming, sketching or constraint satisfaction problems will be needed to enjoy this talk. Based on joint works with Chi-Ning Chou, Alexander Golovnev, Noah Singer, Madhu Sudan, and Ameya Velingker.
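For contrast with the streaming lower bounds, the folklore offline greedy for Max-CUT — place each vertex on the side that cuts more of its already-placed neighbors — guarantees at least half the edges but keeps the whole graph in memory:

```python
def greedy_maxcut(n, edges):
    """Process vertices in order, placing each on the side that cuts
    more of its already-placed neighbors. Every placement cuts at
    least half of the edges to earlier vertices, so the final cut has
    at least |E|/2 edges -- but the graph must be stored in full."""
    side = {}
    for v in range(n):
        to_one = sum(1 for u, w in edges
                     if (u == v and side.get(w) == 1) or (w == v and side.get(u) == 1))
        to_zero = sum(1 for u, w in edges
                      if (u == v and side.get(w) == 0) or (w == v and side.get(u) == 0))
        # placing v on side 0 cuts neighbors on side 1, and vice versa
        side[v] = 0 if to_one >= to_zero else 1
    value = sum(1 for u, w in edges if side[u] != side[w])
    return side, value

# 4-cycle: the greedy order happens to find the optimal cut of all 4 edges.
_, cut_c4 = greedy_maxcut(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
_, cut_triangle = greedy_maxcut(3, [(0, 1), (1, 2), (2, 0)])
```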

**Oct 8 (3PM EST, joint with Michigan):**

**Alexandros Hollender** (University of Oxford). *Title:* The Complexity of Gradient Descent: CLS = PPAD ∩ PLS.

*Abstract:* We study search problems that can be solved by performing Gradient Descent on a bounded convex polytopal domain and show that this class is equal to the intersection of two well-known classes: PPAD and PLS. As our main underlying technical contribution, we show that computing a Karush-Kuhn-Tucker (KKT) point of a continuously differentiable function over the domain [0,1]^2 is PPAD ∩ PLS-complete. This is the first natural problem to be shown complete for this class. Our results also imply that the class CLS (Continuous Local Search) – which was defined by Daskalakis and Papadimitriou as a more “natural” counterpart to PPAD ∩ PLS and contains many interesting problems – is itself equal to PPAD ∩ PLS.
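The dynamics in question are easy to state: projected gradient descent on the box, whose fixed points are exactly the KKT points of the constrained problem. A toy run on [0,1]^2 with an illustrative objective (the hardness, of course, concerns succinctly represented functions):

```python
def projected_gd(grad, x0, eta=0.05, iters=2000):
    """Projected gradient descent on the box [0,1]^2: step along the
    negative gradient, then clamp back into the box. Its fixed points
    are exactly the KKT points of the constrained problem."""
    clamp = lambda t: min(1.0, max(0.0, t))
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [clamp(xi - eta * gi) for xi, gi in zip(x, g)]
    return x

# f(x, y) = (x - 0.3)^2 + (y + 0.2)^2 has its unconstrained minimum at
# (0.3, -0.2), so the KKT point on the box is (0.3, 0.0), with the
# constraint y >= 0 active.
grad_f = lambda x: [2 * (x[0] - 0.3), 2 * (x[1] + 0.2)]
x = projected_gd(grad_f, [0.9, 0.9])
```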

**Oct 22 (3PM EST, joint with Michigan):**

**Aris Filos-Ratsikas** (University of Liverpool). *Title:* On the Complexity of Consensus-Halving and Necklace Splitting.

*Abstract:* The Necklace Splitting problem and its continuous counterpart, the Consensus-Halving problem, are classical problems in combinatorics, with applications to fair division and social choice theory. A distinctive characteristic of these problems is that they are total, i.e., they always have a solution, but it was not known until recently whether such a solution can be found efficiently. In this talk, I will present results from a series of recent papers (from 2018 to 2021) on the computational complexity of these problems. The talk will also provide a gentle introduction to the computational subclasses of the class TFNP, the class of total search problems for which a solution can be verified in polynomial time.

**Oct 29 (11AM EST):** **Divyarthi Mohan** (Tel-Aviv University). **Title:** Simplicity and Optimality in Multi-Item Auctions.

*Abstract:* Designing mechanisms to maximize revenue is a fundamental problem in mathematical economics and has various applications like online ad auctions and spectrum auctions. Unfortunately, optimal auctions for selling multiple items can be unreasonably complex and computationally intractable. In this talk, we consider a revenue-maximizing seller with n items facing a single unit-demand buyer. Our work shows that simple mechanisms can achieve almost optimal revenue. We approach the tradeoffs of simplicity formally through the lens of computation and menu size. Our main result provides a mechanism that gets a (1 − ε)-approximation to the optimal revenue in time quasi-polynomial in n and has quasi-polynomial (symmetric) menu complexity. Joint work with Pravesh Kothari, Ariel Schvartzman, Sahil Singla, and Matt Weinberg.

**Nov 8 [10AM EST]:** **Georgios Piliouras** (Singapore University of Technology and Design). *Title:* Towards a Unified Theory of Learning in Games: Chaos, Anarchy, Regret & Equilibration.

*Abstract:* We examine some classic questions in game theory and online learning. How do standard regret-minimizing dynamics such as multiplicative weights update, online gradient descent, and follow-the-regularized-leader behave when applied in games? The standard textbook answer is that these dynamics converge in a time-average sense to weak notions of equilibria. We discuss weaknesses of such results in practice and turn our focus towards understanding the day-to-day behavior instead. In the seminal class of zero-sum games, continuous-time dynamics “cycle” around the equilibrium, whereas discrete-time algorithms can behave in all possible ways (equilibration, recurrence/cycles, or unstable chaos) depending only on minor changes. In the standard class of potential/congestion games, such dynamics experience phase transitions from stability to chaos as the total system demand increases. Such instabilities lead to high social inefficiency even in games where the Price of Anarchy is equal to one (i.e., where all equilibria are optimal). Our analysis draws on a combination of distinct tools: optimization theory (regret analysis), dynamical systems (chaos theory), and game theory (equilibria, Price of Anarchy).
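
As a tiny self-contained illustration of the time-average vs. day-to-day distinction (my own sketch; the matching-pennies payoffs, step size, and starting weights are assumed examples, not from the talk): both players of a zero-sum game run multiplicative weights, and while the iterates orbit the unique equilibrium (1/2, 1/2) rather than settling on it, their time averages still approach it.

```python
import math

# Matching pennies: row receives A[i][j], column receives -A[i][j].
A = [[1, -1], [-1, 1]]
eta = 0.01                          # fixed step size (an assumption)
wx, wy = [1.2, 1.0], [1.0, 1.2]     # start slightly off-equilibrium
avg_x = avg_y = 0.0                 # running averages of P(action 0)
T = 5000
for t in range(1, T + 1):
    x = wx[0] / (wx[0] + wx[1])     # current mixed strategies
    y = wy[0] / (wy[0] + wy[1])
    avg_x += (x - avg_x) / t
    avg_y += (y - avg_y) / t
    px = [A[i][0] * y + A[i][1] * (1 - y) for i in range(2)]     # row payoffs
    py = [-(A[0][j] * x + A[1][j] * (1 - x)) for j in range(2)]  # column payoffs
    wx = [wx[i] * math.exp(eta * px[i]) for i in range(2)]       # MWU step
    wy = [wy[j] * math.exp(eta * py[j]) for j in range(2)]

print(avg_x, avg_y)  # both near 0.5, the unique Nash equilibrium
```

Plotting the per-round (x, y) instead of the averages shows the slow outward spiral around (0.5, 0.5) that the abstract alludes to.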

**Nov 12 (3PM EST, joint with Michigan):**

**Mark Braverman** (Princeton University).

*Optimization-friendly generic mechanisms without money.*

*Abstract:* The goal of this paper is to develop a generic framework for converting modern optimization algorithms into mechanisms where inputs come from self-interested agents. We focus on aggregating preferences from n players in a context without money. Special cases of this setting include voting, allocation of items by lottery, and matching. Our key technical contribution is a new meta-algorithm we call \apex (Adaptive Pricing Equalizing Externalities). The framework is sufficiently general to be combined with any optimization algorithm that is based on local search. We outline an agenda for studying the algorithm’s properties and its applications. As a special case of applying the framework to the problem of one-sided assignment with lotteries, we obtain a strengthening of the 1979 result by Hylland and Zeckhauser on allocation via a competitive equilibrium from equal incomes (CEEI). The [HZ79] result posits that there is a (fractional) allocation and a set of item prices such that the allocation is a competitive equilibrium given the prices. We further show that there is always a reweighting of the players’ utility values such that running unit-demand VCG with the reweighted utilities leads to HZ equilibrium prices. Interestingly, not all HZ competitive equilibria come from VCG prices. As part of our proof, we re-prove the [HZ79] result using only Brouwer’s fixed point theorem (and not the more general Kakutani’s theorem). This may be of independent interest.

**Nov 19 (11:30AM EST):** **Mahsa Eftekhari** (UC Davis). *Title:* A time and space optimal stable population protocol solving exact majority.

*Abstract:* Population protocols are a model of distributed computing appropriate for modeling well-mixed chemical reaction networks and other physical systems where agents exchange information in pairwise interactions but have no control over their schedule of interaction partners. We study the *majority* problem: determining in an initial population of n agents, each with one of two opinions A or B, whether there are more A, more B, or a tie. A *stable* protocol solves this problem with probability 1 by eventually entering a configuration in which all agents agree on a correct consensus decision of A, B, or T, from which the consensus cannot change. We describe a protocol that solves this problem using O(log n) states (log log n + O(1) bits of memory) and optimal expected time O(log n).
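
For intuition, here is a simulation sketch of the classic four-state exact-majority protocol (not the O(log n)-state protocol of the talk, and with ties left unhandled): strong opinions cancel in pairs, and surviving strong agents convert weak ones, so a strict initial majority wins with probability 1.

```python
import random

def interact(u, v):
    if (u, v) == ('A', 'B'): return ('a', 'b')   # opposite strongs cancel
    if u == 'A' and v == 'b': return ('A', 'a')  # strong converts weak
    if u == 'B' and v == 'a': return ('B', 'b')
    return (u, v)

def run_majority(n_a, n_b, seed=0, max_steps=500_000):
    rng = random.Random(seed)
    agents = ['A'] * n_a + ['B'] * n_b
    n = len(agents)
    for _ in range(max_steps):
        i, j = rng.sample(range(n), 2)   # uniform random scheduler
        # apply the rule in both orderings, since the rules are written one-way
        agents[i], agents[j] = interact(agents[i], agents[j])
        agents[j], agents[i] = interact(agents[j], agents[i])
        opinions = {s.upper() for s in agents}
        if opinions in ({'A'}, {'B'}):   # consensus reached and stable
            return opinions.pop()
    return None

print(run_majority(12, 8))  # 'A'
```

The quantity #A − #B among strong agents is invariant, which is why the correct answer survives all the cancellations; the talk's protocol achieves the same guarantee exponentially faster in state count and time.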

**Nov 22 (3:30PM EST):** **Gesualdo Scutari** (Purdue). In person (Lawson 3102). *Title:* Bringing Statistical Thinking in Distributed Optimization: Vignettes from Statistical Inference over Networks.

*Abstract:* There is growing interest in solving large-scale statistical machine learning problems over decentralized networks, where data are distributed across the nodes of the network and no centralized coordination is present (we term these systems “mesh” networks). Modern massive datasets create a fundamental problem at the intersection of the computational and statistical sciences: how to provide guarantees on the quality of statistical inference given bounds on computational resources, such as time and communication effort. While statistical-computational tradeoffs have been largely explored in the centralized setting, our understanding over mesh networks is limited: (i) distributed schemes designed for, and performing well in, the classical low-dimensional regime can break down in the high-dimensional case; and (ii) existing convergence studies may fail to predict algorithmic behaviors, and some are in fact contradicted by experiments. This is mainly because the majority of distributed algorithms have been designed and studied only from the optimization perspective, lacking the statistical dimension. This talk will discuss some vignettes from high-dimensional statistical inference suggesting new analyses that aim to bring statistical thinking into distributed optimization.

**Dec 10 (3PM EST, joint with Michigan):**

**Suprovat Ghoshal** (University of Michigan).

*A Characterization of Approximability for Biased CSPs.*

*Abstract:* A $\mu$-biased Max-CSP instance with predicate $\psi:\{0,1\}^r \to \{0,1\}$ is an instance of the Constraint Satisfaction Problem (CSP) where the objective is to find a labeling of relative weight at most $\mu$ which satisfies the maximum fraction of constraints. Biased CSPs are versatile and express several well-studied problems such as Densest-$k$-Sub(Hyper)graph and SmallSetExpansion.

In this work, we explore the role played by the bias parameter $\mu$ on the approximability of biased CSPs. We show that the approximability of such CSPs can be characterized (up to loss of factors of arity $r$) using the bias-approximation curve of Densest-$k$-SubHypergraph (DkSH). In particular, this gives a tight characterization of predicates that admit approximation guarantees that are independent of the bias parameter $\mu$.

Motivated by the above, we give new approximation and hardness results for DkSH. In particular, assuming the Small Set Expansion Hypothesis (SSEH), we show that DkSH with arity $r$ and $k = \mu n$ is NP-hard to approximate to a factor of $\Omega(r^3\mu^{r-1}\log(1/\mu))$ for every $r \geq 2$ and $\mu < 2^{-r}$. We also give a $O(\mu^{r-1}\log(1/\mu))$-approximation algorithm for the same setting. Our upper and lower bounds are tight up to constant factors, when the arity $r$ is a constant, and in particular, imply the first tight approximation bounds for the Densest-$k$-Subgraph problem in the linear bias regime. Furthermore, using the above characterization, our results also imply matching algorithms and hardness for every biased CSP of constant arity.

## Spring 2020 Schedule:

**February 5:** **Shubhang Kulkarni** (Purdue). *Algorithmic Approximation Techniques for Directed Orienteering and Directed Steiner Trees*. Abstract below.

**February 12:** **Minshen Zhu** (Purdue). *Detecting Multilinear Terms in Polynomials and Applications to Finding Connected Subgraphs.* Abstract below.

**February 19:** *No speaker.*

**February 26:** **Kent Quanrud** (Purdue). *A Linear-time Algorithm for Finding a Sparse k-connected Spanning Subgraph of a k-connected Graph.* Abstract below.

**March 4:** **Tamalika Mukherjee** (Purdue). *Fine-grained complexity of SVP.* Abstract below.

**March 11:** *No speaker (Spring Break)*

**April 8:** **Shai Ben-David** (University of Waterloo).

**April 17 @ 1pm EST:** **Nima Anari** (Stanford University).

## Fall 2019 Schedule:

**August 29: Arturs Backurs** (Toyota Technological Institute at Chicago). *Efficient Density Evaluation for Smooth Kernels*. Abstract below.

**September 3**: **Tamal Dey** (Ohio State University). *Certified Algorithms in Topological Shape and Data Analysis*. Abstract below.

**September 10: Akash Kumar** (Purdue). *Max Cut, SDPs and Sum-of-Squares.* Abstract below.

**September 18 (Wednesday): Minshen Zhu** (Purdue). *High-Dimensional Expanders and Applications.* Abstract below.

**September 25 (Wednesday): Shai Vardi** (Purdue). *How to Hire Secretaries with Stochastic Departures*. 1:00PM at Room 3102. Abstract below.

**October 4 (Friday): Kent Quanrud.** *Graph partitioning using single commodity flows* (by Khandekar, Rao, Vazirani). 2:00 PM, LWSN 3102.

**October 9** **(Wednesday)**: **Steve Hanneke** (Toyota Technological Institute at Chicago)

**October 17:** **Alex Block**. *(Zero-Knowledge) Interactive Proofs (Survey Talk)*. 1:30PM to 2:30 PM in LWSN 3102. Abstract below.

**October 24:** **Tamalika Mukherjee**. *Lattice Reduction for Modules, or How to Reduce ModuleSVP to ModuleSVP*. 1:30 PM to 2:30 PM in LWSN 3102. Abstract below.

**October 31:** **Young-San Lin**. *Approximate Welfare Maximization in Combinatorial Auctions (Survey Talk).* 1:30 PM to 2:30 PM in LWSN 3102. Abstract below.

**November 7:** **Adrian Vladu** (Boston University). *Improved Convergence for L∞ and L1 Regression via Iteratively Reweighted Least Squares.* 1:30 PM to 2:30 PM in LWSN 3102. Abstract below.

**November 14: Visu Makam** (Institute for Advanced Study). *Barriers for lifting techniques for tensor rank lower bounds*. 1:30 to 2:30 PM in LWSN 3102. Abstract below.

* Visu is also speaking in the Algebraic Geometry Seminar (in the math department) on November 13 on the topic “Singular tuples of matrices is not a null cone”.

**November 20: Lance Fortnow** (Illinois Institute of Technology). *Distinguished Theory Seminar*, 11:30 AM to 12:30 PM in LWSN 1142.

**December 5:** **Jugal Garg** (UIUC). TBD. 1:30 PM to 2:30 PM in LWSN 3102.

# Abstracts:

**March 4: Tamalika Mukherjee**. *Fine-grained complexity of SVP*. 1:00pm to 2:00pm in HAAS 175.

Abstract: The Shortest Vector Problem (SVP) in lattices has been a long-standing problem of interest in the mathematics community and more recently has gained a lot of interest in the computer science community due to its role in lattice-based cryptographic constructions. Loosely speaking, the security of many of these cryptographic primitives is based on the worst-case hardness of SVP, and these primitives are likely to be used on a mass scale in the near future.

In this talk we will survey some initial hardness results of SVP and then mainly focus on the fine-grained hardness results which were recently shown in a STOC 2018 paper titled “(Gap/S)ETH Hardness of SVP”, by Divesh Aggarwal and Noah Stephens-Davidowitz.

**February 26:** **Kent Quanrud**. *A Linear-time Algorithm for Finding a Sparse k-connected Spanning Subgraph of a k-connected Graph*. 1:00pm to 2:00pm in HAAS 175.

Abstract: In this talk, Kent presents the algorithm introduced in the work of Nagamochi and Ibaraki (Algorithmica 1992). The original abstract is below.

We show that any k-connected graph G = (V, E) has a sparse k-connected spanning subgraph G’ = (V, E’) with |E’| = O(k |V|) by presenting an O(|E|)-time algorithm to find one such subgraph, where connectivity stands for either edge-connectivity or node connectivity. By using this algorithm as preprocessing, the time complexities of some graph problems related to connectivity can be improved. For example, the current best time bound O( max{ k^2 |V|^(1/2), k |V| } |E| ) to determine whether node-connectivity \kappa(G) of a graph G = (V, E) is larger than a given integer k or not can be reduced to O( max{ k^3 |V|^(3/2), k^2 |V|^2 } ).
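
The certificate behind this result can be sketched in a few lines (a quadratic-time toy version using repeated spanning forests; the paper's contribution is computing a certificate of this kind in O(|E|) time via scan-first search):

```python
# Partition E into maximal spanning forests F1, F2, ..., Fk; Nagamochi and
# Ibaraki show the union of the first k forests is a k-connected spanning
# subgraph with at most k(|V|-1) edges.
def spanning_forest(n, edges):
    parent = list(range(n))      # union-find over the vertices
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forest, rest = [], []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v))
        else:
            rest.append((u, v))
    return forest, rest

def sparse_certificate(n, edges, k):
    certificate = []
    for _ in range(k):
        forest, edges = spanning_forest(n, edges)
        certificate += forest
    return certificate

# Demo: K6 has 15 edges; a 2-connectivity certificate needs at most 2*5 = 10.
n = 6
k6 = [(u, v) for u in range(n) for v in range(u + 1, n)]
cert = sparse_certificate(n, k6, 2)
print(len(cert))
```

The edge bound is immediate (each forest has at most |V|−1 edges); the non-trivial part of the theorem is that discarding everything outside the first k forests preserves k-connectivity.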

**February 12: Minshen Zhu**. *Detecting Multilinear Terms in Polynomials and Applications to Finding Connected Subgraphs*. 1:00pm to 2:00pm in HAAS 175.

Abstract: Suppose we have a polynomial P(x_1, x_2, …, x_n) of total degree k (over some finite field) which is implicitly defined by an arithmetic circuit of size poly(n). We are interested in the following multilinear-term detection problem: in the sum-of-products expansion of P, is there a monomial involving at most one copy of each variable? Such a monomial is called a multilinear term. In this talk I will introduce a 2^k · poly(n)-time randomized algorithm for this problem, and show that such an algorithm implies fixed-parameter tractable (FPT) algorithms for k-Path and k-Tree, and more generally for problems of finding connected subgraphs of size k in a given graph.
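
As a hands-on companion, here is a sketch of color coding, a different classic 2^O(k)·poly(n) randomized FPT technique for k-Path (not the algebraic multilinear-detection method of the talk):

```python
# Color vertices with k colors at random; a DP over color bitmasks finds a
# walk using each color at most once ("colorful"), which forces the walk to
# be a simple path.  A fixed simple k-path is colorful with probability at
# least k!/k^k per trial, so O(e^k) trials suffice with high probability.
import random

def has_k_path(adj, k, trials=300, seed=0):
    rng = random.Random(seed)
    vertices = list(adj)
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in vertices}
        # dp[v] = color bitmasks realizable by colorful paths ending at v
        dp = {v: {1 << color[v]} for v in vertices}
        for _ in range(k - 1):
            new = {v: set() for v in vertices}
            for v in vertices:
                bit = 1 << color[v]
                for u in adj[v]:
                    for mask in dp[u]:
                        if not mask & bit:       # extend only if v's color is new
                            new[v].add(mask | bit)
            dp = new
        if any(dp[v] for v in vertices):
            return True
    return False

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path on 4 vertices
print(has_k_path(path4, 3))  # True
print(has_k_path(path4, 5))  # False: only 4 vertices exist
```

The algebraic method in the talk plays an analogous role: random evaluations of the circuit make the multilinear (i.e., "colorful") terms survive while the squared terms cancel.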

**February 5:** **Shubhang Kulkarni**. *Algorithmic Approximation Techniques for Directed Orienteering and Directed Steiner Trees*. 1:00pm to 2:00pm in HAAS 175.

Abstract: Several important combinatorial optimization problems may be formulated as finding certain paths, tours or constrained subgraphs in a graph. Famous problems such as the traveling salesman problem, Chinese postman problem, minimum spanning tree, etc., aim to minimize the “tour” length subject to certain constraints on the nodes visited by the tour. Another class of problems is the prize-collecting traveling salesman/repairman problem which seeks to maximize some function (reward) of the nodes visited during the tour subject to constraints on the tour taken.

This talk will be a survey talk which will introduce a few interesting algorithmic techniques that have been used to tackle problems of the latter flavor. In particular, we will focus on the orienteering problem (and a few generalizations): given a directed edge-weighted graph, source-sink vertices S-T, and a path budget, find an S-T path which visits a maximum number of nodes subject to the weight of the path being within the budget.

I will discuss a surprisingly simple quasi-polynomial algorithm inspired by Savitch’s theorem that achieves a logarithmic approximation to this problem, even when the objective function is any submodular function on the subset of nodes visited. I will further show how to generalize this algorithm to handle the rooted directed tree-orienteering problem, which aims to find a rooted tree instead of an S-T path in the above problem formulation. Finally, time permitting, I hope to discuss connections of these techniques to multi-trees, which are combinatorial structures that have recently been used for quasi-polynomial time approximation algorithms for variants of the directed Steiner Tree problem.

**November 20: Lance Fortnow** **(Distinguished Theory Talk)**. Title: *The Early Days of Interactive Proofs.*

Abstract: Consider the clique problem: given a graph, is there a collection of vertices of a given size that are all connected to each other? A powerful wizard could convince you that a large clique exists: just provide the set of vertices, and you can check that all pairs are connected. This notion was made formal in the class NP, the basis of the famous P vs. NP problem.

What if the wizard wanted to convince you that no clique existed? You would seemingly need to check all large subsets. Thirty years ago, Lund, Fortnow, Karloff and Nisan showed that a wizard can convince a mere mortal, if the mortal can ask random questions. Shamir shortly thereafter extended these results to everything computable in a reasonable amount of memory, the IP = PSPACE result.

This talk will go over the early history of IP = PSPACE and how then new email technologies played a critical role. We then give an overview of how interactive proofs and related probabilistically checkable proofs have transformed how we think about efficient computation.

**November 14: Visu Makam** (Institute for Advanced Study). *Barriers for lifting techniques for tensor rank lower bounds*.

One common family of lower-bound techniques, ancient in origin but increasingly prominent with recent successes (and the realization that past methods fit this mold), is lifting (or escalation). Here one aims to derive a lower bound for some strong model via a reduction to proving a related lower bound on a weaker model.

The two main complexity measures that we will consider are tensor rank (for tensors) and Waring rank (for polynomials). We will show strong barriers to lifting lower bounds from lower degree tensors/polynomials to higher degree tensors/polynomials, significantly generalizing the result in [EGOW18] which gives barriers to lifting matrix rank lower bounds to tensors/polynomials. The key new ingredient is a statement in algebraic geometry very similar to the implicit function theorem that we will explain. This is joint work with Ankit Garg, Rafael Oliveira and Avi Wigderson.

**November 7:** **Adrian Vladu** (Boston University). *Improved Convergence for L∞ and L1 Regression via Iteratively Reweighted Least Squares.* 1:30 PM to 2:30 PM in LWSN 3102.

The iteratively reweighted least squares method (IRLS) is a popular technique used in practice for solving regression problems. Various versions of this method have been proposed, but theoretical analyses usually fail to capture their good practical performance.

I will present a simple and natural version of IRLS for solving L∞ and L1 regression, which provably converges to a (1+ε)-approximate solution in O(m^{1/3} log(1/ε)/ε^{2/3} + log(m/ε)/ε^2) iterations, where m is the number of rows of the input matrix. This running time is independent of the conditioning of the input, and up to poly(1/ε) beats the O(m^{1/2}) iterations required by standard interior-point methods.

This improves upon the highly specialized algorithms of Chin et al. (ITCS ’12), and Christiano et al. (STOC ’11), and yields a truly efficient natural algorithm for the slime mold dynamics (Straszak-Vishnoi, SODA ’16, ITCS ’16, ITCS ’17).

I will also highlight a few connections to packing/covering LP solvers and higher order optimization methods.

Arxiv link: https://arxiv.org/pdf/1902.06391.pdf, appears in ICML 2019.
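
To make the IRLS idea concrete, here is a minimal textbook sketch for one-dimensional L1 regression with a fixed smoothing floor; this is the generic scheme, not the reweighting variant analyzed in the talk:

```python
# IRLS for min_a sum_i |y_i - a*x_i|: repeatedly solve a weighted
# least-squares problem whose weights are 1/|residual|, so large residuals
# are progressively down-weighted until the L2 fit mimics the L1 fit.
def irls_l1(xs, ys, iters=200, eps=1e-9):
    a = sum(ys) / sum(xs)  # crude initialization
    for _ in range(iters):
        w = [1.0 / max(abs(y - a * x), eps) for x, y in zip(xs, ys)]
        # closed-form weighted least squares in one dimension
        a = sum(wi * x * y for wi, x, y in zip(w, xs, ys)) / \
            sum(wi * x * x for wi, x in zip(w, xs))
    return a

# Two points at y=1 and one outlier at y=10 (all with x=1): the L1
# minimizer is a = 1, unlike the least-squares fit a = 4.
print(irls_l1([1, 1, 1], [1, 1, 10]))  # close to 1
```

On this toy instance the error roughly halves per iteration; the point of the paper is to obtain such convergence guarantees for the full matrix setting, independently of conditioning.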

**October 31:** **Young-San Lin**. *Approximate Welfare Maximization in Combinatorial Auctions (Survey Talk)*. 1:30 PM to 2:30 PM in LWSN 3102.

We explore how social welfare can be approximately maximized in combinatorial auctions when goods are indivisible. Each buyer has a valuation function over sets of items; such a function is an object of exponential size, so one must specify how it can be accessed. The types of queries that have been considered are value queries and demand queries.

We will start with a 2-approximation greedy algorithm for submodular valuations with value queries, as a warm-up. We then show how demand queries can be used to solve a relaxed linear program (LP) for welfare maximization. Finally, given a fractional solution to the relaxed LP, we will discuss some rounding techniques for subadditive and fractionally subadditive valuations.
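
The warm-up greedy can be sketched as follows (my own toy instance with coverage valuations, a submodular special case; a value query is just a call to `value`):

```python
# Greedy welfare maximization with value queries: each item goes to the
# buyer whose value would increase the most, a 2-approximation for
# submodular valuations (greedy welfare >= OPT / 2).
from itertools import product

# Each buyer values a bundle as the number of distinct features it covers.
features = {
    'item1': {'f1', 'f2'},
    'item2': {'f2', 'f3'},
    'item3': {'f4'},
}

def value(bundle):  # coverage valuation: size of the union of feature sets
    return len(set().union(*(features[i] for i in bundle)))

def greedy_welfare(items, n_buyers):
    bundles = [set() for _ in range(n_buyers)]
    for item in items:
        gains = [value(b | {item}) - value(b) for b in bundles]  # value queries
        bundles[gains.index(max(gains))].add(item)
    return bundles, sum(value(b) for b in bundles)

items = list(features)
bundles, welfare = greedy_welfare(items, 2)

# Brute-force optimum over all buyer assignments, for comparison.
opt = max(sum(value({i for i, b in zip(items, assign) if b == k})
              for k in range(2))
          for assign in product(range(2), repeat=len(items)))
print(welfare, opt)  # greedy welfare is at least opt / 2
```

Here both buyers share one valuation purely to keep the example short; the greedy guarantee holds for arbitrary submodular valuations, one per buyer.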

**October 24:** **Tamalika Mukherjee**. *Lattice Reduction for Modules, or How to Reduce ModuleSVP to ModuleSVP*. 1:30 PM to 2:30 PM in LWSN 3102.

We show how to generalize lattice reduction algorithms to module lattices in order to reduce γ-approximate ModuleSVP over module lattices with rank k≥2 to γ′-approximate ModuleSVP over module lattices with rank 2≤β≤k. To do so, we modify the celebrated slide-reduction algorithm of Gama and Nguyen to work with module filtrations, a higher-dimensional analogue of the (Z-)basis of a lattice. The particular value of γ that we achieve depends on the underlying number field K, the ring R⊂K, and the embedding (as well as, of course, k and β). However, for reasonable choices of these parameters, the approximation factor that we achieve is surprisingly close to the one achieved by “plain” lattice reduction algorithms, which require an arbitrary SVP oracle in the same dimension.

In other words, we show that ModuleSVP oracles are nearly as useful as SVP oracles for solving approximate ModuleSVP in higher dimensions. Our result generalizes the recent independent result of Lee, Pellet-Mary, Stehlé, and Wallet, which works in the important special case when β=2 and R=OK is the ring of integers of K under the canonical embedding. Indeed, at a high level our reduction can be thought of as a generalization of theirs in roughly the same way that slide reduction generalizes LLL reduction.

**October 17:** **Alex Block**. *(Zero-Knowledge) Interactive Proofs (Survey Talk)*. 1:30PM to 2:30 PM in LWSN 3102.

Abstract: In this talk, we explore the power of interactive proofs and their zero-knowledge variant. The idea of (zero-knowledge) proofs was introduced by Goldwasser, Micali, and Rackoff [GMR89] and has since been the subject of intense study in various fields, including complexity theory and cryptography. An interactive proof is a protocol in which a (computationally unbounded) prover convinces an (efficient) verifier that some statement is true. It is crucial that for any true statement, an honest prover can convince the verifier with high probability, while for any false statement, no cheating prover can convince the verifier that the statement is true. This simple setup admits many powerful results; we’ll touch upon a few of these.

We will also discuss zero-knowledge interactive proofs. An interactive proof is zero-knowledge if (roughly speaking) the only information the verifier learns from viewing the transcript is whether the statement is true. That is, the verifier learns nothing beyond the validity of the statement being proven. We’ll conclude with an examination of some results related to zero-knowledge proofs.

**September 25: Shai Vardi** (Purdue). **Title:** How to Hire Secretaries with Stochastic Departures

We study a generalization of the secretary problem, where decisions do not have to be made immediately upon candidates’ arrivals. After arriving, each candidate stays in the system for some (random) amount of time and then leaves, whereupon the algorithm has to decide irrevocably whether to select this candidate or not. The goal is to maximize the probability of selecting the best candidate overall. We assume that the waiting times are drawn from known distributions.

Our first main result is a characterization of the optimal policy for this setting. We show that when deciding whether to select a candidate, it suffices to know only the time and the number of candidates that have arrived so far. Furthermore, the policy is monotone non-decreasing in the number of candidates seen so far and, under certain natural conditions, monotone non-increasing in the time. Our second main result is that when the number of candidates is large, a single-threshold policy is almost optimal. Joint work with Thomas Kesselheim (University of Bonn) and Alex Psomas (Simons/Purdue CS).
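
For intuition about threshold policies, here is the classic fixed-arrival secretary problem (no departures, so simpler than the talk's setting): reject the first r−1 candidates, then take the first candidate better than everything seen so far; the success probability has a closed form and peaks near r = n/e.

```python
def success_probability(n, r):
    # Classic identity: P(win) = ((r-1)/n) * sum_{j=r}^{n} 1/(j-1) for r >= 2,
    # and 1/n for r = 1 (accept the very first candidate).
    if r == 1:
        return 1.0 / n
    return (r - 1) / n * sum(1.0 / (j - 1) for j in range(r, n + 1))

n = 100
best_r = max(range(1, n + 1), key=lambda r: success_probability(n, r))
print(best_r, success_probability(n, best_r))  # cutoff near n/e, value near 1/e
```

In the talk's setting the cutoff additionally depends on the departure distributions, which is what makes the near-optimality of a single threshold non-trivial.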

Bio: Shai is an Assistant Professor of Management Information Systems at the Krannert School of Management at Purdue University. He received his PhD in Computer Science from Tel Aviv University, and was a Linde Postdoctoral Fellow at SISL at Caltech. His research interests include game theory, fairness and dynamic mechanism design.

**September 18: Minshen Zhu** (Purdue). **Title:** High-Dimensional Expanders and Applications (Survey talk; paper by Nima Anari, Kuikui Liu, Shayan Oveis Gharan and Cynthia Vinzant, STOC 2019).

Abstract: Expander graphs are very useful objects in theoretical computer science, in part because they can be studied from several perspectives. We can view expander graphs as: (1) a sparser version of the complete graph satisfying certain expansion properties, (2) graphs on which random walks mix rapidly, (3) a “pseudorandom” version of random graphs. High-dimensional expanders (HDXs) are high-dimensional counterparts of such objects, usually defined in terms of simplicial complexes, which can roughly be thought of as “expanding hypergraphs”. While ordinary expander graphs admit many (equivalent) characterizations, it is unclear which one should be adopted when generalizing them to hypergraphs. In this talk I will introduce several definitions of HDXs, including link expanders and random-walk expanders, and show some connections between them. I will introduce the so-called “Garland’s method” and use it to prove Oppenheim’s “Trickling Down” Theorem, which relates local expansion to global expansion. If time permits, I will use this result to show that the basis-exchange chain of a matroid mixes rapidly, giving an FPRAS for sampling the bases of a matroid.

**September 10: Akash Kumar** (Purdue). **Title:** Max Cut, SDPs and Sum-of-Squares. (Survey talk).

Abstract: This will mostly be a survey of the literature around Max-Cut, SDPs, and Sum-of-Squares. Max-Cut is a classical problem whose study has led to huge advances in the design of approximation algorithms. Some of the ideas used in the famed Goemans-Williamson rounding (which we will cover) continue to influence recent developments; two examples that come to mind are [Charikar-Makarychev-Makarychev 09, Montanari-Zhou 16]. We will use this as an opportunity to revisit the sum-of-squares literature. We will finish with a discussion of some open problems in the area.

Bio: I am a PhD student in the CS department, advised by Professor Saugata Basu.

**September 3: Tamal Dey** (Ohio State University). **Title:** Certified Algorithms in Topological Shape and Data Analysis.

Abstract: In the past two decades, topological data analysis, an area seeded by computational geometry and topology, has emphasized processing shapes and data from the topological viewpoint in addition to geometry. The understanding of topological structures in the context of computations has resulted in sound algorithms and has also spurred further synergy between mathematics and computations in general. This talk aims to explain this perspective by considering some applications in shape and data analysis, namely, (i) surface/manifold reconstruction, (ii) mesh generation, and (iii) feature extraction by persistent homology. For the first two topics, we will emphasize the role of topology and state some of the key results, while for the third we will go a little deeper to explain how topology can delineate signal from noise more robustly in a scale-invariant manner. The hope is that the talk will further stimulate interest in tying topology and computation together in the larger context of data science.

Bio: Tamal K. Dey is a professor of computer science at The Ohio State University where he is currently serving as the Interim Chair of the CSE department. His research interest includes computational geometry, computational topology and their applications to geometric modeling and data analysis. He finished his PhD from Purdue University in 1991. Prior to joining OSU, he held academic positions at University of Illinois at Urbana Champaign, Indiana University-Purdue University at Indianapolis, Indian Institute of Technology, Kharagpur, India, and Max-Planck Institute, Germany. He has (co)authored two books “Curve and surface reconstruction: Algorithms with Mathematical Analysis” published by Cambridge University Press and “Delaunay Mesh Generation” published by CRC Press. (Co)author of more than 200 scientific articles, Dey is an IEEE and ACM Fellow. He serves in various editorial and executive boards and routinely gives invited lectures at various academic forums. He leads the Jyamiti research group at OSU and heads the recent NSF sponsored TRIPODS Phase I Institute at OSU. Details can be found at https://web.cse.ohio-state.edu/~dey.8.

**August 29: Arturs Backurs** (Toyota Technological Institute at Chicago). **Title**: Efficient Density Evaluation for Smooth Kernels

Abstract: Given a kernel function k and a dataset P (of points from R^d), the kernel density function of P at a point q from R^d is equal to KDF_P(q) := 1/|P| sum_{p in P} k(p,q). Kernel density evaluation has numerous applications in scientific computing, statistics, computer vision, machine learning, and other fields. In all of them it is necessary to evaluate KDF_P(q) quickly, often for many inputs q and large point sets P. In this paper we present a collection of algorithms for efficient KDF evaluation under the assumption that the kernel k is “smooth”, i.e., the value changes at most polynomially with the distance. This assumption is satisfied by several well-studied kernels, including the (generalized) t-student kernel and the rational quadratic kernel. For smooth kernels, we give a data structure that, after O(dn log(Phi n)/eps^2) preprocessing, estimates KDF_P(q) up to a factor of 1+eps in O(d log(Phi n)/eps^2) time, where Phi is the aspect ratio. The log(Phi n) term can be further replaced by log n under an additional decay condition on k, which is satisfied by the aforementioned examples.
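
A naive evaluator of the definition above is immediate (a sketch using the t-student kernel with exponent t = 2 as an assumed example); the point of the paper is to approximate this sum far faster than the direct O(|P|) scan per query:

```python
import math

def t_student_kernel(p, q, t=2):
    # k(p, q) = 1 / (1 + ||p - q||^t), a "smooth" kernel: its value changes
    # at most polynomially with the distance.
    return 1.0 / (1.0 + math.dist(p, q) ** t)

def kdf(P, q, kernel=t_student_kernel):
    # KDF_P(q) = (1/|P|) * sum_{p in P} k(p, q)
    return sum(kernel(p, q) for p in P) / len(P)

P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(kdf(P, (0.0, 0.0)))  # (1 + 1/2 + 1/2) / 3 = 2/3
```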

We further extend the results in two ways. First, we use low-distortion embeddings to extend the results to kernels defined for spaces other than l_2. The key feature of this reduction is that the distortion of the embedding affects only the running time of the algorithm, not the accuracy of the estimation. As a result, we obtain (1+eps)-approximate estimation algorithms for kernels over other l_p norms, Earth-Mover Distance, and other metric spaces. Second, for smooth kernels that are decreasing with distance, we present a general reduction from density estimation to approximate near neighbor in the underlying space. This allows us to construct algorithms for general doubling metrics, as well as alternative algorithms for l_p norms and other spaces.

Joint work with Moses Charikar, Piotr Indyk and Paris Siminelakis.

Bio: Arturs Backurs is a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC). He received his Ph.D. from MIT in 2018 and his bachelor’s degree from the University of Latvia in 2012.

His research area is theoretical computer science, with a main focus on proving conditional lower bounds for problems that are solvable in polynomial time, as well as designing faster algorithms inspired by the hardness results.