results for au:Alman_J in:cs

- In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece-by-piece, our model presents one of the players, Bob, his input piece-by-piece, and has the players Alice and Bob cooperate to compute a result each time before it presents Bob with the next piece. This model has a closer and more natural correspondence to dynamic data structures than the classic communication models do, and hence presents a new perspective on data structures. We first present a lower bound for the online set intersection problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a stronger lower bound. Then we apply the online communication model to data structure lower bounds by studying the Group Range Problem, a dynamic data structure problem. This problem admits an $O(\log n)$-time worst-case data structure. Using online communication complexity, we prove a tight cell-probe lower bound: spending $o(\log n)$ (even amortized) time per operation results in at best an $\exp(-\delta^2 n)$ probability of correctly answering a $(1/2+\delta)$-fraction of the $n$ queries.
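The abstract above mentions that the Group Range Problem admits an $O(\log n)$-time worst-case data structure. A minimal sketch of the standard approach is a segment tree of partial products, assuming (as the name suggests) that the problem maintains an array of group elements under point updates and range-product queries; the class name and group used below are illustrative, not from the paper:

```python
# Sketch of an O(log n)-per-operation structure for a group-range-style
# problem: maintain an array of group elements, support point updates and
# range-product queries. Demonstrated with (Z, +); any associative group
# operation works, including non-abelian ones.

class GroupRange:
    def __init__(self, elems, op, identity):
        self.n = len(elems)
        self.op = op
        self.identity = identity
        # Iterative segment tree: leaves live at indices n .. 2n-1.
        self.tree = [identity] * (2 * self.n)
        self.tree[self.n:2 * self.n] = list(elems)
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = op(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, x):
        """Set element i to x, refreshing O(log n) ancestors."""
        i += self.n
        self.tree[i] = x
        i //= 2
        while i >= 1:
            self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def query(self, lo, hi):
        """Product of elems[lo:hi], touching O(log n) nodes.

        Left and right partial products are accumulated separately so
        the answer respects the group's multiplication order."""
        left, right = self.identity, self.identity
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:
                left = self.op(left, self.tree[lo])
                lo += 1
            if hi & 1:
                hi -= 1
                right = self.op(self.tree[hi], right)
            lo //= 2
            hi //= 2
        return self.op(left, right)
```

The cell-probe lower bound in the abstract says this logarithmic cost per operation cannot be beaten, even on average.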
- Nov 18 2016 cs.CC arXiv:1611.05558v2: We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measures the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds, including: The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the $2^n \times 2^n$ Walsh-Hadamard transform $H_n$ (a.k.a. Sylvester matrices, or the communication matrix of Inner Product mod 2), we show how to modify only $2^{\epsilon n}$ entries in each row and make the rank drop below $2^{n(1-\Omega(\epsilon^2/\log(1/\epsilon)))}$, for all $\epsilon > 0$, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices, via L. Valiant's matrix rigidity approach. We also show non-trivial rigidity upper bounds for $H_n$ with smaller target rank. Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. We show that explicit $n \times n$ Boolean matrices which maintain rank at least $2^{(\log n)^{1-\delta}}$ after $n^2/2^{(\log n)^{\delta/2}}$ modified entries would yield a function lacking sub-quadratic-size $AC^0$ circuits with two layers of arbitrary linear threshold gates. We also prove that explicit 0/1 matrices over $\mathbb{R}$ which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply strong lower bounds for the infamously difficult class $THR\circ THR$.
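The Walsh-Hadamard (Sylvester) matrix central to the rigidity results above has the standard Kronecker-product recursion, and over the reals it has full rank $2^n$; the rigidity question is how many entry changes are needed before that rank can drop substantially. A small construction to make the object concrete:

```python
import numpy as np

# The 2^n x 2^n Walsh-Hadamard (Sylvester) matrix H_n[x, y] = (-1)^{<x,y>},
# built by the standard Kronecker recursion H_n = [[1,1],[1,-1]] (x) H_{n-1}.
# Over the reals it is full rank; the abstract above shows that changing
# only 2^{eps n} entries per row can make the rank drop sharply.

def hadamard(n):
    H = np.array([[1]])
    base = np.array([[1, 1], [1, -1]])
    for _ in range(n):
        H = np.kron(base, H)
    return H
```

The rows of $H_n$ are pairwise orthogonal, so $H_n H_n^T = 2^n I$, which is why the rank is full before any entries are modified.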
- We design new polynomials for representing threshold functions in three different regimes: probabilistic polynomials of low degree, which need far less randomness than previous constructions, polynomial threshold functions (PTFs) with "nice" threshold behavior and degree almost as low as the probabilistic polynomials, and a new notion of probabilistic PTFs where we combine the above techniques to achieve even lower degree with similar "nice" threshold behavior. Utilizing these polynomial constructions, we design faster algorithms for a variety of problems: $\bullet$ Offline Hamming Nearest (and Furthest) Neighbors: Given $n$ red and $n$ blue points in $d$-dimensional Hamming space for $d=c\log n$, we can find an (exact) nearest (or furthest) blue neighbor for every red point in randomized time $n^{2-1/O(\sqrt{c}\log^{2/3}c)}$ or deterministic time $n^{2-1/O(c\log^2c)}$. These also lead to faster MAX-SAT algorithms for sparse CNFs. $\bullet$ Offline Approximate Nearest (and Furthest) Neighbors: Given $n$ red and $n$ blue points in $d$-dimensional $\ell_1$ or Euclidean space, we can find a $(1+\epsilon)$-approximate nearest (or furthest) blue neighbor for each red point in randomized time near $dn+n^{2-\Omega(\epsilon^{1/3}/\log(1/\epsilon))}$. $\bullet$ SAT Algorithms and Lower Bounds for Circuits With Linear Threshold Functions: We give a satisfiability algorithm for $AC^0[m]\circ LTF\circ LTF$ circuits with a subquadratic number of linear threshold gates on the bottom layer, and a subexponential number of gates on the other layers, that runs in deterministic $2^{n-n^\epsilon}$ time. This also implies new circuit lower bounds for threshold circuits. We also give a randomized $2^{n-n^\epsilon}$-time SAT algorithm for subexponential-size $MAJ\circ AC^0\circ LTF\circ AC^0\circ LTF$ circuits, where the top $MAJ$ gate and middle $LTF$ gates have $O(n^{6/5-\delta})$ fan-in.
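For reference, the Offline Hamming Nearest Neighbors problem above is trivially solvable in $\Theta(n^2 d)$ time by scanning every blue point for each red point; the abstract's polynomial-method algorithms beat this quadratic baseline. A sketch of the baseline (not the paper's algorithm; the function name is illustrative):

```python
# Brute-force baseline for Offline Hamming Nearest Neighbors: for each
# red vector, scan all blue vectors and keep the closest. This is the
# Theta(n^2 d) algorithm the probabilistic-PTF results improve on.

def hamming_nearest(red, blue):
    """For each red vector, return (index, distance) of its nearest blue."""
    def dist(u, v):
        return sum(a != b for a, b in zip(u, v))

    out = []
    for u in red:
        best = min(range(len(blue)), key=lambda j: dist(u, blue[j]))
        out.append((best, dist(u, blue[best])))
    return out
```

Swapping `min` for `max` gives the furthest-neighbor variant mentioned in the abstract.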
- We show how to compute any symmetric Boolean function on $n$ variables over any field (as well as the integers) with a probabilistic polynomial of degree $O(\sqrt{n \log(1/\epsilon)})$ and error at most $\epsilon$. The degree dependence on $n$ and $\epsilon$ is optimal, matching a lower bound of Razborov (1987) and Smolensky (1987) for the MAJORITY function. The proof is constructive: a low-degree polynomial can be efficiently sampled from the distribution. This polynomial construction is combined with other algebraic ideas to give the first subquadratic time algorithm for computing a (worst-case) batch of Hamming distances in superlogarithmic dimensions, exactly. To illustrate, let $c(n) : \mathbb{N} \rightarrow \mathbb{N}$. Suppose we are given a database $D$ of $n$ vectors in $\{0,1\}^{c(n) \log n}$ and a collection of $n$ query vectors $Q$ in the same dimension. For all $u \in Q$, we wish to compute a $v \in D$ with minimum Hamming distance from $u$. We solve this problem in $n^{2-1/O(c(n) \log^2 c(n))}$ randomized time. Hence, the problem is in "truly subquadratic" time for $O(\log n)$ dimensions, and in subquadratic time for $d = o((\log^2 n)/(\log \log n)^2)$. We apply the algorithm to computing pairs with maximum inner product, closest pair in $\ell_1$ for vectors with bounded integer entries, and pairs with maximum Jaccard coefficients.
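One algebraic ingredient behind batch Hamming-distance computations of this kind is the identity $d_H(u,v) = |u| + |v| - 2\langle u,v\rangle$ for 0/1 vectors, which turns all query-database distances into a single matrix product. The sketch below shows only this identity, not the paper's subquadratic algorithm (which additionally uses probabilistic polynomials); the function name is illustrative:

```python
import numpy as np

# All-pairs Hamming distances via one matrix product, using
# d_H(u, v) = |u| + |v| - 2 <u, v> for 0/1 vectors. This reduction of
# Hamming distance to inner products is the kind of algebraic step the
# construction above builds on; on its own it is still quadratic.

def all_pairs_hamming(Q, D):
    """Q: m x d and D: n x d 0/1 arrays; returns the m x n distance matrix."""
    Q = np.asarray(Q)
    D = np.asarray(D)
    inner = Q @ D.T  # all m*n inner products in one multiplication
    return Q.sum(1)[:, None] + D.sum(1)[None, :] - 2 * inner
```

Taking a row-wise argmin of the result answers every query's nearest-neighbor question at once, which is why the problem reduces to (rectangular) matrix multiplication.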