# Search SciRate

• We consider a parallel version of a classical Bayesian search problem. $k$ agents are looking for a treasure that is placed in one of the boxes indexed by $\mathbb{N}^+$ according to a known distribution $p$. The aim is to minimize the expected time until the first agent finds it. Searchers run in parallel, and at each time step each searcher can "peek" into one box. A basic family of algorithms that are inherently robust is that of *non-coordinating* algorithms. Such algorithms act independently at each searcher, differing only in their probabilistic choices. We are interested in the price incurred by employing such algorithms when compared with the case of full coordination. We first show that there exists a non-coordinating algorithm that, knowing only the relative likelihood of boxes according to $p$, has expected running time of at most $10+4(1+\frac{1}{k})^2 T$, where $T$ is the expected running time of the best fully coordinated algorithm. This result is obtained by applying a refined version of the main algorithm suggested by Fraigniaud, Korman and Rodeh in STOC'16, which was designed for the context of linear parallel search. We then describe an optimal non-coordinating algorithm for the case where the distribution $p$ is known. The running time of this algorithm is difficult to analyse in general, but we calculate it for several examples. In the case where $p$ is uniform over a finite set of boxes, the algorithm simply checks boxes uniformly at random among all unchecked boxes, and is essentially $2$ times worse than the coordinating algorithm. We also show simple algorithms for Pareto distributions over $M$ boxes. That is, in the case where $p(x) \sim 1/x^b$ for $0< b < 1$, we suggest the following algorithm: at step $t$ choose uniformly from the boxes unchecked in $\{1, \ldots, \min(M, \lfloor t/\sigma\rfloor)\}$, where $\sigma = b/(b + k - 1)$. It turns out this algorithm is asymptotically optimal, and runs about $2+b$ times worse than the case of full coordination.
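The box-choice rule for the Pareto case can be sketched as follows. This is a minimal single-searcher simulation under our own encoding (the function name, the treat-empty-window-as-a-wait convention, and the termination behavior are ours, not from the paper):

```python
import math
import random

def pareto_search(treasure, M, b, k, rng):
    """Sketch of the abstract's Pareto-case rule for p(x) ~ 1/x^b over M boxes:
    at step t, check a uniformly random unchecked box among
    {1, ..., min(M, floor(t / sigma))}, where sigma = b / (b + k - 1).
    Simulates one searcher; returns the step at which it finds the treasure."""
    sigma = b / (b + k - 1)
    checked = set()
    t = 0
    while True:
        t += 1
        window = min(M, math.floor(t / sigma))
        candidates = [x for x in range(1, window + 1) if x not in checked]
        if not candidates:       # window exhausted this step; wait (our convention)
            continue
        box = rng.choice(candidates)
        checked.add(box)
        if box == treasure:
            return t
```

Since $\sigma \le 1$, the window grows at least as fast as $t$, so a lone simulated searcher checks one new box per step until all $M$ boxes are exhausted.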
• We introduce the dependent doors problem as an abstraction for situations in which one must perform a sequence of possibly dependent decisions, without receiving feedback information on the effectiveness of previously made actions. Informally, the problem considers a set of $d$ doors that are initially closed, and the aim is to open all of them as fast as possible. To open a door, the algorithm knocks on it, and it might open or not according to some probability distribution. This distribution may depend on which other doors are currently open, as well as on which other doors were open during each of the previous knocks on that door. The algorithm aims to minimize the expected time until all doors open. Crucially, it must act at any time without knowing whether or which other doors have already opened. In this work, we focus on scenarios where dependencies between doors are both positively correlated and acyclic. The fundamental distribution of a door describes the probability that it opens under the best of conditions (with respect to other doors being open or closed). We show that if, in two configurations of $d$ doors, corresponding doors share the same fundamental distribution, then these configurations have the same optimal running time up to a universal constant, no matter what the dependencies between doors and the distributions are. We also identify algorithms that are optimal up to a universal constant factor. For the case in which all doors share the same fundamental distribution we additionally provide a simpler algorithm, and a formula to calculate its running time. We furthermore analyse the price of lacking feedback for several configurations governed by standard fundamental distributions. In particular, we show that the price is logarithmic in $d$ for memoryless doors, but can potentially grow to be linear in $d$ for other distributions. We then turn our attention to investigating precise bounds.
Even for the case of two doors, identifying the optimal sequence is an intriguing combinatorial question. Here, we study the case of two cascading memoryless doors. That is, the first door opens on each knock independently with probability $p_1$. The second door can only open if the first door is open, in which case it will open on each knock independently with probability $p_2$. We solve this problem almost completely by identifying algorithms that are optimal up to an additive term of 1.
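The two-cascading-doors setting can be simulated directly. A minimal sketch under our own encoding (a knocking schedule given as a list of door indices; the function name is ours), which also makes the no-feedback constraint concrete since the schedule is fixed in advance:

```python
import random

def cascade_time(schedule, p1, p2, rng):
    """Simulate two cascading memoryless doors under a fixed knocking schedule.
    Door 1 opens on each knock independently with probability p1; door 2 can
    only open once door 1 is open, and then opens on each knock with
    probability p2. Returns the step at which both doors are open, or None
    if the schedule runs out first. No feedback: the schedule cannot adapt."""
    open1 = open2 = False
    for t, door in enumerate(schedule, start=1):
        if door == 1 and not open1:
            open1 = rng.random() < p1
        elif door == 2 and open1 and not open2:
            open2 = rng.random() < p2
        if open1 and open2:
            return t
    return None
```

Note that a knock on door 2 while door 1 is still closed is simply wasted, which is what makes the choice of schedule a nontrivial combinatorial question.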
• We consider a search problem on trees using unreliable guiding instructions. Specifically, an agent starts a search at the root of a tree aiming to find a treasure hidden at one of the nodes by an adversary. Each visited node holds information, called advice, regarding the most promising neighbor to continue the search. However, the memory holding this information may be unreliable. Modeling this scenario, we focus on a probabilistic setting. That is, the advice at a node is a pointer to one of its neighbors. With probability $q$ each node is faulty, independently of other nodes, in which case its advice points at an arbitrary neighbor, chosen uniformly at random. Otherwise, the node is sound and necessarily points at the correct neighbor. Crucially, the advice is permanent, in the sense that querying a node several times would yield the same answer. We evaluate the agent's efficiency by two measures: The move complexity denotes the expected number of edge traversals, and the query complexity denotes the expected number of queries. Let $\Delta$ denote the maximal degree. Roughly speaking, the main message of this paper is that in order to obtain efficient search, $1/\sqrt{\Delta}$ is a threshold for the noise parameter $q$. Essentially, we prove that above the threshold, every search algorithm has query complexity (and move complexity) which is both exponential in the depth $d$ of the treasure and polynomial in the number of nodes $n$. Conversely, below the threshold, there exists an algorithm with move complexity $O(d\sqrt{\Delta})$, and an algorithm with query complexity $O(\sqrt{\Delta}\log \Delta \log^2 n)$. Moreover, for the case of regular trees, we obtain an algorithm with query complexity $O(\sqrt{\Delta}\log n\log\log n)$. The move complexity bound is tight below the threshold and the query complexity bounds are not far from the lower bound of $\Omega(\sqrt{\Delta}\log_\Delta n)$.
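The noise model itself is easy to make concrete. A sketch under our own encoding (tree as an adjacency dict, names ours): advice is sampled once per node, matching the permanence requirement that repeated queries yield the same answer.

```python
from collections import deque
import random

def sample_advice(adj, treasure, q, rng):
    """Sample permanent advice for every node of a tree, per the noise model:
    each node is faulty independently with probability q and then points at a
    uniformly random neighbor; otherwise it is sound and points at its neighbor
    on the unique path toward the treasure. `adj` maps nodes to neighbor lists.
    (The advice stored at the treasure node itself is immaterial.)"""
    # BFS from the treasure: toward[v] is v's neighbor on the path to the treasure
    toward = {treasure: treasure}
    queue = deque([treasure])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in toward:
                toward[v] = u
                queue.append(v)
    return {v: rng.choice(adj[v]) if rng.random() < q else toward[v]
            for v in adj}
```

With $q = 0$ every pointer is correct and blindly following advice reaches the treasure in $d$ moves; the paper's question is how much of that efficiency survives for $q > 0$.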
• In STOC'16, Fraigniaud et al. consider the problem of finding a treasure hidden in one of many boxes that are ordered by importance. That is, if a treasure is in a more important box, then one would like to find it faster. Assuming there are many searchers, the authors suggest that using an algorithm that requires no coordination between searchers can be highly beneficial. Indeed, besides saving the need for a communication and coordination mechanism, such algorithms enjoy inherent robustness. The authors proceed to solve this linear search problem in the case of countably many boxes and an adversarially placed treasure, and prove that the best speed-up possible by $k$ non-coordinating searchers is precisely $\frac{k}{4}(1+1/k)^2$. In particular, this means that asymptotically, the speed-up is four times worse compared to the case of full coordination. We suggest an important variant of the problem, where the treasure is placed uniformly at random in one of a finite but large number of boxes. We devise non-coordinating algorithms that achieve a speed-up of $6/5$ for two searchers, a speed-up of $3/2$ for three searchers, and in general, a speed-up of $k(k+1)/(3k-1)$ for any $k \geq 1$ searchers. Thus, as $k$ grows to infinity, the speed-up approaches three times worse compared to the case of full coordination. Moreover, these bounds are tight in a strong sense as no non-coordinating search algorithm for $k$ searchers can achieve better speed-ups. We also devise non-coordinating algorithms that use only logarithmic memory in the size of the search domain, and yet, asymptotically, achieve the optimal speed-up. Finally, we note that all our algorithms are extremely simple and hence applicable.
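The stated speed-up values are consistent with the general formula; a quick exact-arithmetic check (function name ours):

```python
from fractions import Fraction

def random_placement_speedup(k):
    """Optimal non-coordinating speed-up for k searchers when the treasure
    is placed uniformly at random, per the formula k(k+1)/(3k-1)."""
    return Fraction(k * (k + 1), 3 * k - 1)

# Special cases from the abstract: 6/5 for k = 2, 3/2 for k = 3.
# Loss versus full coordination (speed-up k) is (3k-1)/(k+1), which tends to 3.
```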
• We analyze parallel algorithms in the context of exhaustive search over totally ordered sets. Imagine an infinite list of "boxes", with a "treasure" hidden in one of them, where the boxes' order reflects the importance of finding the treasure in a given box. At each time step, a search protocol executed by a searcher has the ability to peek into one box, and see whether the treasure is present or not. By equally dividing the workload between them, $k$ searchers can find the treasure $k$ times faster than one searcher. However, this straightforward strategy is very sensitive to failures (e.g., crashes of processors), and overcoming this issue seems to require a large amount of communication. We therefore address the question of designing parallel search algorithms maximizing their speed-up and maintaining high levels of robustness, while minimizing the amount of resources for coordination. Based on the observation that algorithms that avoid communication are inherently robust, we analyze the best running time performance of non-coordinating algorithms. Specifically, we devise non-coordinating algorithms that achieve a speed-up of $9/8$ for two searchers, a speed-up of $4/3$ for three searchers, and in general, a speed-up of $\frac{k}{4}(1+1/k)^2$ for any $k\geq 1$ searchers. Thus, asymptotically, the speed-up is only four times worse compared to the case of full coordination, and our algorithms are surprisingly simple and hence applicable. Moreover, these bounds are tight in a strong sense as no non-coordinating search algorithm can achieve better speed-ups. Overall, we highlight that, in faulty contexts in which coordination between the searchers is technically difficult to implement, intrusive with respect to privacy, and/or costly in terms of resources, it might well be worth giving up on coordination, and simply run our non-coordinating exhaustive search algorithms.
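As with the random-placement variant, the adversarial-placement speed-up values match the closed form; an exact check (function name ours):

```python
from fractions import Fraction

def adversarial_speedup(k):
    """Optimal non-coordinating speed-up for k searchers against an
    adversarially placed treasure: (k/4)(1 + 1/k)^2 = (k + 1)^2 / (4k)."""
    return Fraction((k + 1) ** 2, 4 * k)

# Special cases from the abstract: 9/8 for k = 2, 4/3 for k = 3.
# Loss versus full coordination is 4k^2/(k+1)^2, which tends to 4.
```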
• The difference between the speed of the actions of different processes is typically considered as an obstacle that makes the achievement of cooperative goals more difficult. In this work, we aim to highlight potential benefits of such asynchrony phenomena to tasks involving symmetry breaking. Specifically, in this paper, identical (except for their speeds) mobile agents are placed at arbitrary locations on a cycle of length $n$ and use their speed difference in order to rendezvous fast. We normalize the speed of the slower agent to be 1, and fix the speed of the faster agent to be some $c>1$. (An agent does not know whether it is the slower agent or the faster one.) The straightforward distributed-race (DR) algorithm is the one in which both agents simply start walking until rendezvous is achieved. It is easy to show that, in the worst case, the rendezvous time of DR is $n/(c-1)$. Note that in the interesting case, where $c$ is very close to 1, this bound becomes huge. Our first result is a lower bound showing that, up to a multiplicative factor of 2, this bound is unavoidable, even in a model that allows agents to leave arbitrary marks, even assuming sense of direction, and even assuming $n$ and $c$ are known to agents. That is, we show that under such assumptions, the rendezvous time of any algorithm is at least $\frac{n}{2(c-1)}$ if $c\leq 3$ and slightly larger if $c>3$. We then construct an algorithm that precisely matches the lower bound for the case $c\leq 2$, and almost matches it when $c>2$. Moreover, our algorithm performs under weaker assumptions than those stated above, as it does not assume sense of direction, and it allows agents to leave only a single mark (a pebble) and only at the place where they start the execution. Finally, we investigate the setting in which no marks can be used at all, and show tight bounds for $c\leq 2$, and almost tight bounds for $c>2$.
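The $n/(c-1)$ worst case of DR is easy to see numerically: when both agents happen to walk in the same direction, the gap between them closes at relative speed $c-1$. A small sketch under our own discretization (function name and `dt` parameter are ours):

```python
def dr_rendezvous_time(n, c, gap, dt=1e-3):
    """Numerically estimate the rendezvous time of the distributed-race (DR)
    algorithm in the bad case where both agents walk in the same direction,
    starting `gap` apart on a cycle of length n. The gap closes at relative
    speed c - 1, so the result is gap / (c - 1), at most n / (c - 1)."""
    d = gap % n
    t = 0.0
    while d > (c - 1) * dt:
        d -= (c - 1) * dt    # gap closes at relative speed c - 1
        t += dt
    return t + d / (c - 1)   # final fractional step
```

With the gap close to $n$ this approaches the $n/(c-1)$ bound, and it blows up as $c \to 1$, which is exactly the regime the lower bound addresses.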