results for au:Kuhn_F in:cs

- We present a randomized distributed algorithm that computes a $\Delta$-coloring in any non-complete graph with maximum degree $\Delta \geq 4$ in $O(\log \Delta) + 2^{O(\sqrt{\log\log n})}$ rounds, as well as a randomized algorithm that computes a $\Delta$-coloring in $O((\log \log n)^2)$ rounds when $\Delta \in [3, O(1)]$. Both algorithms improve on an $O(\log^3 n/\log \Delta)$-round algorithm of Panconesi and Srinivasan [STOC'93], which has remained the state of the art for the past 25 years. Moreover, the latter algorithm gets (exponentially) closer to the $\Omega(\log\log n)$-round lower bound of Brandt et al. [STOC'16].
- The present paper studies local distributed graph problems in highly dynamic networks. We define what is, in our view, a natural generalization of static graph problems to the dynamic graph setting. For some parameter $T>0$, the set of admissible outputs of nodes in a $T$-dynamic solution for a given graph problem at some time $t$ is defined by the dynamic graph topology in the time interval $[t-T,t]$. The guarantees of a $T$-dynamic solution become stronger the more stable the graph remains during the interval $[t-T,t]$, and they coincide with the definition of the static graph problem if the graph is static throughout the interval. We further present an abstract framework for developing distributed algorithms for a given dynamic graph problem. For some $T>0$, the algorithms always output a valid $T$-dynamic solution of the given graph problem. Further, if a constant neighborhood around some part of the graph is stable during an interval $[t_1,t_2]$, the algorithms compute a static solution for this part of the graph throughout the interval $[t_1+T',t_2]$ for some $T'>0$. Ideally, $T$ and $T'$ are of the same asymptotic order as the time complexity of solving the given graph problem in static networks. We apply our generic framework to two classic distributed symmetry-breaking problems: computing a (degree+1)-vertex coloring and computing a maximal independent set (MIS) of the network graph. For both problems, we obtain distributed algorithms that always output a valid $O(\log n)$-dynamic solution. Further, if some part of the graph and its $O(1)$-neighborhood remain stable for some interval $[t_1,t_2]$, the algorithms compute, for that part of the graph, a valid static solution for the two problems that remains stable throughout the interval $[t_1+O(\log n),t_2]$.
- We present a deterministic distributed algorithm, in the LOCAL model, that computes a $(1+o(1))\Delta$-edge-coloring in polylogarithmic time, so long as the maximum degree satisfies $\Delta=\tilde{\Omega}(\log n)$. For smaller $\Delta$, we give a polylogarithmic-time $3\Delta/2$-edge-coloring. These are the first deterministic algorithms to go below the natural barrier of $2\Delta-1$ colors, and they improve significantly on the recent polylogarithmic-time $(2\Delta-1)(1+o(1))$-edge-coloring of Ghaffari and Su [SODA'17] and the $(2\Delta-1)$-edge-coloring of Fischer, Ghaffari, and Kuhn [FOCS'17], positively answering the main open question of the latter. The key technical ingredient of our algorithm is a simple and novel gradual packing of judiciously chosen near-maximum matchings, each of which becomes one of the color classes.
- The gap between the known randomized and deterministic local distributed algorithms underlies arguably the most fundamental and central open question in distributed graph algorithms. In this paper, we develop a generic and clean recipe for derandomizing randomized LOCAL algorithms and transforming them into efficient deterministic LOCAL algorithms. We also exhibit how this simple recipe leads to significant improvements on a number of problems, in some cases resolving known open problems. Two sample end-results are as follows: - An improved distributed hypergraph maximal matching algorithm, which improves on that of Fischer, Ghaffari, and Kuhn [FOCS'17], and leads to improved algorithms for edge-coloring, maximum matching approximation, and low out-degree edge orientation. The first gives an improved algorithm for Open Problem 11.4 of the book of Barenboim and Elkin, and the last gives the first positive resolution of their Open Problem 11.10. - An improved distributed algorithm for the Lovász Local Lemma, which gets closer to a conjecture of Chang and Pettie [FOCS'17], and moreover leads to improved distributed algorithms for problems such as defective coloring and $k$-SAT.
- Jul 28 2017 cs.DS arXiv:1707.08807v1. We investigate the nearest common ancestor (NCA) function in rooted trees. As the main conceptual contribution, the paper introduces universal trees for the NCA function: For a given family of rooted trees, an NCA-universal tree $S$ is a rooted tree such that any tree $T$ of the family can be embedded into $S$ such that the embedding of the NCA in $T$ of two nodes of $T$ is equal to the NCA in $S$ of the embeddings of the two nodes. As the main technical result we give explicit constructions of NCA-universal trees of size $n^{2.318}$ for the family of rooted $n$-vertex trees and of size $n^{1.894}$ for the family of rooted binary $n$-vertex trees. A direct consequence is the explicit construction of NCA-labeling schemes with labels of size $2.318\log_2 n$ and $1.894\log_2 n$ for the two families of rooted trees. This improves on the best known such labeling schemes established by Alstrup, Halvorsen and Larsen [SODA 2014].
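As a minimal illustration of the NCA function itself (not of the universal-tree construction studied in the paper), the following sketch computes nearest common ancestors in a rooted tree given by parent pointers; the tree and node names are hypothetical.

```python
def nca(parent, u, v):
    """Nearest common ancestor of u and v in a rooted tree.

    parent maps each node to its parent; the root maps to None.
    """
    def depth(x):
        d = 0
        while parent[x] is not None:
            x = parent[x]
            d += 1
        return d

    du, dv = depth(u), depth(v)
    # Lift the deeper node until both are at the same depth.
    while du > dv:
        u, du = parent[u], du - 1
    while dv > du:
        v, dv = parent[v], dv - 1
    # Walk up in lockstep until the two paths meet.
    while u != v:
        u, v = parent[u], parent[v]
    return u

# Example rooted tree:  a -> {b, c},  b -> {d, e}
tree = {"a": None, "b": "a", "c": "a", "d": "b", "e": "b"}
print(nca(tree, "d", "e"))  # b
print(nca(tree, "d", "c"))  # a
```

An NCA-labeling scheme, as in the abstract, would encode enough of this parent structure into short per-node labels so that `nca` can be answered from two labels alone.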
- The degree splitting problem requires coloring the edges of a graph red or blue such that each node has almost the same number of edges in each color, up to a small additive discrepancy. The directed variant of the problem requires orienting the edges such that each node has almost the same number of incoming and outgoing edges, again up to a small additive discrepancy. We present deterministic distributed algorithms for both variants, which improve on their counterparts presented by Ghaffari and Su [SODA'17]: our algorithms are significantly simpler and faster, and have a much smaller discrepancy. This also leads to a faster and simpler deterministic algorithm for $(2+o(1))\Delta$-edge-coloring, improving on that of Ghaffari and Su.
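A classic centralized counterpart of directed degree splitting orients the edges along an Euler circuit, which balances in- and out-degrees exactly at every node when all degrees are even. The sketch below is a sequential illustration on a hypothetical input, not the distributed algorithm of the paper; it assumes a connected graph with all even degrees and uses Hierholzer's algorithm.

```python
from collections import defaultdict

def euler_orientation(edges):
    """Orient the edges of a connected graph whose degrees are all even
    along an Euler circuit; every node then has in-degree == out-degree."""
    adj = defaultdict(list)          # node -> list of (neighbor, edge_id)
    used = [False] * len(edges)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    # Hierholzer's algorithm (iterative).
    start = edges[0][0]
    stack, circuit = [start], []
    while stack:
        u = stack[-1]
        while adj[u] and used[adj[u][-1][1]]:
            adj[u].pop()             # discard already-traversed incident edges
        if adj[u]:
            v, i = adj[u].pop()
            used[i] = True
            stack.append(v)
        else:
            circuit.append(stack.pop())
    # Consecutive circuit nodes give the orientation of each traversed edge.
    return list(zip(circuit, circuit[1:]))

# A 4-cycle and a triangle sharing node 0: every degree is even.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (4, 5), (5, 0)]
oriented = euler_orientation(edges)
outdeg, indeg = defaultdict(int), defaultdict(int)
for u, v in oriented:
    outdeg[u] += 1
    indeg[v] += 1
print(all(outdeg[x] == indeg[x] for x in outdeg))  # True
```

Odd-degree nodes can be handled by pairing them through dummy edges first, at the cost of an additive discrepancy; making such a splitting fast in the distributed setting is exactly the paper's contribution.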
- The Arrow protocol is a simple and elegant protocol to coordinate exclusive access to a shared object in a network. The protocol solves the underlying distributed queueing problem by using path reversal on a pre-computed spanning tree (or any other tree topology simulated on top of the given network). It is known that the Arrow protocol solves the problem with a competitive ratio of $O(\log D)$ on trees of diameter $D$. This implies a distributed queueing algorithm with competitive ratio $O(s\cdot\log D)$ for general networks with a spanning tree of diameter $D$ and stretch $s$. In this work we show that when running the Arrow protocol on top of the well-known probabilistic tree embedding of Fakcharoenphol, Rao, and Talwar [STOC '03], we obtain a randomized distributed queueing algorithm with a competitive ratio of $O(\log n)$ even on general network topologies. The result holds even if the queueing requests occur in an arbitrarily dynamic and concurrent fashion and even if communication is asynchronous. From a technical point of view, the main result of the paper shows that the competitive ratio of the Arrow protocol is constant on a special family of tree topologies, known as hierarchically well-separated trees.
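The path-reversal idea behind the Arrow protocol can be sketched in a few lines: each node keeps an arrow pointing along a tree edge toward the current end of the queue, and a request flips every arrow it traverses back toward the requester. The topology and request below are hypothetical, and the sketch is sequential rather than concurrent.

```python
def request(arrow, u):
    """Issue a queueing request from node u: follow arrows to the current
    queue tail, reversing each traversed arrow to point back toward u.
    Returns the node at which the request is enqueued."""
    path = [u]
    while arrow[path[-1]] is not None:
        path.append(arrow[path[-1]])
    # Reverse the arrows along the traversed path.
    for prev, nxt in zip(path, path[1:]):
        arrow[nxt] = prev
    arrow[u] = None          # u is the new queue tail
    return path[-1]

# Path topology a - b - c; the object initially sits at a,
# so all arrows point toward a.
arrow = {"a": None, "b": "a", "c": "b"}
print(request(arrow, "c"))  # a  (the request from c reaches the holder a)
print(arrow)                # arrows now point toward the new tail c
```

The competitive-ratio question studied in the paper is how much longer these reversal paths are than an optimal offline ordering of concurrent requests.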
- Apr 11 2017 cs.DS arXiv:1704.02767v1. We present a deterministic distributed algorithm that computes a $(2\Delta-1)$-edge-coloring, or even list-edge-coloring, in any $n$-node graph with maximum degree $\Delta$, in $O(\log^7 \Delta \log n)$ rounds. This answers one of the long-standing open questions of distributed graph algorithms from the late 1980s, which asked for a polylogarithmic-time algorithm. See, e.g., Open Problem 4 in the Distributed Graph Coloring book of Barenboim and Elkin. The previous best round complexities were $2^{O(\sqrt{\log n})}$ by Panconesi and Srinivasan [STOC'92] and $\tilde{O}(\sqrt{\Delta}) + O(\log^* n)$ by Fraigniaud, Heinrich, and Kosowski [FOCS'16]. A corollary of our deterministic list-edge-coloring also improves the randomized complexity of $(2\Delta-1)$-edge-coloring to poly$(\log\log n)$ rounds. The key technical ingredient is a deterministic distributed algorithm for hypergraph maximal matching, which we believe will be of interest beyond this result. In any hypergraph of rank $r$ (where each hyperedge has at most $r$ vertices) with $n$ nodes and maximum degree $\Delta$, this algorithm computes a maximal matching in $O(r^5 \log^{6+\log r} \Delta \log n)$ rounds. This hypergraph matching algorithm and its extensions lead to a number of other results: a polylogarithmic-time deterministic distributed maximal independent set algorithm for graphs with bounded neighborhood independence, answering Open Problem 5 of Barenboim and Elkin's book; a $((\log \Delta/\varepsilon)^{O(\log (1/\varepsilon))})$-round deterministic algorithm for $(1+\varepsilon)$-approximation of maximum matching; and a quasi-polylogarithmic-time deterministic distributed algorithm for orienting $\lambda$-arboricity graphs with out-degree at most $(1+\varepsilon)\lambda$, for any constant $\varepsilon>0$, partially answering Open Problem 10 of Barenboim and Elkin's book.
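Sequentially, a maximal matching in a hypergraph is trivial to compute greedily; the difficulty addressed by the paper is doing this deterministically in the distributed setting. A sketch of the sequential baseline on a hypothetical rank-3 hypergraph:

```python
def greedy_maximal_matching(hyperedges):
    """Greedily pick hyperedges that share no vertex with earlier picks.
    The result is a maximal matching: no remaining hyperedge can be added."""
    matched_vertices = set()
    matching = []
    for e in hyperedges:
        if not matched_vertices & set(e):
            matching.append(e)
            matched_vertices.update(e)
    return matching

# Rank-3 hypergraph on vertices 1..7 (hypothetical input).
H = [(1, 2, 3), (3, 4), (4, 5, 6), (6, 7), (2, 5)]
print(greedy_maximal_matching(H))  # [(1, 2, 3), (4, 5, 6)]
```

Note that $(2\Delta-1)$-edge-coloring reduces to this: treat each edge of the original graph as a vertex of a conflict hypergraph and repeatedly match color classes, which is roughly why a fast hypergraph maximal matching drives the edge-coloring result.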
- Mar 20 2017 cs.DC arXiv:1703.06130v1. Cognitive radio networks are a new type of multi-channel wireless network in which different nodes can have access to different sets of channels. By providing multiple channels, they improve the efficiency and reliability of wireless communication. However, the heterogeneous nature of cognitive radio networks also brings new challenges to the design and analysis of distributed algorithms. In this paper, we focus on two fundamental problems in cognitive radio networks: neighbor discovery, and global broadcast. We consider a network containing $n$ nodes, each of which has access to $c$ channels. We assume the network has diameter $D$, and each pair of neighbors has at least $k\geq 1$ and at most $k_{max}\leq c$ shared channels. We also assume each node has at most $\Delta$ neighbors. For the neighbor discovery problem, we design a randomized algorithm CSeek which has time complexity $\tilde{O}((c^2/k)+(k_{max}/k)\cdot\Delta)$. CSeek is flexible and robust, which allows us to use it as a generic "filter" to find "well-connected" neighbors with an even shorter running time. We then move on to the global broadcast problem, and propose CGCast, a randomized algorithm which takes $\tilde{O}((c^2/k)+(k_{max}/k)\cdot\Delta+D\cdot\Delta)$ time. CGCast uses CSeek to achieve communication among neighbors, and uses edge coloring to establish an efficient schedule for fast message dissemination. Towards the end of the paper, we give lower bounds for solving the two problems. These lower bounds demonstrate that in many situations, CSeek and CGCast are near optimal.
- Nov 09 2016 cs.DS arXiv:1611.02663v2. This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS '87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and $(\Delta+1)$-vertex coloring), the randomized complexity is at most polylogarithmic in the size $n$ of the network, while the best deterministic complexity is typically $2^{O(\sqrt{\log n})}$. Understanding and narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in $\log^{O(1)} n$ rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and $(\Delta+1)$-coloring) in $\log^{O(1)} n$ rounds in the LOCAL model. We show that a rather rudimentary-looking graph coloring problem is complete in the above sense: color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to obtaining efficient deterministic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values.
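The SLOCAL model processes nodes one by one, letting each node inspect a small neighborhood before committing to an output. The canonical example is greedy $(\Delta+1)$-coloring, sketched below on a hypothetical graph: each node, when processed, takes the smallest color not used by its already-colored neighbors.

```python
def slocal_greedy_coloring(adj, order):
    """Greedy (Delta+1)-coloring in the spirit of SLOCAL: nodes are
    processed sequentially and each reads only its immediate neighborhood."""
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A 5-cycle (maximum degree 2, so at most 3 colors are ever used).
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
color = slocal_greedy_coloring(adj, order=[0, 1, 2, 3, 4])
print(all(color[u] != color[v] for u in adj for v in adj[u]))  # True
print(max(color.values()) <= 2)                                # True
```

The completeness results in the abstract concern turning exactly this kind of sequential, locally-informed greedy process into a genuinely parallel deterministic LOCAL algorithm.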
- Oct 11 2016 cs.DC arXiv:1610.02931v1. We continue the recent line of research studying information dissemination problems in adversarial dynamic radio networks. We give two generic algorithms that allow one to transform generalized versions of single-message broadcast algorithms into multi-message broadcast algorithms. Based on these generic algorithms, we obtain multi-message broadcast algorithms for dynamic radio networks for a number of different dynamic network settings. For one of the modeling assumptions, our algorithms are complemented by a lower bound which shows that the upper bound is close to optimal.
- Jul 19 2016 cs.DC arXiv:1607.05212v2. We show an $\Omega\big(\Delta^{\frac{1}{3}-\frac{\eta}{3}}\big)$ lower bound on the runtime of any deterministic distributed $\mathcal{O}\big(\Delta^{1+\eta}\big)$-graph coloring algorithm in a weak variant of the LOCAL model. In particular, given a network graph $G=(V,E)$, in the weak LOCAL model nodes communicate in synchronous rounds and they can use unbounded local computation. We assume that the nodes have no identifiers, but that instead, the computation starts with an initial valid vertex coloring. A node can broadcast a single message of unbounded size to its neighbors and receives the set of messages sent to it by its neighbors. That is, if two neighbors of a node $v\in V$ send the same message to $v$, then $v$ will receive this message only a single time; without any further knowledge, $v$ cannot know whether a received message was sent by only one or by more than one neighbor. Neighborhood graphs have been essential in the proofs of lower bounds for distributed coloring algorithms, e.g., [Linial '92; Kuhn and Wattenhofer '06]. Our proof analyzes the recursive structure of the neighborhood graph of the respective model to devise an $\Omega\big(\Delta^{\frac{1}{3}-\frac{\eta}{3}}\big)$ lower bound on the runtime of any deterministic distributed $\mathcal{O}\big(\Delta^{1+\eta}\big)$-graph coloring algorithm. Furthermore, we hope that the proof technique improves the understanding of neighborhood graphs in general and that it will help towards finding a lower (runtime) bound for distributed graph coloring in the standard LOCAL model. Our proof technique works for one-round algorithms in the standard LOCAL model and provides a simpler and more intuitive proof for an existing $\Omega(\Delta^2)$ lower bound.
- Jan 11 2016 cs.DC arXiv:1601.01912v1. We study the single-message broadcast problem in dynamic radio networks. We show that the time complexity of the problem depends on the amount of stability and connectivity of the dynamic network topology and on the adaptiveness of the adversary providing the dynamic topology. More formally, we model communication using the standard graph-based radio network model. To model the dynamic network, we use a generalization of the synchronous dynamic graph model introduced in [Kuhn et al., STOC 2010]. For integer parameters $T\geq 1$ and $k\geq 1$, we call a dynamic graph $T$-interval $k$-connected if for every interval of $T$ consecutive rounds, there exists a $k$-vertex-connected stable subgraph. Further, for an integer parameter $\tau\geq 0$, we say that the adversary providing the dynamic network is $\tau$-oblivious if for constructing the graph of some round $t$, the adversary has access to all the randomness (and states) of the algorithm up to round $t-\tau$. As our main result, we show that for any $T\geq 1$, any $k\geq 1$, and any $\tau\geq 1$, for a $\tau$-oblivious adversary, there is a distributed algorithm to broadcast a single message in time $O\big(\big(1+\frac{n}{k\cdot\min\left\{\tau,T\right\}}\big)\cdot n\log^3 n\big)$. We further show that even for large interval $k$-connectivity, efficient broadcast is not possible for the usual adaptive adversaries. For a $1$-oblivious adversary, we show that even for any $T\leq (n/k)^{1-\varepsilon}$ (for any constant $\varepsilon>0$) and for any $k\geq 1$, global broadcast in $T$-interval $k$-connected networks requires at least $\Omega(n^2/(k^2\log n))$ time. Further, for a $0$-oblivious adversary, broadcast cannot be solved in $T$-interval $k$-connected networks as long as $T<n-k$.
- We present a near-optimal distributed algorithm for $(1+o(1))$-approximation of single-commodity maximum flow in undirected weighted networks that runs in $(D+ \sqrt{n})\cdot n^{o(1)}$ communication rounds in the CONGEST model. Here, $n$ and $D$ denote the number of nodes and the network diameter, respectively. This is the first improvement over the trivial bound of $O(n^2)$, and it nearly matches the $\tilde{\Omega}(D+ \sqrt{n})$ round complexity lower bound. The development of the algorithm contains two results of independent interest: (i) A $(D+\sqrt{n})\cdot n^{o(1)}$-round distributed construction of a spanning tree of average stretch $n^{o(1)}$. (ii) A $(D+\sqrt{n})\cdot n^{o(1)}$-round distributed construction of an $n^{o(1)}$-congestion approximator consisting of the cuts induced by $O(\log n)$ virtual trees. The distributed representation of the cut approximator allows for evaluation in $(D+\sqrt{n})\cdot n^{o(1)}$ rounds. All our algorithms make use of randomization and succeed with high probability.
- Aug 19 2015 cs.DC arXiv:1508.04390v1. Daum et al. [PODC'13] presented an algorithm that computes a maximal independent set (MIS) within $O(\log^2 n/F+\log n \cdot \mathrm{polyloglog}\, n)$ rounds in an $n$-node multichannel radio network with $F$ communication channels. The paper uses a multichannel variant of the standard graph-based radio network model without collision detection and it assumes that the network graph is a polynomially bounded independence graph (BIG), a natural combinatorial generalization of well-known geographic families. The upper bound of that paper is known to be optimal up to a polyloglog factor. In this paper, we adapt the algorithm and its analysis to improve the result in two ways. Mainly, we get rid of the polyloglog factor in the runtime, and we thus obtain an asymptotically optimal multichannel radio network MIS algorithm. In addition, our new analysis allows us to generalize the class of graphs from those with polynomially bounded local independence to graphs where the local independence is bounded by an arbitrary function of the neighborhood radius.
- Jun 03 2015 cs.DC arXiv:1506.00828v2. In the classic gossip-based model of communication for disseminating information in a network, in each time unit, every node $u$ is allowed to contact a single random neighbor $v$. If $u$ knows the data (rumor) to be disseminated, it disperses it to $v$ (known as PUSH) and if it does not, it requests it from $v$ (known as PULL). While in the classic gossip model, each node is only allowed to contact a single neighbor in each time unit, each node can possibly be contacted by many neighboring nodes. In the present paper, we consider a restricted model where at each node only one incoming request can be served. As long as only a single piece of information needs to be disseminated, this does not make a difference for push requests. It however has a significant effect on pull requests. In the paper, we therefore concentrate on this weaker pull version, which we call 'restricted pull'. We distinguish two versions of the restricted pull protocol depending on whether the request to be served among a set of pull requests at a given node is chosen adversarially or uniformly at random. As a first result, we prove an exponential separation between the two variants. We show that there are instances where if an adversary picks the request to be served, the restricted pull protocol requires a polynomial number of rounds whereas if the winning request is chosen uniformly at random, the restricted pull protocol only requires a polylogarithmic number of rounds to inform the whole network. Further, as the main technical contribution, we show that if the request to be served is chosen randomly, the slowdown of using restricted pull versus using the classic pull protocol can w.h.p. be upper bounded by $O(\Delta / \delta \log n)$, where $\Delta$ and $\delta$ are the largest and smallest degree of the network.
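The random-winner variant of restricted pull can be simulated directly (the graph and parameters below are hypothetical): every uninformed node queries a uniformly random neighbor, and each informed node answers one uniformly chosen incoming request per round.

```python
import random

def restricted_pull_rounds(adj, source, rng, max_rounds=10_000):
    """Rounds until every node is informed under restricted pull:
    each uninformed node pulls from one random neighbor, and each node
    serves only one (randomly chosen) incoming request per round."""
    informed = {source}
    for t in range(1, max_rounds + 1):
        requests = {}                    # target -> list of requesters
        for v in adj:
            if v not in informed:
                requests.setdefault(rng.choice(adj[v]), []).append(v)
        snapshot = set(informed)         # serve based on start-of-round state
        for target, askers in requests.items():
            if target in snapshot:
                informed.add(rng.choice(askers))
        if len(informed) == len(adj):
            return t
    return None

# Star graph: the center holds the rumor and can serve only one of the
# n-1 leaf requests per round, so exactly n-1 rounds are needed.
n = 6
adj = {0: list(range(1, n))}
adj.update({i: [0] for i in range(1, n)})
print(restricted_pull_rounds(adj, source=0, rng=random.Random(0)))  # 5
```

The star example makes the serve-one-request bottleneck visible; the abstract's $O(\Delta / \delta \log n)$ slowdown bound quantifies this effect on general graphs.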
- Apr 23 2014 cs.DS arXiv:1404.5510v2. We study an online problem in which a set of mobile servers have to be moved in order to efficiently serve a set of requests that arrive in an online fashion. More formally, there is a set of $n$ nodes and a set of $k$ mobile servers that are placed at some of the nodes. Each node can potentially host several servers and the servers can be moved between the nodes. There are requests $1,2,\ldots$ that are adversarially issued at nodes one at a time. An issued request at time $t$ needs to be served at all times $t' \geq t$. The cost for serving the requests is a function of the number of servers and requests at the different nodes. The requirements on how to serve the requests are governed by two parameters $\alpha\geq 1$ and $\beta\geq 0$. An algorithm needs to guarantee at all times that the total service cost remains within a multiplicative factor of $\alpha$ and an additive term $\beta$ of the current optimal service cost. We consider online algorithms for two different minimization objectives. We first consider the natural problem of minimizing the total number of server movements. We show that in this case for every $k$, the competitive ratio of every deterministic online algorithm needs to be at least $\Omega(n)$. Given this negative result, we then extend the minimization objective to also include the current service cost. We give almost tight bounds on the competitive ratio of the online problem where one needs to minimize the sum of the total number of movements and the current service cost. In particular, we show that at the cost of an additional additive term which is roughly linear in $k$, it is possible to achieve a multiplicative competitive ratio of $1+\varepsilon$ for every constant $\varepsilon>0$.
- We present time-efficient distributed algorithms for decomposing graphs with large edge or vertex connectivity into multiple spanning or dominating trees, respectively. As their primary applications, these decompositions allow us to achieve information flow with size close to the connectivity by parallelizing it along the trees. More specifically, our distributed decomposition algorithms are as follows: (I) A decomposition of each undirected graph with vertex-connectivity $k$ into (fractionally) vertex-disjoint weighted dominating trees with total weight $\Omega(\frac{k}{\log n})$, in $\widetilde{O}(D+\sqrt{n})$ rounds. (II) A decomposition of each undirected graph with edge-connectivity $\lambda$ into (fractionally) edge-disjoint weighted spanning trees with total weight $\lceil\frac{\lambda-1}{2}\rceil(1-\varepsilon)$, in $\widetilde{O}(D+\sqrt{n\lambda})$ rounds. We also show round complexity lower bounds of $\tilde{\Omega}(D+\sqrt{\frac{n}{k}})$ and $\tilde{\Omega}(D+\sqrt{\frac{n}{\lambda}})$ for the above two decompositions, using techniques of [Das Sarma et al., STOC'11]. Moreover, our vertex-connectivity decomposition extends to centralized algorithms and improves the time complexity of [Censor-Hillel et al., SODA'14] from $O(n^3)$ to near-optimal $\tilde{O}(m)$. As corollaries, we also get distributed oblivious routing broadcast with $O(1)$-competitive edge-congestion and $O(\log n)$-competitive vertex-congestion. Furthermore, the vertex connectivity decomposition leads to near-time-optimal $O(\log n)$-approximation of vertex connectivity: centralized $\widetilde{O}(m)$ and distributed $\tilde{O}(D+\sqrt{n})$. The former moves toward the 1974 conjecture of Aho, Hopcroft, and Ullman postulating an $O(m)$ centralized exact algorithm while the latter is the first distributed vertex connectivity approximation.
- We study the problem of computing approximate minimum edge cuts by distributed algorithms. We use a standard synchronous message passing model where in each round, $O(\log n)$ bits can be transmitted over each edge (a.k.a. the CONGEST model). We present a distributed algorithm that, for any weighted graph and any $\epsilon \in (0, 1)$, with high probability finds a cut of size at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2 + \epsilon})$ rounds, where $\lambda$ is the size of the minimum cut. This algorithm is based on a simple approach for analyzing random edge sampling, which we call the random layering technique. In addition, we also present another distributed algorithm, which is based on a centralized algorithm due to Matula [SODA '93], that with high probability computes a cut of size at most $(2+\epsilon)\lambda$ in $\tilde{O}((D+\sqrt{n})/\epsilon^5)$ rounds for any $\epsilon>0$. The time complexities of both of these algorithms almost match the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus leading to an answer to an open question raised by Elkin [SIGACT-News '04] and Das Sarma et al. [STOC '11]. Furthermore, we also strengthen the lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or equivalently for weighted graphs in which $O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$), even if the diameter is $D=O(\log n)$. For unweighted simple graphs, we show that even for networks of diameter $\tilde{O}(\frac{1}{\lambda}\cdot \sqrt{\frac{n}{\alpha\lambda}})$, finding an $\alpha$-approximate minimum cut in networks of edge connectivity $\lambda$ or computing an $\alpha$-approximation of the edge connectivity requires $\tilde{\Omega}(D + \sqrt{\frac{n}{\alpha\lambda}})$ rounds.
- Edge connectivity and vertex connectivity are two fundamental concepts in graph theory. Although by now there is a good understanding of the structure of graphs based on their edge connectivity, our knowledge in the case of vertex connectivity is much more limited. An essential tool in capturing edge connectivity are edge-disjoint spanning trees. The famous results of Tutte and Nash-Williams show that a graph with edge connectivity $\lambda$ contains $\lfloor\lambda/2\rfloor$ edge-disjoint spanning trees. We present connected dominating set (CDS) partition and packing as tools that are analogous to edge-disjoint spanning trees and that help us to better grasp the structure of graphs based on their vertex connectivity. The objective of the CDS partition problem is to partition the nodes of a graph into as many connected dominating sets as possible. The CDS packing problem is the corresponding fractional relaxation, where CDSs are allowed to overlap as long as this is compensated by assigning appropriate weights. CDS partition and CDS packing can be viewed as the counterparts of the well-studied edge-disjoint spanning trees, focusing on vertex disjointedness rather than edge disjointness. We constructively show that every $k$-vertex-connected graph with $n$ nodes has a CDS packing of size $\Omega(k/\log n)$ and a CDS partition of size $\Omega(k/\log^5 n)$. We prove that the $\Omega(k/\log n)$ CDS packing bound is existentially optimal. Using CDS packing, we show that if vertices of a $k$-vertex-connected graph are independently sampled with probability $p$, then the graph induced by the sampled vertices has vertex connectivity $\tilde{\Omega}(kp^2)$. Moreover, using our $\Omega(k/\log n)$ CDS packing, we get a store-and-forward broadcast algorithm with optimal throughput in the networking model where in each round, each node can send one bounded-size message to all its neighbors.
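A connected dominating set is easy to verify even though packing many of them is the hard part; the checker below (with a hypothetical example graph) confirms that a vertex set is connected and dominates every node, and that $K_4$ admits a CDS partition of size 2.

```python
from collections import deque

def is_cds(adj, S):
    """Check that S is a connected dominating set of the graph adj."""
    S = set(S)
    if not S:
        return False
    # Domination: every node is in S or adjacent to a node of S.
    if any(v not in S and not S & set(adj[v]) for v in adj):
        return False
    # Connectivity of the subgraph induced by S (BFS restricted to S).
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in S and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == S

# K4 is 3-vertex-connected, and its nodes split into two CDSs.
adj = {i: [j for j in range(4) if j != i] for i in range(4)}
print(is_cds(adj, {0, 1}) and is_cds(adj, {2, 3}))  # True
```

Each CDS can carry one "copy" of a broadcast independently of the others, which is the intuition behind the throughput application mentioned at the end of the abstract.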
- We study lower bounds on information dissemination in adversarial dynamic networks. Initially, $k$ pieces of information (henceforth called tokens) are distributed among $n$ nodes. The tokens need to be broadcast to all nodes through a synchronous network in which the topology can change arbitrarily from round to round, provided that some connectivity requirements are satisfied. If the network is guaranteed to be connected in every round and each node can broadcast a single token per round to its neighbors, there is a simple token dissemination algorithm that manages to deliver all $k$ tokens to all the nodes in $O(nk)$ rounds. Interestingly, in a recent paper, Dutta et al. proved an almost matching $\Omega(n + nk/\log n)$ lower bound for deterministic token-forwarding algorithms that are not allowed to combine, split, or change tokens in any way. In the present paper, we extend this bound in different ways. If nodes are allowed to forward $b < k$ tokens instead of only one token in every round, a straightforward extension of the $O(nk)$ algorithm disseminates all $k$ tokens in time $O(nk/b)$. We show that for any randomized token-forwarding algorithm, $\Omega(n + nk/(b^2 \log n \log\log n))$ rounds are necessary. If nodes can only send a single token per round, but we are guaranteed that the network graph is $c$-vertex-connected in every round, we show a lower bound of $\Omega(nk/(c \log^{3/2} n))$, which almost matches the currently best $O(nk/c)$ upper bound. Further, if the network is $T$-interval connected, a notion that captures connection stability over time, we prove that $\Omega(n + nk/(T^2 \log n))$ rounds are needed. The best known upper bound in this case manages to solve the problem in $O(n + nk/T)$ rounds. Finally, we show that even if each node only needs to obtain a $\delta$-fraction of all the tokens for some $\delta \in [0,1]$, $\Omega(nk \delta^3 \log n)$ rounds are still required.
- We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while a silent node can only differentiate between none of its neighbors beeping and at least one of its neighbors beeping. We start by proving a lower bound that shows that in this model, it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possible to find an MIS in $O(\log^3 n)$ time. Second, if we assume sleeping nodes are awoken by neighboring beeps, then we can also find an MIS in $O(\log^3 n)$ time. Third, if in addition to this wakeup assumption we allow sender-side collision detection, that is, beeping nodes can distinguish whether at least one neighboring node is beeping concurrently or not, we can find an MIS in $O(\log^2 n)$ time. Finally, if instead we endow nodes with synchronous clocks, it is also possible to find an MIS in $O(\log^2 n)$ time.
- We consider the problem of finding a maximal independent set (MIS) in the discrete beeping model. At each time, a node in the network can either beep (i.e., emit a signal) or be silent. Silent nodes can only differentiate between no neighbor beeping and at least one neighbor beeping. This basic communication model relies only on carrier-sensing. Furthermore, we assume nothing about the underlying communication graph and allow nodes to wake up (and crash) arbitrarily. We show that if a polynomial upper bound on the network size $n$ is known, then with high probability every node becomes stable in $O(\log^3 n)$ time after it is woken up. To contrast this, we establish a polynomial lower bound when no a priori upper bound on the network size is known. This holds even in the much stronger model of local message broadcast with collision detection. Finally, if we assume nodes have access to synchronized clocks or we consider a somewhat restricted wake-up, we can solve the MIS problem in $O(\log^2 n)$ time without requiring an upper bound on the size of the network, thereby achieving the same bit complexity as Luby's MIS algorithm.
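Luby's algorithm, mentioned above as the bit-complexity benchmark, can be simulated round by round: every active node draws a random value, local minima join the MIS, and they and their neighbors drop out. A self-contained sketch on a hypothetical graph:

```python
import random

def luby_mis(adj, rng):
    """Simulate Luby's randomized MIS algorithm synchronously."""
    active = set(adj)
    mis = set()
    while active:
        r = {v: rng.random() for v in active}
        # A node joins the MIS if it beats all of its active neighbors.
        joiners = {v for v in active
                   if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= joiners
        removed = set(joiners)
        for v in joiners:
            removed |= set(adj[v])   # neighbors of MIS nodes also drop out
        active -= removed
    return mis

# Small hypothetical graph.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
mis = luby_mis(adj, random.Random(42))
# Independence: no two MIS nodes are adjacent.
print(all(not (u in mis and v in mis) for u in adj for v in adj[u]))  # True
# Maximality: every node is in the MIS or has a neighbor in it.
print(all(v in mis or any(u in mis for u in adj[v]) for v in adj))    # True
```

In every round at least the globally smallest active value joins, so the loop always terminates; the classical analysis shows an expected $O(\log n)$ rounds.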
- Nov 25 2010 cs.DC arXiv:1011.5470v2 The question of what can be computed, and how efficiently, is at the core of computer science. Not surprisingly, in distributed systems and networking research, an equally fundamental question is what can be computed in a \emph{distributed} fashion. More precisely, if nodes of a network must base their decisions on information from their local neighborhood only, how well can they compute or approximate a global (optimization) problem? In this paper we give the first poly-logarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching. In addition, we present a new distributed algorithm for solving general covering and packing linear programs. For some problems this algorithm is tight with the lower bounds, while for others it is a distributed approximation scheme. Together, our lower and upper bounds establish the local computability and approximability of a large class of problems, characterizing how much local information is required to solve these tasks.
- We study the problem of clock synchronization in highly dynamic networks, where communication links can appear or disappear at any time. The nodes in the network are equipped with hardware clocks, but the rate of the hardware clocks can vary arbitrarily within specific bounds, and the estimates that nodes can obtain about the clock values of other nodes are inherently inaccurate. Our goal in this setting is to output a logical clock at each node, such that the logical clocks of any two nodes are not too far apart, and nodes that remain close to each other in the network for a long time are better synchronized than distant nodes. This property is called gradient clock synchronization. Gradient clock synchronization has been widely studied in the static setting, where the network topology does not change. We show that the asymptotically optimal bounds obtained for the static case also apply to our highly dynamic setting: if two nodes remain at distance d from each other for sufficiently long, it is possible to upper bound the difference between their clock values by O(d*log(D/d)), where D is the diameter of the network. This is known to be optimal for static networks, and since a static network is a special case of a dynamic network, it is optimal for dynamic networks as well. Furthermore, we show that our algorithm has optimal stabilization time: when a path of length d appears between two nodes, the time required until the clock skew between the two nodes is reduced to O(d*log(D/d)) is O(D), which we prove is optimal.
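The gradient property described above can be illustrated with a toy discrete-time simulation on a line of nodes: each node's logical clock advances at its hardware rate, and a node jumps forward whenever it trails its fastest neighbor by more than a threshold `kappa`. Both the jump rule and the threshold are illustrative assumptions for this sketch; the paper's algorithm is more refined and achieves the O(d*log(D/d)) gradient bound, which this simple rule does not.

```python
def simulate_sync(rates, kappa, steps, dt=0.01):
    """Toy simulation of a catch-up clock rule on a line of nodes.
    rates[i] is node i's (fixed) hardware clock rate; logical clocks
    advance at that rate, and a node that lags more than kappa behind
    its fastest neighbor jumps up to exactly kappa behind it."""
    n = len(rates)
    logical = [0.0] * n
    for _ in range(steps):
        # hardware clocks make progress at their individual rates
        for i in range(n):
            logical[i] += rates[i] * dt
        # catch-up rule: never trail a neighbor by more than kappa
        for i in range(n):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            ahead = max(logical[j] for j in nbrs)
            if logical[i] < ahead - kappa:
                logical[i] = ahead - kappa
    return logical
```

With a fast middle node, the two slower neighbors drift behind until the threshold binds, after which their skew to the middle node stays at `kappa`; the interesting part of gradient clock synchronization is making such a bound scale with the distance d rather than with a fixed threshold.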
- May 17 2010 cs.DC arXiv:1005.2567v1 We present the \emph{discrete beeping} communication model, which assumes nodes have minimal knowledge about their environment and severely limited communication capabilities. Specifically, nodes have no information regarding the local or global structure of the network, do not have access to synchronized clocks, and are woken up by an adversary. Moreover, instead of communicating through messages, they rely solely on carrier sensing to exchange information. We study the problem of \emph{interval coloring}, a variant of vertex coloring specially suited to the studied beeping model. Given a set of resources, the goal of interval coloring is to assign every node a large contiguous fraction of the resources such that neighboring nodes share no resources. To highlight the importance of the discreteness of the model, we contrast it against a continuous variant described in [17]. We present an $O(1)$-time algorithm that terminates with probability 1 and assigns an interval of size $\Omega(T/\Delta)$ that repeats every $T$ time units to every node of the network. This improves on an $O(\log n)$-time algorithm with the same guarantees presented in \cite{infocom09}, and accentuates the unrealistic assumptions of the continuous model. Under the more realistic discrete model, we present a Las Vegas algorithm that solves $\Omega(T/\Delta)$-interval coloring in $O(\log n)$ time with high probability and describe how to adapt the algorithm for dynamic networks where nodes may join or leave. For constant-degree graphs we prove a lower bound of $\Omega(\log n)$ on the time required to solve interval coloring in this model against randomized algorithms. This lower bound implies that our algorithm is asymptotically optimal for constant-degree graphs.
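The output guarantee of interval coloring can be made concrete with a small centralized sketch: greedily color the graph with at most $\Delta+1$ colors, then give color $c$ the interval $[c\cdot w, (c+1)\cdot w)$ of width $w = T/(\Delta+1)$, repeating every $T$ time units. Neighbors then hold disjoint intervals of size $\Omega(T/\Delta)$. This only illustrates the solution concept; the paper's algorithms are distributed and communicate through beeps alone.

```python
def interval_coloring(adj, T):
    """Centralized sketch of Omega(T/Delta)-interval coloring:
    a greedy (Delta+1)-coloring, mapped to disjoint sub-intervals
    of a frame of length T."""
    delta = max(len(nbrs) for nbrs in adj.values())
    w = T / (delta + 1)
    color = {}
    for v in sorted(adj):  # fixed order for a deterministic sketch
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(delta + 1) if c not in used)
    # each node repeats its interval every T time units
    return {v: (color[v] * w, (color[v] + 1) * w) for v in adj}
```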
- In this paper we suggest a method by which reference broadcast synchronization (RBS), and other methods of estimating clock values, can be incorporated in standard clock synchronization algorithms to improve synchronization quality. We advocate a logical separation of the task of estimating the clock values of other nodes in the network from the task of using these estimates to output a logical clock value. The separation is achieved by means of a virtual estimate graph, overlaid on top of the real network graph, which represents the information various nodes can obtain about each other. RBS estimates are represented in the estimate graph as edges between nodes at distance 2 from each other in the original network graph. A clock synchronization algorithm then operates on the estimate graph as though it were the original network. To illustrate the merits of this approach, we modify a recent optimal gradient clock synchronization algorithm to work in this setting. The modified algorithm transparently takes advantage of RBS estimates and any other means by which nodes can estimate each other's clock values.
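The structure of the virtual estimate graph is simple to sketch: keep the original edges (direct estimates) and add an edge between every pair of nodes at distance 2, i.e. two receivers of a common broadcaster, since comparing the reception times of that broadcast is exactly where RBS supplies an estimate. The adjacency-dict representation below is an illustrative assumption.

```python
def estimate_graph(adj):
    """Build the virtual estimate graph: original edges plus an RBS
    edge between any two neighbors of a common broadcaster that are
    not already adjacent (distance-2 pairs)."""
    est = {v: set(nbrs) for v, nbrs in adj.items()}
    for b, receivers in adj.items():
        # any two neighbors of broadcaster b can compare the reception
        # times of b's broadcast, yielding an RBS estimate edge
        rec = list(receivers)
        for i in range(len(rec)):
            for j in range(i + 1, len(rec)):
                u, w = rec[i], rec[j]
                if w not in adj[u]:  # keep only distance-2 pairs
                    est[u].add(w)
                    est[w].add(u)
    return est
```

A clock synchronization algorithm can then be run on `est` unchanged, which is the separation the abstract advocates.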
- The described multicoloring problem has direct applications in the context of wireless ad hoc and sensor networks. In order to coordinate the access to the shared wireless medium, the nodes of such a network need to employ some medium access control (MAC) protocol. Typical MAC protocols control the access to the shared channel by time (TDMA), frequency (FDMA), or code division multiple access (CDMA) schemes. Many channel access schemes assign a fixed set of time slots, frequencies, or (orthogonal) codes to the nodes of a network such that nodes that interfere with each other receive disjoint sets of time slots, frequencies, or code sets. Finding a valid assignment of time slots, frequencies, or codes hence directly corresponds to computing a multicoloring of a graph $G$. The scarcity of bandwidth, energy, and computing resources in ad hoc and sensor networks, as well as the often highly dynamic nature of these networks, requires that the multicoloring be computed based on as little and as local information as possible.
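The correspondence between multicoloring and TDMA slot assignment can be sketched as follows: color the conflict graph with at most $\Delta+1$ colors and hand each node every slot of the frame whose index is congruent to its color, so that interfering nodes receive disjoint slot sets. The greedy centralized coloring and the frame length are illustrative assumptions; the point of the text above is precisely that in ad hoc networks this computation must instead be done with little, local information.

```python
def tdma_schedule(adj, frame=12):
    """Sketch of TDMA slot assignment via coloring: greedily color the
    conflict graph with k = Delta+1 colors, then give each node the
    slots of the frame congruent to its color mod k, so interfering
    (adjacent) nodes get disjoint slot sets."""
    delta = max(len(nbrs) for nbrs in adj.values())
    k = delta + 1
    color = {}
    for v in sorted(adj):  # fixed order for a deterministic sketch
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(k) if c not in used)
    return {v: [s for s in range(frame) if s % k == color[v]] for v in adj}
```

The same mapping works for FDMA frequencies or CDMA code sets by replacing slot indices with frequency or code indices.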