Department

Computer Science

Document Type

Article

Publication Date

1999

Abstract

In this paper we describe methods for mitigating the degradation in performance caused by high latencies in parallel and distributed networks. For example, given any "dataflow" type of algorithm that runs in T steps on an n-node ring with unit link delays, we show how to run the algorithm in O(T) steps on any n-node bounded-degree connected network with average link delay O(1). This is a significant improvement over prior approaches to latency hiding, which require slowdowns proportional to the maximum link delay. In the case when the network has average link delay d_ave, our simulation runs in O(√d_ave · T) steps using n/√d_ave processors, thereby preserving efficiency. We also show how to efficiently simulate an n × n array with unit link delays using slowdown Õ(d_ave^(2/3)) on a two-dimensional array with average link delay d_ave. Last, we present results for the case in which large local databases are involved in the computation.
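As a quick check of the efficiency claim (this arithmetic is ours, not part of the original abstract): multiplying the simulation's processor count by its running time recovers the total work of the original unit-delay algorithm, which is what "preserving efficiency" means here.

% Work of the simulation = processors x steps, using the bounds
% stated in the abstract: n/sqrt(d_ave) processors for
% O(sqrt(d_ave) * T) steps.
\[
\underbrace{\frac{n}{\sqrt{d_{\mathrm{ave}}}}}_{\text{processors}}
\cdot
\underbrace{O\!\left(\sqrt{d_{\mathrm{ave}}}\, T\right)}_{\text{steps}}
\;=\; O(nT)
\]
% This matches the n * T work of the original n-node, T-step
% algorithm, so no efficiency is lost despite the slowdown.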
