Communication Complexity 16:198:671                                    3/2/10
Lecture 7
Lecturer: Troy Lee                                  Scribe: Luke Friedman

1   Streaming Algorithms
Let S ∈ [n]^m (i.e. S is a string of length m with elements in the range 1 to n). Suppose we
want to compute f(S) for some function f, but m is huge, so that it is impractical to try to
store all of S. In a streaming algorithm we assume that we see the input S one symbol at a
time, with no knowledge of what future symbols will be. The question then is how many bits
must be kept in memory in order to successfully compute f(S). If C is the number of bits of
memory used, then in general we are looking for algorithms for which C = O(log n + log m).
(Note that C includes all memory that is used during the execution of the algorithm, not
just information that is stored about S.)
Often we consider randomized streaming algorithms that compute approximations to
f(S). In this case, the algorithm correctly computes a c-approximation if, with high
probability, it computes a value in the range [f(S)/c, c·f(S)].
Example 1.
Let S ∈ [n]^{n−1}, and assume that S contains every element of [n] except for the number x.
Let f(S) = x. A streaming algorithm to compute f(S) can maintain a running sum

    T_k = Σ_{i=1}^k S_i

At the end of the stream, it outputs f(S) = n(n+1)/2 − T_{n−1}. This algorithm uses O(log n)
memory.
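As a quick illustration, the algorithm of Example 1 takes a few lines of Python (a sketch; the function name and interface are ours, and n is assumed known in advance):

```python
def find_missing(stream, n):
    """Return the unique element of {1, ..., n} absent from `stream`.

    Only the running sum is stored, so the memory used is O(log n)
    bits regardless of the stream's length.
    """
    total = 0
    for s in stream:              # one pass, one symbol at a time
        total += s
    return n * (n + 1) // 2 - total
```

For example, `find_missing([1, 2, 4, 5], 5)` returns 3.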
1.1   Computing Frequency Moments
The results from this section and the next are from the Gödel prize winning paper “The Space
Complexity of Approximating the Frequency Moments” by Alon, Matias, and Szegedy ’96.
Let S ∈ [n]^m and let

    M_i = |{j ∈ [m] : S_j = i}|

Then the kth frequency moment, denoted F_k, is

    F_k = Σ_{i=1}^n M_i^k

So for example F_0 equals the number of distinct elements in the stream and F_1 equals the
length of the stream. Also, by definition, F_∞ equals the number of occurrences of the most
frequent element in the stream.
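To pin down the definition, here is a direct, non-streaming computation of F_k in Python. It stores all of the counts M_i, which is exactly the memory cost that streaming algorithms try to avoid:

```python
from collections import Counter

def frequency_moment(stream, k):
    """Exact k-th frequency moment F_k = sum_i M_i^k.

    Uses memory proportional to the number of distinct elements.
    """
    counts = Counter(stream)            # M_i for each distinct element i
    return sum(m ** k for m in counts.values())

def f_infinity(stream):
    """F_infinity: the count of the most frequent element."""
    return max(Counter(stream).values())
```

For the stream [1, 1, 2, 3]: F_0 = 3, F_1 = 4, F_2 = 6, and F_∞ = 2.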
We can lower bound the memory needed to compute F∞ by reducing the problem to the
communication complexity of computing disjointness:
Claim 2. If there exists a streaming algorithm to compute F∞ using C bits of memory, then
there exists a protocol to compute disjointness using C bits of communication.
Proof. Suppose there exists a streaming algorithm A that computes F∞ using C bits of
memory.
Let (x, y) be an input to DISJ, where Alice gets x, Bob gets y, and x, y ∈ {0, 1}n .
Alice converts her input into a stream ax = {i : xi = 1}. Bob similarly converts his input
into a stream by = {i : yi = 1}. Alice then runs A on ax and sends C bits of information to
Bob representing the state of the algorithm after it has reached the end of ax . Bob can then
finish running A on by to compute F∞ (S), where S = ax ◦ by .
If DISJ(x, y) = 1 then F∞ (S) = 1 and otherwise F∞ (S) = 2. Therefore Alice and Bob
are able to solve DISJ using C bits of communication.
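The reduction can be simulated end to end. In the sketch below the "streaming algorithm" is a toy that stores full counts (a real C-bit algorithm would pass only its C-bit state from Alice to Bob); the function names are illustrative:

```python
from collections import Counter

def disj_via_streaming(x, y):
    """Decide DISJ(x, y) by streaming F_infinity over a_x ∘ b_y.

    x, y are 0/1 lists. The Counter plays the role of the algorithm's
    state, handed from Alice to Bob mid-stream. Returns 1 if the sets
    are disjoint, 0 otherwise.
    """
    a_x = [i for i, bit in enumerate(x) if bit]   # Alice's stream
    b_y = [i for i, bit in enumerate(y) if bit]   # Bob's stream
    state = Counter()
    for s in a_x:                 # Alice runs A on her part...
        state[s] += 1
    for s in b_y:                 # ...Bob resumes from Alice's state
        state[s] += 1
    f_inf = max(state.values()) if state else 0
    return 1 if f_inf <= 1 else 0
```

Here `f_inf <= 1` corresponds to F_∞(S) = 1 in the proof above.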
Because we have a lower bound of n on the communication complexity of the disjointness
problem, as a corollary we get a lower bound of n on the amount of memory needed to
compute F_∞. Also, since DISJ requires Ω(n) bits of communication even for randomized
protocols, the result applies to randomized streaming algorithms as well. Furthermore, the
gap between the two possible values 1 and 2 of F_∞(S) means that the result can be applied
in the approximation case as well.
1.2   Multiparty Number in the Hand Model and Frequency Moments
We can generalize the normal communication complexity model to allow more than two
players. In the Multiparty Number in the Hand Model there are p players, and the ith
player gets an input xi . Every player is able to see only his own input. We imagine there is
a blackboard, and when a player communicates he writes his message on the blackboard and
all players can see it. The communication cost of computing a function f (x1 , x2 , . . . , xp ) is
the total number of bits written on the blackboard during the worst case run of an optimal
protocol.
We will consider a certain promise problem, unique disjointness UDISJ_p, in this model.
The problem is defined only on inputs x_1, . . . , x_p that either have a unique intersection, in
which case the value is 0, or are pairwise disjoint, in which case the value is 1.
It is possible, using information theoretic techniques as discussed in the previous lecture,
to give strong randomized lower bounds for this problem. In particular, Gronemeier [Gro09]
has recently shown (improving previous bounds [AMS99, BYJKS04, CKS03]) that

    R(UDISJ_p) = Ω(n/p)
We can derive lower bounds for the problem of computing the kth frequency moment of
a stream by reducing that problem to UDISJ_p where p = n^{1/k}.

Claim 3. If there exists a streaming algorithm to compute F_k using C bits of memory, then
there exists a C·n^{1/k} bit, n^{1/k} party protocol for UDISJ_{n^{1/k}}.
Proof. The proof is similar to the reduction from F_∞. Fix k and let p = n^{1/k}. Suppose there
exists a streaming algorithm A to compute F_k using C bits of memory. On input (x_1, . . . , x_p)
to UDISJ_p, player i converts his input x_i into a stream a_{x_i} = {j : the jth bit of x_i equals 1}.
We assume that the total number of ones in all the strings x_1, . . . , x_p is n. The players then
together run A on the stream S = a_{x_1} ◦ . . . ◦ a_{x_p}. Whenever player i finishes running A on
the portion of S corresponding to a_{x_i}, he writes the state of the algorithm on the blackboard
using C bits.

At the end of the algorithm the players have computed F_k(S) using C·n^{1/k} bits of
communication. This is sufficient to determine UDISJ_p(x_1, . . . , x_p): if UDISJ_p(x_1, . . . , x_p) = 1
then F_k(S) = n, while if UDISJ_p(x_1, . . . , x_p) = 0 then the unique common element contributes
p^k = n and the remaining n − p ones each contribute 1, so F_k(S) = n − p + n = 2n − p.
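The same simulation works in the multiparty setting. Again a Counter stands in for the C-bit state written to the blackboard, and the inputs are assumed to satisfy the UDISJ promise:

```python
from collections import Counter

def udisj_via_streaming(xs, k):
    """Decide unique disjointness from the k-th frequency moment.

    xs: list of p 0/1-lists (the players' inputs), assumed to satisfy
    the UDISJ promise. Returns 1 (pairwise disjoint) iff F_k equals
    the total number of ones n; otherwise (unique intersection) F_k
    is strictly larger, and we return 0.
    """
    state = Counter()                       # the blackboard state
    for x in xs:                            # player i runs A on a_{x_i}
        for j, bit in enumerate(x):
            if bit:
                state[j] += 1
    n = sum(state.values())                 # total number of ones
    f_k = sum(m ** k for m in state.values())
    return 1 if f_k == n else 0
```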
As a corollary, since C·p = Ω(n/p), we get that computing F_k requires space Ω(n/p²) = Ω(n^{1−2/k}).
1.3   An algorithm for computing F_0
We now give an efficient randomized streaming algorithm for approximating F_0, from [AMS99].
(This is not related to communication complexity but is a nice example of a nontrivial
streaming algorithm.)

Let S ∈ [n]^m be the stream and let d be some integer such that 2^d ≥ n. We will view
each element S_i of the string as an element of GF(2^d). The streaming algorithm works as
follows:
1  MAX-R ← 0
2  Randomly choose a, b ∈ GF(2^d)
3  foreach element S_i ∈ S
4      do
5          Z_i ← a·S_i + b
6          r_i ← max r such that the r rightmost bits of Z_i are all 0
7          if r_i > MAX-R
8              then MAX-R ← r_i
9  return 2^MAX-R
During its execution the algorithm only needs to keep track of a, b, and MAX-R, each of
which takes O(log n) bits, so the algorithm uses O(log n) bits of memory.
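Here is a runnable sketch of the estimator, with one deliberate simplification: the pairwise-independent hash z = a·s + b is computed over the prime field Z_p (p a Mersenne prime) rather than GF(2^d) as in the notes; the trailing zeros of the hash value play the same role in the analysis below.

```python
import random

def trailing_zeros(x):
    """Number of rightmost 0 bits of x (returns 0 for x = 0; the
    event x = 0 has probability 1/p and is ignored in this sketch)."""
    return (x & -x).bit_length() - 1 if x else 0

def approx_f0(stream, p=(1 << 61) - 1):
    """One run of the F_0 estimator: return 2^MAX-R.

    Hashing is over Z_p with p prime -- a simplification of the
    GF(2^d) arithmetic used in the notes.
    """
    a = random.randrange(1, p)
    b = random.randrange(p)
    max_r = 0
    for s in stream:
        z = (a * s + b) % p          # pairwise-independent hash
        r = trailing_zeros(z)        # line 6 of the pseudocode
        if r > max_r:
            max_r = r
    return 2 ** max_r
```

A single run is a c-approximation only with probability 1 − 2/c (Claim 4 below), so in practice one takes a median over independent runs.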
Now we show that the algorithm is correct with high probability.
Claim 4.

    Pr[2^{MAX-R} ∈ [F_0/c, c·F_0]] ≥ 1 − 2/c
Proof. Fix an r, and for each x that appears in the stream S let W_{x,r} be the indicator random
variable that is 1 if the r_i computed in line 6 of the algorithm when S_i = x is at least r, and 0
otherwise. Because Z_i is uniformly distributed over the elements of GF(2^d), E[W_{x,r}] = 1/2^r
for any x. Let X be the set of all elements x ∈ [n] that appear in the stream. Define Y_r as

    Y_r = Σ_{x∈X} W_{x,r}
By linearity of expectation we have that E[Y_r] = F_0/2^r.

Suppose that 2^r > c·F_0. In this case we have that E[Y_r] < 1/c. Because Y_r is a
nonnegative integer-valued random variable, Markov's inequality implies that
Pr[Y_r > 0] = Pr[Y_r ≥ 1] ≤ E[Y_r] < 1/c.
On the other hand, suppose 2^r < F_0/c. Because the W_{x,r} are pairwise independent, we
have that

    Var(Y_r) = F_0 · (1/2^r)(1 − 1/2^r) < F_0/2^r = E[Y_r]
Therefore, by Chebyshev's Inequality,

    Pr[Y_r = 0] ≤ Var(Y_r)/E[Y_r]² < 1/E[Y_r] = 2^r/F_0 < 1/c
By a union bound over these two bad events we therefore get that

    Pr[2^{MAX-R} ∉ [F_0/c, c·F_0]] ≤ 2/c

which proves the claim.
2   Lopsided Set Disjointness
We will now prove communication complexity lower bounds for the Lopsided Set Disjointness
problem, which turns out to have many applications to data structure problems. The bound
originally was proved in [MNSW98]. Here we follow the presentation of Pǎtraşcu given in
his blog post http://infoweekly.blogspot.com/2008/05/being-rich-i_23.html.
The problem is similar to Disjointness: Alice is given an input x ∈ {0,1}^u, Bob is given
an input y ∈ {0,1}^u, and the goal is for the players to determine whether there exists an i
such that x_i = y_i = 1. However, we also include the additional guarantee that |x|_w = n and
|y|_w = u/2, with n ≪ u. (Here |x|_w denotes the Hamming weight of a binary string, i.e. the
number of ones that appear in the string.)
Let a be the number of bits that Alice sends during a protocol for Lopsided Set
Disjointness, and b be the number of bits that Bob sends. Trivially, the players can solve the
problem if Alice sends Bob a = log (u choose n) = O(n log(u/n)) bits specifying her input, or if Bob
sends Alice b = u bits specifying his input. Therefore in some sense a bit sent by Alice
carries more information, and it makes sense to measure the cost of a protocol in terms of a
and b separately and to study the tradeoffs between these two quantities.
For instance, we can show an upper bound on the size of b in terms of the size of a as
follows. Let k be a parameter and imagine the players divide their inputs into k equal blocks
of size u/k. Alice first sends Bob the name of every one of her blocks that contains at least
one 1. This takes a = log (k choose n) = O(n log(k/n)) bits. Next Bob sends the values of his input on
every one of these blocks, which takes b = (u/k)·n bits. Optimizing over the parameter k, we
get that

    b ≤ u/2^{a/n}
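The block protocol is easy to simulate. The simulation below checks correctness only and does not meter the bits sent (the bit counts are as computed in the text):

```python
def lopsided_block_protocol(x, y, k):
    """Simulate the two-round block protocol for lopsided disjointness.

    x, y are 0/1 lists of length u, with k dividing u. Alice announces
    which of the k blocks contain one of her 1's; Bob replies with his
    raw bits on exactly those blocks; Alice then checks for a common 1.
    Returns True iff x and y are disjoint.
    """
    u = len(x)
    block = u // k
    # Round 1: Alice -> Bob: names of her nonempty blocks.
    alice_blocks = [j for j in range(k) if any(x[j*block:(j+1)*block])]
    # Round 2: Bob -> Alice: his input restricted to those blocks.
    bob_reply = {j: y[j*block:(j+1)*block] for j in alice_blocks}
    # Alice decides locally.
    for j in alice_blocks:
        xs = x[j*block:(j+1)*block]
        if any(p and q for p, q in zip(xs, bob_reply[j])):
            return False
    return True
```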
For our lower bound we will show that this tradeoff is essentially optimal: we are able
to show that

    b ≥ u/2^{O(a/n)}
2.1   Proof
Call a communication matrix [U, V ] rich if there are V columns each with at least U 1’s in
them. We consider rows labeled by inputs to Alice and columns labeled by inputs to Bob.
Claim 5. Let M be a [U, V] rich communication matrix and suppose there exists a correct
(a, b) protocol for the communication problem. Then there exists a [U/2^a, V/2^{a+b}] all-one
submatrix in M.
Proof. The proof is by induction on a and b. Initially, by assumption, M itself is a [U, V]
rich rectangle.

Now suppose the protocol has entered a submatrix R of M which is [U′, V′] rich, and
that Bob sends a bit to Alice. This splits R into two subrectangles R′ and R″, one of which
contains at least half of the V′ rich columns of R. Therefore either R′ or R″ will be [U′, V′/2]
rich.

Now consider the case that Alice sends a bit to Bob at this step. This will again split R
into two subrectangles R′ and R″. In this case each of the V′ rich columns contains at least
U′/2 of its 1's in either R′ or R″; for at least half of these columns it is the same subrectangle,
so either R′ or R″ will be [U′/2, V′/2] rich.

Therefore at the end of the (a, b) protocol there will exist a [U/2^a, V/2^{a+b}] rich rectangle.
Because this rectangle has nonzero richness and therefore contains at least one 1, and because all
rectangles at the end of the protocol must be monochromatic, this implies the conclusion of
the claim.
Now, a column of interest in the communication matrix M for Lopsided Set Disjointness
is one labeled by a string y with u/2 1's in it, representing a possible input of Bob. For each
such string, there are (u/2 choose n) possible inputs of Alice that have n 1's and are disjoint from y.
Therefore M is [(u/2 choose n), (u choose u/2)] rich. By the claim, if there is an (a, b) protocol to solve Lopsided
Set Disjointness it must produce a monochromatic 1 rectangle of size

    (u/2 choose n)/2^a  ×  (u choose u/2)/2^{a+b}
Suppose this monochromatic 1 rectangle is defined by {x_1, x_2, . . .} × {y_1, y_2, . . .}. Because
the rectangle is monochromatic 1, every x_i is disjoint from every y_j. Therefore if we define
X = ∪_i x_i and Y = ∪_j y_j (where x_i ∪ x_j is the string that has a 1 in every bit position in which
either x_i or x_j has a 1), then

    |X|_w + |Y|_w ≤ u                                            (1)
Also, because every row has its n 1's inside X and every column has its u/2 1's inside Y,
we know that the size of the rectangle is at most

    (|X|_w choose n)  ×  (|Y|_w choose u/2)
From this we get that

    (|X|_w choose n) ≥ (u/2 choose n)/2^a                        (2)

    (|Y|_w choose u/2) ≥ (u choose u/2)/2^{a+b}                  (3)
Next we use the following binomial inequality: when B ≤ A,

    (A choose C) / (B choose C) ≥ (A/B)^C
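The inequality is easy to sanity-check numerically (a spot check, not a proof; the proof is the term-by-term bound (A − i)/(B − i) ≥ A/B for A ≥ B):

```python
from math import comb

def ratio_bound_holds(A, B, C):
    """Check (A choose C) / (B choose C) >= (A/B)^C for C <= B <= A."""
    return comb(A, C) / comb(B, C) >= (A / B) ** C

# exhaustive spot check over a small range of parameters
assert all(ratio_bound_holds(A, B, C)
           for A in range(2, 30)
           for B in range(1, A + 1)
           for C in range(0, B + 1))
```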
Using the inequality, we get from (2) that

    (u/(2|X|_w))^n < 2^a

and thus

    |X|_w > u/2^{O(a/n)}

From (1) we have that

    |Y|_w ≤ u − |X|_w ≤ u(1 − 2^{−O(a/n)})
Finally, using the binomial inequality on (3), we get that

    (u/|Y|_w)^{u/2} ≤ 2^{a+b}

    (1 + 2^{−O(a/n)})^{u/2} ≤ 2^{a+b}

    e^{Θ(u)/2^{O(a/n)}} ≤ 2^{a+b}

and therefore

    b ≥ u/2^{O(a/n)}

Here the second line uses the inequality 1/(1 − ε) > 1 + ε and the third line uses the estimate
(1 + 1/A)^B = e^{Θ(B/A)}.
References

[AMS99]    N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the
           frequency moments. Journal of Computer and System Sciences, 58(1):137–147,
           1999.

[BYJKS04]  Z. Bar-Yossef, T. Jayram, R. Kumar, and D. Sivakumar. An information statistics
           approach to data stream and communication complexity. Journal of Computer
           and System Sciences, 68(4):702–732, 2004.

[CKS03]    A. Chakrabarti, S. Khot, and X. Sun. Near-optimal lower bounds on the multi-
           party communication complexity of set disjointness. In Proceedings of the 18th
           IEEE Conference on Computational Complexity. IEEE, 2003.

[Gro09]    A. Gronemeier. Asymptotically optimal lower bounds on the NIH multi-party
           information complexity of the AND-function and disjointness. In Proceedings of
           the 26th Symposium on Theoretical Aspects of Computer Science, pages 505–516,
           2009.

[MNSW98]   P. Miltersen, N. Nisan, S. Safra, and A. Wigderson. On data structures and
           asymmetric communication complexity. Journal of Computer and System
           Sciences, 57(1):37–49, 1998.