Formal approach used

We think of a randomized algorithm as a machine $M$ (or function) that computes $M(x, r)$, where $x$ is the input, $\vert x\vert = n$ is its size, and $r$ is the sequence of random bits. The machine model is the usual RAM model: we have a memory space that is typically polynomial in the size of the input $x$, and in constant time we can read a memory location, write a memory location, or perform arithmetic operations on integers of up to $O(\log n)$ bits. For example, if $\vert x\vert = 1024$, then such an integer fits in $\log_2 1024 = 10$ bits.

In this model, we may find it easier to think of the random bits as being supplied on demand by some subroutine, where generating a random integer of $O(\log n)$ bits takes constant time. The justification for this assumption is that reading the next $O(\log n)$-bit value from the random input $r$ takes constant time, just like reading any other machine word.
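As a concrete illustration, here is a minimal Python sketch of this model. The names \texttt{M} and \texttt{next\_rand\_int}, and the toy computation itself, are our own choices (not part of the text or of any standard library); they only show how an algorithm consumes its random input $r$ as a stream of $O(\log n)$-bit integers.

\begin{verbatim}
import random

def next_rand_int(r, n):
    """Return the next random integer of about log2(n) bits,
    consuming bits from the iterator r (constant time per call
    in the RAM model)."""
    k = max(1, n.bit_length() - 1)      # roughly log2(n) bits
    value = 0
    for _ in range(k):
        value = (value << 1) | next(r)
    return value

def M(x, r):
    """Toy randomized algorithm: pick a uniformly random index
    into x (reduced modulo len(x)) and return that element."""
    n = len(x)
    i = next_rand_int(r, n) % n
    return x[i]

# Supply r as an (effectively infinite) stream of random bits.
r = iter(lambda: random.getrandbits(1), None)
print(M(list(range(1024)), r))
\end{verbatim}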

Because the number of these constant-time operations, and thus the running time of the algorithm as a whole, may depend on the random bits, it is now a random variable, that is, a function on points in some probability space. The probability space $\Omega$ consists of all possible sequences $r$, each of which is assigned a probability $\Pr[r]$ (typically $2^{-\vert r\vert}$), and the running time of $M$ on some input $x$ is generally given as an expected value $E_r[time(M(x, r))]$, where for any random variable $X$,

$\displaystyle E_r[X] = \sum_{r \in \Omega} X(r) \cdot \Pr[r].$ (8.1)
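For small $\vert r\vert$ this expectation can be computed exactly by brute force, which makes a useful sanity check. The sketch below enumerates all bit strings of a fixed length $k$ and applies equation (8.1); the function names are hypothetical, and the weight $2^{-k}$ assumes each of the $2^k$ sequences is equally likely.

\begin{verbatim}
from itertools import product

def expected_value(X, k):
    """Compute E_r[X] = sum over r in {0,1}^k of X(r) * Pr[r],
    where Pr[r] = 2**(-k) for every sequence r."""
    return sum(X(r) * 2**(-k) for r in product((0, 1), repeat=k))

# Example: X(r) = number of leading 1s in r, i.e. a "running time"
# that depends on the random bits.
def leading_ones(r):
    count = 0
    for bit in r:
        if bit == 0:
            break
        count += 1
    return count

print(expected_value(leading_ones, 10))   # 1 - 2**(-10), close to 1
\end{verbatim}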

We can now quote the performance of $M$ in terms of this expected value: where we would say that a deterministic algorithm runs in time $O(f(n))$, with $\vert x\vert = n$ the size of the input, we instead say that our randomized algorithm runs in expected time $O(f(n))$, which means that $E_r[time(M(x, r))] = O(f(\vert x\vert))$ for all inputs $x$.

This is distinct from traditional worst-case analysis, where there is no expectation and the bound must hold for every outcome, and from average-case analysis, where the expectation is taken over a distribution on the inputs rather than over the algorithm's own random bits. The following trivial example illustrates how such expected values are computed.
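To make the distinction concrete, here is a small Python sketch of a hypothetical toy algorithm (our own, not from the text) whose running time depends only on its random bits: it flips coins until it sees heads. Its worst-case running time is unbounded, yet its expected running time is constant, about two flips.

\begin{verbatim}
import random

def flips_until_heads(r):
    """Running time (number of flips) of a toy algorithm that
    flips coins until it gets heads.  Worst case: unbounded.
    Expected value: 2 flips."""
    count = 1
    while next(r) != 1:       # 1 = heads, 0 = tails
        count += 1
    return count

# Monte Carlo estimate of the expected running time.
trials = 100_000
r = iter(lambda: random.getrandbits(1), None)
estimate = sum(flips_until_heads(r) for _ in range(trials)) / trials
print(estimate)               # approximately 2
\end{verbatim}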

Example 8.1   Associate a random variable $X_H$ with the flip of a fair coin and compute its expected value.

Solution. Let the sample space be $S=\{H, T\}$. Define the indicator random variable $X_H$ for the coin coming up heads, i.e., for the event $H$:

$\displaystyle X_H =
\begin{cases}
1, & \text{if the coin comes up heads},\\
0, & \text{if the coin comes up tails}.
\end{cases}$ (8.2)

The expected number of heads obtained in a single flip of a fair coin is

$\displaystyle E[X_H]$ $\displaystyle = 1 \cdot \Pr\{H\} + 0 \cdot \Pr\{T\}$
  $\displaystyle = 1 \cdot \frac{1}{2} + 0 \cdot \frac{1}{2}$
  $\displaystyle = \frac{1}{2}.$
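A quick numerical check of this value, both exactly from the definition of expectation and by simulation, might look as follows; the fair-coin probability of $1/2$ is the only assumption.

\begin{verbatim}
import random

# Exact value from the definition: E[X_H] = 1*Pr{H} + 0*Pr{T}.
exact = 1 * 0.5 + 0 * 0.5
print(exact)                            # 0.5

# Monte Carlo estimate: average of the indicator over many flips.
trials = 100_000
estimate = sum(random.getrandbits(1) for _ in range(trials)) / trials
print(estimate)                         # approximately 0.5
\end{verbatim}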

So the expected value of the indicator random variable associated with an event $A$ is the probability that $A$ occurs. Expected values are especially useful when repeated trials are performed. For example, we could compute the expected number of heads in $n$ coin flips by separately considering the probability of obtaining 0 heads, 1 head, 2 heads, and so on, but linearity of expectation gives a much shorter route. Let $X_i$ be the indicator random variable associated with the event that the $i$-th flip comes up heads, and let $X$ be the random variable denoting the total number of heads in the $n$ coin flips. Then

$\displaystyle X = \sum_{i=1}^n X_i.$ (8.3)

To compute the expected number of heads, we take expectations on both sides and use linearity of expectation:

$\displaystyle E[X]$ $\displaystyle = E\left[\sum_{i=1}^n X_i\right]$
  $\displaystyle = \sum_{i=1}^n E[X_i]$
  $\displaystyle = \sum_{i=1}^n \frac{1}{2}$
  $\displaystyle = \frac{n}{2}.$

$\Box$
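As a final sanity check on the $n/2$ answer, a minimal simulation in Python (our own sketch, not part of the text) sums the $n$ indicator variables over many trials and compares the average against $n/2$.

\begin{verbatim}
import random

def total_heads(n):
    """X = X_1 + ... + X_n: total number of heads in n fair flips."""
    return sum(random.getrandbits(1) for _ in range(n))

n = 100
trials = 10_000
average = sum(total_heads(n) for _ in range(trials)) / trials
print(average, n / 2)        # the two values should be close
\end{verbatim}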