
The Algorithm

The Wang-Landau algorithm is a method that converges on $\Omega (E)$ to within a given tolerance. First, we must identify the energy levels accessible to our system; call this set $\left\{E_i: i=1,\dots,N_e\right\}$, where $N_e$ is the number of levels. Now imagine conducting a random walk in energy space such that the observed histogram of visited levels, $H(E)$, is uniform, or ``flat.'' This random walk must be guided by some underlying probability distribution, $\mathscr{N}$, which tells us whether to accept or reject a proposed step, $o\rightarrow n$, in the walk. Generally, $\mathscr{N}$ is incorporated in a Metropolis acceptance rule:

\begin{displaymath}
{\rm acc}(o\rightarrow n) = \min\left(1,\frac{\mathscr{N}(n)}{\mathscr{N}(o)}\right)
\end{displaymath} (344)

By conditionally accepting trials based on Eq. 344, we guarantee that $\mathscr{N}$ is a static equilibrium distribution (assuming our trial moves obey detailed balance, or the balance condition is met in some other way). Now, if $\mathscr{N}$ is the canonical distribution, the probability of observing a particular energy level, $E_i$, is proportional to $\exp\left(-E_i/k_BT\right)$.
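As a concrete sketch of this rule with canonical weights $\mathscr{N}(i)\propto\exp\left(-E_i/k_BT\right)$ (the function name, the `rng` argument, and the use of $\beta = 1/k_BT$ are choices made here for illustration, not part of the original presentation):

```python
import math
import random

def metropolis_accept(E_old, E_new, beta, rng=random.random):
    """acc(o->n) = min(1, N(n)/N(o)) with canonical weights
    N(i) ~ exp(-beta*E_i), where beta = 1/(k_B*T)."""
    ratio = math.exp(-beta * (E_new - E_old))  # N(n)/N(o)
    return rng() < min(1.0, ratio)
```

Downhill moves ($E_n < E_o$) are always accepted, since the ratio then exceeds 1; uphill moves are accepted with probability $e^{-\beta\Delta E}$.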

Now consider that each energy level has $\Omega(E_i)$ allowable microstates, each with the same probability, $1/\Omega(E_i)$, by the fundamental hypothesis. The probability of observing the system in energy level $E_i$ is the composite probability of observing it in one of the microstates with energy $E_i$. If we want the probability of observing the system in energy level $E_i$ to be uniform, then the probability of observing any particular microstate should be proportional to $1/\Omega(E_i)$. So, the probability distribution we should use is

\begin{displaymath}
\mathscr{N} \propto 1/\Omega(E_i),
\end{displaymath} (345)

and the acceptance rule becomes:
\begin{displaymath}
{\rm acc}(o\rightarrow n) = \min\left(1,\frac{\Omega(o)}{\Omega(n)}\right)
\end{displaymath} (346)
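A sketch of this acceptance rule in code (the function name is invented here, and $\Omega(o)$ is shorthand for $\Omega(E_o)$, as in the equation above):

```python
import random

def flat_accept(omega_old, omega_new, rng=random.random):
    """acc(o->n) = min(1, Omega(o)/Omega(n)): moves into levels with
    fewer microstates are always accepted, which biases the walk toward
    rarely visited energies and drives the histogram toward flatness."""
    return rng() < min(1.0, omega_old / omega_new)
```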

Now, here is why we want to construct a flat histogram. If we can, then we know that we have the ``correct'' $\Omega$. But we don't know $\Omega$ to begin with. So we need a way to begin with a guess for $\Omega$ and then systematically refine it until we achieve a flat histogram in energy space. An efficient technique for successively refining a guess for $\Omega$ from a sequence of Monte Carlo simulations was the major contribution of Wang and Landau. [22,23]

In this algorithm, we start with the initial guess $\Omega(E) = 1$, and the histogram is initialized as $H(E) = 0$. Then, we begin making trial moves and accepting or rejecting based on the acceptance rule in Eq. 346. After a move, we find ourselves in energy state $E_i$, and we update both the histogram and the density of states:

\begin{displaymath}
H(E_i) \rightarrow H(E_i) + 1
\end{displaymath} (347)
\begin{displaymath}
\Omega(E_i) \rightarrow f\,\Omega(E_i)
\end{displaymath} (348)

Note that this update is performed regardless of whether we arrive in state $E_i$ via an accepted or a rejected move. The density-of-states update factor, $f$, begins at a relatively large value ($f = e^1 \approx 2.71828$). The updates continue until the histogram is ``sufficiently flat.'' At this point, the histogram is reset to 0, and $f$ is reduced according to
\begin{displaymath}
f \rightarrow \sqrt{f}
\end{displaymath} (349)

Then the simulation continues until the histogram is again sufficiently flat. Over successive iterations, $f$ approaches 1, and the changes to $\Omega$ become smaller and smaller. The series of runs is terminated once $\ln f$ falls below about $10^{-8}$.
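The whole procedure can be sketched on a toy model whose answer is known exactly: take the ``energy'' to be the sum of two dice, so the degeneracies of levels $E = 2,\dots,12$ are exactly $1,2,3,4,5,6,5,4,3,2,1$. This is an illustrative sketch under stated assumptions, not a production implementation: the function and parameter names are invented, the chunk size and flatness tolerance are arbitrary choices, and $\ln\Omega$ is stored rather than $\Omega$ to avoid overflow (a point the text returns to below).

```python
import math
import random

def wang_landau_dice(flat_tol=0.2, ln_f_final=1e-4, seed=1):
    """Wang-Landau estimate of Omega(E) for E = sum of two dice."""
    rng = random.Random(seed)
    levels = list(range(2, 13))              # accessible energy levels
    ln_omega = {E: 0.0 for E in levels}      # initial guess Omega(E) = 1
    hist = {E: 0 for E in levels}            # histogram H(E) = 0
    d1, d2 = rng.randint(1, 6), rng.randint(1, 6)
    ln_f = 1.0                               # f starts at e
    while ln_f > ln_f_final:
        for _ in range(20000):
            # trial move: re-roll one randomly chosen die
            nd1, nd2 = d1, d2
            if rng.random() < 0.5:
                nd1 = rng.randint(1, 6)
            else:
                nd2 = rng.randint(1, 6)
            # acc = min(1, Omega(o)/Omega(n)), evaluated in the log
            ln_ratio = ln_omega[d1 + d2] - ln_omega[nd1 + nd2]
            if rng.random() < math.exp(min(0.0, ln_ratio)):
                d1, d2 = nd1, nd2
            E = d1 + d2                      # current level, accepted or not
            hist[E] += 1                     # update histogram...
            ln_omega[E] += ln_f              # ...and density of states
        mean = sum(hist.values()) / len(levels)
        if min(hist.values()) > (1.0 - flat_tol) * mean:  # "sufficiently flat"?
            hist = {E: 0 for E in levels}    # reset histogram
            ln_f *= 0.5                      # f -> sqrt(f)
    return ln_omega
```

Because the acceptance rule uses only ratios, the returned $\ln\Omega$ is determined up to an additive constant; differences such as $\ln\Omega(7)-\ln\Omega(2)\approx\ln 6$ are the meaningful quantities.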

By ``sufficiently flat,'' Wang and Landau mean a histogram in which the bin with the fewest hits holds at least 95% of the mean number of hits per bin. This is a particularly strong criterion, and more recent work suggests that it is adequate to require merely that each bin receive a minimum number of hits, regardless of the mean value [26].
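This criterion is a one-liner; a minimal sketch, assuming the histogram is held as a mapping from level to hit count (a representation chosen here for illustration):

```python
def is_flat(hist, frac=0.95):
    """True when the bin with the fewest hits holds at least `frac`
    of the mean number of hits per bin (Wang and Landau's criterion)."""
    counts = list(hist.values())
    mean = sum(counts) / len(counts)
    return min(counts) >= frac * mean
```

The relaxed criterion mentioned above would instead test `min(counts) >= n_min` for some fixed threshold `n_min`.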

The factor update relation (Eq. 349) is also somewhat arbitrary. What is required is that $f$ approach unity according to some schedule. A more general rule might be

\begin{displaymath}
f \rightarrow \sqrt[n]{f}, \qquad n \ge 2
\end{displaymath} (350)
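Since $f \rightarrow \sqrt[n]{f}$ is simply $\ln f \rightarrow \ln f/n$ in the log, one can count how many reduction stages the schedule needs to reach the $\ln f \approx 10^{-8}$ stopping point. A small worked example (the function and its defaults are invented here):

```python
def stages_to_converge(ln_f0=1.0, n=2, ln_f_final=1e-8):
    """Count applications of f -> f**(1/n), i.e. ln f -> ln f / n,
    needed to drive ln f from ln_f0 down to at most ln_f_final."""
    k, ln_f = 0, ln_f0
    while ln_f > ln_f_final:
        ln_f /= n
        k += 1
    return k
```

Starting from $f = e$ ($\ln f = 1$) with $n = 2$, 27 square-root stages are required; larger $n$ shortens the schedule at the cost of more abrupt reductions.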

It is important to note that, because the acceptance rule changes during the simulation, detailed balance is not strictly obeyed. However, it is very nearly obeyed once $f\approx 1$. How does one choose a proper initial value and update schedule for $f$? There is unfortunately no general guidance in the current literature, other than the original observation of Wang and Landau that too large a value of $f$ introduces large statistical fluctuations in $\Omega$ that may become ``frozen in,'' while too small a value of $f$ requires far too many visits to produce a flat histogram.

Also, the magnitude of $\Omega$ is normally so large that we can't tabulate it directly; it is advantageous instead to tabulate its logarithm. So the array used to store $\Omega$ actually stores $\ln\Omega$, and the update of a bin becomes:

\begin{displaymath}
\ln\Omega \rightarrow \ln\Omega + \ln f
\end{displaymath} (351)

The acceptance rule does not change, of course; it is simply reformulated as
\begin{displaymath}
{\rm acc}(o\rightarrow n) = \min\left[1,\exp\left(\ln\Omega(o)-\ln\Omega(n)\right)\right]
\end{displaymath} (352)
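In code, the log-form rule might be sketched as follows (names invented here for illustration):

```python
import math
import random

def accept_ln(ln_omega_old, ln_omega_new, rng=random.random):
    """acc(o->n) = min(1, exp(ln Omega(o) - ln Omega(n))), evaluated
    without ever exponentiating the (huge) ln Omega values themselves."""
    return rng() < math.exp(min(0.0, ln_omega_old - ln_omega_new))
```

Clamping the exponent at zero before calling `exp` keeps the evaluation overflow-free even when the two $\ln\Omega$ entries differ by hundreds.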

Finally, because the acceptance rule involves a ratio of probabilities, $\Omega$ is not determined absolutely, but rather to within a constant factor. Unless we know the exact $\Omega$ for some reference state (as we do for certain lattice models, such as the Ising magnet and the Potts glass), we cannot obtain absolute thermal quantities (entropy and free energies) from the $\Omega$ obtained using this method. This is usually not a problem, either because we can ``pin down'' $\Omega$ at a known reference, or because we don't care about absolute values, only differences.
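When a reference value is available, pinning down the constant is a single additive shift in the log. A minimal sketch, assuming $\ln\Omega$ is held as a mapping from level to value (names and representation invented here):

```python
def pin_ln_omega(ln_omega, ref_level, ln_omega_ref):
    """Shift the whole ln Omega table by a constant so the reference
    level takes its exactly known value, fixing the multiplicative
    constant left undetermined by the ratio-based acceptance rule."""
    shift = ln_omega_ref - ln_omega[ref_level]
    return {E: v + shift for E, v in ln_omega.items()}
```

For the Ising magnet, for example, the ground level is doubly degenerate, so one would pin $\ln\Omega$ there to $\ln 2$.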

