What connection is there between the moveset topology
and the convergence and accuracy
of sampling algorithms?
Each samples a specific distribution.
A network of spin sites that interact via $H = -J \sum_{\langle i,j \rangle} s_i s_j$, where the sum is over all adjacent sites (typically a lattice).
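A minimal sketch of this energy function for a 1D periodic chain (the function name and lattice choice are illustrative):

```python
import numpy as np

def ising_energy(spins, J=1.0):
    # H = -J * sum over adjacent pairs s_i * s_{i+1};
    # np.roll pairs each site with its right neighbor (periodic wrap)
    return -J * np.sum(spins * np.roll(spins, -1))

spins = np.ones(4, dtype=int)   # 4-site chain, all spins up
print(ising_energy(spins))      # -> -4.0
```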
Single spin flips define a moveset.
Other possible moves: double flips,
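Before varying the moveset, a baseline sampler is useful for comparison; a minimal Wang-Landau sketch using the single-spin-flip moveset on a 1D periodic chain (fixed sweeps per stage stand in for a proper histogram-flatness check; all names are illustrative):

```python
import numpy as np

def wang_landau_1d(n=8, sweeps_per_stage=500, f_final=1e-3, seed=0):
    # Estimate log g(E) for an n-site periodic Ising chain with
    # single-spin-flip moves (illustrative sketch, not production code).
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)
    E = -int(np.sum(s * np.roll(s, -1)))           # H = -sum s_i s_{i+1}
    log_g = {}                                     # running log-DOS estimate
    f = 1.0                                        # ln-modification factor
    while f > f_final:
        for _ in range(sweeps_per_stage * n):
            i = rng.integers(n)
            # energy change from flipping spin i (periodic neighbors)
            dE = 2 * s[i] * (s[(i - 1) % n] + s[(i + 1) % n])
            # accept with probability min(1, g(E) / g(E_new))
            if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E + dE, 0.0):
                s[i] = -s[i]
                E += dE
            log_g[E] = log_g.get(E, 0.0) + f       # raise log g at current E
        f /= 2                                     # shrink modification factor
    return log_g

log_g = wang_landau_1d(n=4, sweeps_per_stage=200)
```

For a 4-site periodic chain the exact degeneracies are g(-4) = g(4) = 2 and g(0) = 12, so the estimated log g should be largest at E = 0.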
What about graphs that follow other
degree distributions and correlations?
1D periodic chain, cycle graph,
Relative convergence times
Group the microstates into classes of
isomorphically equivalent spin arrangements;
Same moves (single-spin flips), different moveset graphs.
Stars give rise to "ladder"-type moveset graphs;
cycles are more complicated.
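For small systems the moveset graph can be enumerated directly; a sketch for the single-spin-flip moveset (plain Python, names illustrative):

```python
from itertools import product

def moveset_graph(n):
    # Nodes: all 2^n spin configurations; edges: pairs differing in
    # exactly one spin (the single-spin-flip moveset -> an n-cube).
    nodes = list(product([-1, 1], repeat=n))
    edges = set()
    for s in nodes:
        for i in range(n):
            t = s[:i] + (-s[i],) + s[i + 1:]
            edges.add(frozenset((s, t)))
    return nodes, edges

nodes, edges = moveset_graph(3)
print(len(nodes), len(edges))   # -> 8 12
```

Grouping these microstates by a macrostate label (energy, magnetization) then quotients this n-cube into the smaller macrostate-level graphs.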
What can we change?
Minimize the second-largest eigenvalue (i.e., maximize the spectral gap) of converged WL walks.
Minimize "round-trip" time between two extremal states.
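Round-trip times between the two extremal states can be estimated by direct simulation; a sketch (the helper and the 4-state example are hypothetical):

```python
import numpy as np

def mean_round_trip(P, n_trips=200, seed=0):
    # Estimate the mean round-trip time between the two extremal
    # states (0 and m-1) of a Markov chain by direct simulation.
    # P is any row-stochastic transition matrix.
    rng = np.random.default_rng(seed)
    m = P.shape[0]
    state, target, t, total = 0, m - 1, 0, 0
    for _ in range(n_trips):
        done = False
        while not done:
            state = rng.choice(m, p=P[state])
            t += 1
            if state == target:
                target = (m - 1) - target     # reached an end: turn around
                done = target == m - 1        # back at state 0: trip complete
        total += t
        t = 0
    return total / n_trips

# unbiased walk on a 4-state path, reflecting at both ends
P = np.array([[0., 1., 0., 0.],
              [.5, 0., .5, 0.],
              [0., .5, 0., .5],
              [0., 0., 1., 0.]])
print(mean_round_trip(P))   # expectation is 2*(m-1)^2 = 18 for this walk
```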
We can optimize a new move by minimizing the round-trip time and weighting the new move relative to the old ones.
Possible new moves: inversions, -spin flips,
bridges, and "cheats".
This changes the edges in the moveset graph.
Assume that optimized moves will carry over during
the non-Markovian phase of the algorithm.
Fix the moveset, now try to optimize the weights.
This leads to non-flat histograms.
Label the flow of walkers from the extremal states,
Expand steady-state current to first order,
Assume that the weights are slowly varying in energy,
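Under these assumptions, a Trebst-style feedback iteration reweights each energy bin by the measured gradient of the walker-label fraction; a hedged numpy sketch (function name and clipping choices are mine):

```python
import numpy as np

def feedback_update(w, H, f, dE=1.0):
    # One Trebst-style feedback iteration: w'(E) ~ w(E) * sqrt((df/dE) / H(E)).
    # w: current bin weights, H: measured histogram, f: measured fraction
    # of walkers labeled as last visiting the "+" extremal state.
    dfdE = np.clip(np.gradient(f, dE), 1e-12, None)   # keep gradient positive
    w_new = w * np.sqrt(dfdE / np.clip(H, 1.0, None))
    return w_new / w_new.sum()                        # normalize

w = np.ones(5) / 5
H = np.full(5, 100.0)
f = np.linspace(0.0, 1.0, 5)     # ideal linear label fraction
print(feedback_update(w, H, f))  # uniform f' and H -> weights stay uniform
```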
Absorbing Markov Chains
mean/variance of absorption times
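Both statistics follow from the fundamental matrix of the absorbing chain; a sketch (the 4-state walk, reflecting at one end and absorbed at the other, is illustrative):

```python
import numpy as np

def absorption_time_stats(Q):
    # Fundamental matrix N = (I - Q)^{-1} for transient block Q;
    # mean absorption times t = N.1, variances (2N - I).t - t^2.
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)
    t = N @ np.ones(n)
    var = (2 * N - np.eye(n)) @ t - t**2
    return t, var

# symmetric walk on states 0..3, reflecting at 0, absorbed at 3
Q = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
t, var = absorption_time_stats(Q)
print(t)   # -> [9. 8. 5.]
```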
WL, Trebst, and Isochronal weights
are not necessarily optimal.
Minimize round-trip times between all pairs
of states, not just the extremal states?
Consider not just mean round-trip times,
but higher moments (e.g. variance, skew)?
Quantify the sampling difficulty
by the moveset topology?
There is room for improvement in the optimal moveset;
small systems provide insight into larger state spaces.
Trebst sampling is an improvement over flat histograms,
but assumes a smooth DOS.
Sampling at energy macrostates is coarse;
it could possibly be improved with better macrostate fidelity.