mcmc module

This module defines one key function, chain(), which handles the Markov chain. So far the code runs a single chain, since no parallelization is implemented.

The following routine, called at every step, is also defined in this module:

  • get_new_position() returns a new point in the parameter space, depending on the proposal density.

chain() in turn calls several helper routines, defined in sampler. These are called just once; their usage is described in sampler. In contrast, the following routines are called at every step:

The arguments of these functions will often contain data and/or cosmo, which are initialized instances of, respectively, the data class and the cosmological class. They will therefore not be described for every function.

mcmc.chain(cosmo, data, command_line)[source]

Run a Markov chain of fixed length with a Metropolis-Hastings algorithm.

Main function of this module, this is the actual Markov chain procedure. After selecting a starting point in parameter space, which defines the first last-accepted point, it will, for a given number of steps:

  • randomly choose a new point following the proposal density,
  • compute the cosmological observables through the cosmological module,
  • compute the value of the likelihoods of the desired experiments at this point,
  • accept/reject this point given its likelihood compared to that of the last accepted point.
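The loop above can be sketched as a minimal Metropolis-Hastings routine. This is an illustration only, not the module's actual code: log_likelihood and propose are hypothetical stand-ins for the cosmological/likelihood machinery and the proposal density.

```python
import math
import random

def metropolis_hastings(log_likelihood, propose, start, n_steps):
    """Minimal Metropolis-Hastings loop mirroring the steps above.

    `log_likelihood` maps a point to its log-likelihood; `propose`
    draws a candidate from the proposal density around the last
    accepted point. Both are hypothetical stand-ins.
    """
    last = start
    last_loglkl = log_likelihood(last)
    chain = []
    for _ in range(n_steps):
        candidate = propose(last)              # random move from the proposal
        loglkl = log_likelihood(candidate)     # likelihood at the new point
        # Metropolis rule: always accept uphill moves; accept downhill
        # moves with probability exp(delta log-likelihood).
        if loglkl >= last_loglkl or random.random() < math.exp(loglkl - last_loglkl):
            last, last_loglkl = candidate, loglkl
        chain.append(last)                     # rejected steps repeat the last point
    return chain
```

Note that a rejected step still appends the last accepted point, so the chain has fixed length and repeated points carry the correct weight.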

Every time the code accepts data.write_step points (a quantity defined in the input parameter file), it writes the result to disk, flushing the buffer by closing the output file and reopening it.
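The batched-write pattern can be sketched as follows. This is a simplified illustration with hypothetical names (write_chain is not a function of this module), using a plain flush() rather than the close/reopen cycle the module performs:

```python
def write_chain(accepted_points, write_step, path):
    """Write accepted points to `path` in batches of `write_step`.

    Flushing after each batch means a crashed run still leaves a
    usable partial chain on disk.
    """
    buffer = []
    with open(path, "w") as out:
        for point in accepted_points:
            buffer.append(" ".join("%.6g" % x for x in point))
            if len(buffer) == write_step:
                out.write("\n".join(buffer) + "\n")
                out.flush()          # push the batch to disk
                buffer.clear()
        if buffer:                   # write any remainder at the end
            out.write("\n".join(buffer) + "\n")
```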


To use the code to set a fiducial file for certain fixed parameters, there are two solutions. The first is to set every input 1-sigma proposal density to zero (this method still works, but is no longer recommended). The second consists in using the flag “-f 0”, which forces a step of zero amplitude.

mcmc.get_new_position(data, eigv, U, k, Cholesky, Rotation)[source]

Obtain a new position in the parameter space from the eigenvalues of the inverse covariance matrix, or from the Cholesky decomposition (original idea by Anthony Lewis, in Efficient sampling of fast and slow cosmological parameters).

The three jumping options, selected when starting a run with the flag -j, are global, sequential and fast (the default); see parser_mp for reference.


For running Planck data, the fast option is highly recommended, as it speeds up convergence. Note that when using this option, the ordering of the likelihoods in your parameter file must match the ordering of your nuisance parameters (as always, they must come after the cosmological parameters, but they must also be grouped by likelihood, preferentially with the slowest likelihood to compute coming first).

  • global: varies all the parameters at the same time. Depending on the input covariance matrix, degeneracy directions will be followed; otherwise every parameter jumps independently of the others.
  • sequential: varies every parameter sequentially. Works best when you have no clue about the covariance matrix, or to understand which estimated sigma is wrong and slowing down the whole process.
  • fast: preferred method when running the Planck likelihood. Described in the aforementioned article, it separates slow (cosmological) and fast (nuisance) parameters.
Parameters:

  • eigv (numpy array) – Eigenvalues previously computed.
  • U (numpy array) – Covariance matrix.
  • k (int) – Number of points so far in the chain; used to rotate through parameters.
  • Cholesky (numpy array) – Cholesky decomposition of the covariance matrix, and its inverse.
  • Rotation (numpy array) – Not used yet.
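The three jumping strategies can be sketched as below. This is a heavily simplified illustration with hypothetical names (propose_jump is not the module's get_new_position(), and the real fast mode also involves the eigenvector rotation and the slow/fast parameter split):

```python
import numpy as np

def propose_jump(last, sigmas, k, method="sequential", cholesky=None):
    """Illustrative jump proposals for the three methods described above.

    `last` is the last accepted point, `sigmas` the per-parameter
    proposal widths, `k` the step counter used to cycle through
    parameters, `cholesky` a Cholesky factor of the covariance matrix.
    """
    rng = np.random.default_rng()
    if method == "global":
        # vary all parameters at once
        return last + sigmas * rng.standard_normal(last.size)
    i = k % last.size                # cycle through parameters with k
    if method == "fast" and cholesky is not None:
        # move along one Cholesky column at a time (Lewis's idea,
        # greatly simplified here)
        return last + cholesky[:, i] * rng.standard_normal()
    # sequential: vary a single parameter per step
    new = last.copy()
    new[i] += sigmas[i] * rng.standard_normal()
    return new
```

In sequential mode, exactly one coordinate changes per call, which is why the step counter k is passed in: it determines which parameter moves next.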