Abstract
Background
In recent years, stochastic descriptions of biochemical reactions based on the Master Equation (ME) have become widespread. These are especially relevant for models involving gene regulation. Gillespie’s Stochastic Simulation Algorithm (SSA) is the most widely used method for the numerical evaluation of these models. The SSA produces exact samples from the distribution of the ME for finite times. However, if the stationary distribution is of interest, the SSA provides no information about convergence or how long the algorithm needs to be run to sample from the stationary distribution with given accuracy.
Results
We present a proof and numerical characterization of a Perfect Sampling algorithm for the ME of networks of biochemical reactions prevalent in gene regulation and enzymatic catalysis. Our algorithm combines the SSA with Dominated Coupling From The Past (DCFTP) techniques to provide guaranteed sampling from the stationary distribution. The resulting DCFTP-SSA is applicable to networks of reactions with unimolecular stoichiometries and sublinear, (anti)monotone propensity functions. We showcase its applicability by studying steady-state properties of stochastic regulatory networks of relevance in synthetic and systems biology.
Conclusion
The DCFTP-SSA provides an extension to Gillespie's SSA with guaranteed sampling from the stationary solution of the ME for a broad class of stochastic biochemical networks.
Background
Recent experiments on gene and enzyme activity at single cell resolution have revealed the inherent randomness of key cellular processes linked to gene expression [1-3]. The experiments show that populations with identical genotypes present heterogeneous phenotypes and that noise at the molecular level, due to low copy numbers, contributes to population diversity. For mathematical models to capture this variability, a stochastic description is required.
Stochastic models in Computational Biology are usually based on the Master Equation (ME) of the chemical reaction kinetics [4-6]. Formally, the ME is a differential form of the Chapman-Kolmogorov equation, which gives the time evolution of P(x, t), the probability of the state of the system x. Only a handful of analytical solutions of the ME have been found and one must usually resort to approximations or numerical solutions. The most popular numerical procedure is Gillespie's Stochastic Simulation Algorithm (SSA) [7,8], a kinetic Monte Carlo algorithm that provides exact stochastic realizations of the underlying system of reactions. Each run of the SSA produces a time trace for the system; a collection of independent runs can be used to obtain convergent statistics of the time-dependent solution of the ME. In many situations, one is interested in the steady state properties of the system, i.e., in the stationary distribution of the ME, π. Although in principle π could be obtained as the first left eigenvector of the transition matrix, this computation is infeasible for most problems of interest due to the combinatorial explosion of the state space [9]. To circumvent this problem, it has become customary to sample π by running the SSA for a 'very long time', convincing oneself through different heuristics that stationarity has been attained. However, the SSA does not provide guarantees or information about how long the algorithm must run to converge to π. In recent years there has been an increased interest in finding algorithms which can address the issue of sampling from stationarity, e.g., a strategy based on forward flux sampling [10].
In a seminal paper in the field of Markov Chain Monte Carlo, Propp and Wilson introduced the idea of Coupling From The Past (CFTP), an ingenious procedure that provides guaranteed sampling from the stationary distribution of a Markov chain by running coupled chains from all possible initial conditions from the past [11]. Algorithms that guarantee sampling from the stationary distribution of a Markov chain are referred to as Perfect Sampling algorithms [11-14]. Recently [15], we introduced a Perfect Sampling algorithm for the SSA of biochemical networks based on Kendall's Dominated CFTP (DCFTP) [13]. This paper expands on our previous work by providing an explicit implementation of the algorithm together with a mathematical proof of its applicability to a class of reactions prevalent in models of gene regulation. We also study its numerical properties through a series of expanded examples drawn from Systems and Synthetic Biology.
Preliminaries and definitions
Dominated Coupling From The Past (DCFTP)
We give here a brief introduction to the CFTP framework (see [11-13] for full proofs).
The central idea behind CFTP is to find a time in the past such that the whole state space is mapped to the same state at the present, for a given sequence of random numbers. When that occurs, the state at the present can be considered to be a sample of the stationary distribution. More formally, consider a Markov process defined by the transition rule X_{t+1} = ϕ(X_{t}), where X_{t} ≡ x(t) is shorthand for the state of the system at time t. Any Markov chain started from t = −∞ will have reached stationarity at time t = −T. If a chain with an unknown value X_{−T} is continued to run until t = 0, it will attain a value X_{0} = ϕ^{T}(X_{−T}), which also comes from the stationary distribution. The CFTP algorithm searches for a time −T such that the composite function ϕ^{T}(X_{−T}) has a unique image for all arguments X_{−T}. This implies that the chain started at −T is equivalent to a chain started from t = −∞, since it will reach the same state X_{0} regardless of its value at t = −T. Hence the sample X_{0} comes from the stationary distribution. Starting from the past and running into the present might seem counterintuitive and unnecessarily complicated. However, it is key for the algorithm to work: it can be shown that starting at t = 0 and coupling into the future does not guarantee that the samples are unbiased.
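The doubling search for a coalescence time described above can be sketched for a toy monotone chain. The following Python sketch is our own illustration, not code from the paper: it implements Propp and Wilson's scheme for a reflected random walk on {0,...,5} (the parameters N and P_UP are arbitrary choices), reusing the same uniform random numbers at each time slot and restarting from twice as far in the past until the chains started from the extremes coalesce:

```python
import random

# Toy monotone chain on {0,...,N}: a reflected random walk that steps up
# with probability P_UP and down otherwise. N and P_UP are illustrative.
N, P_UP = 5, 0.3

def step(x, u):
    # Transition rule X_{t+1} = phi(X_t, u); the same uniform u drives
    # every coupled chain, and phi is monotone in x.
    return min(N, x + 1) if u < P_UP else max(0, x - 1)

def cftp_sample(rng):
    us = []              # us[k] is reused at time -(k+1) on every retry
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        hi, lo = N, 0                      # extremes of the state space
        for k in range(T - 1, -1, -1):     # evolve from t = -T to t = 0
            hi, lo = step(hi, us[k]), step(lo, us[k])
        if hi == lo:                       # phi^T maps all states to one
            return hi
        T *= 2                             # start further back and retry
```

Because the update is monotone, tracking only the chains started from the top and bottom states suffices to certify coalescence of the whole state space.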
For large state spaces it is infeasible to monitor all initial conditions at time −T. However, this can be done efficiently if one can find a partial ordering over the state space that is preserved by the transition rule [12]:

x ⪰ y ⟹ ϕ(x) ⪰ ϕ(y),

where ⪰ denotes the partial order, i.e., a binary relation which is reflexive, antisymmetric and transitive, although it does not necessarily satisfy total comparability. Under these conditions, the whole state space can be monitored by checking for the coalescence of coupled Markov chains started at the upper and lower extremes of the state space [11,16].
Two Markov chains are said to be coupled if they use the same sequence of random numbers and the same transition rule but are started from different initial conditions. Coupled chains that meet at a time T_{c} are said to coalesce and will have identical states for t > T_{c}. A necessary (but not sufficient) condition for the preservation of the partial ordering is that the transition function is either monotone or antimonotone:

monotone: X_{t} ⪰ Y_{t} ⟹ ϕ(X_{t}) ⪰ ϕ(Y_{t}); antimonotone: X_{t} ⪰ Y_{t} ⟹ ϕ(X_{t}) ⪯ ϕ(Y_{t}),

for coupled chains X and Y. If the partial order is preserved, we can monitor only the paths started at the 'extremes' of the state space, since all the paths in between remain bounded by them. We therefore define upper and lower coupled Markov chains, U and L, started from the maximal and minimal elements of the state space, that enclose all other paths.
The preservation of the partial order implies two important properties for coupled chains:

Sandwiching: all paths started between L and U will have coalesced by the time L and U do,

L_{t} ⪯ X_{t} ⪯ U_{t}, ∀t ∈ [−T, 0];   (1)

Funneling: all paths will get closer if they are started further back into the past,

L_{t}^{(−T)} ⪯ L_{t}^{(−T′)} ⪯ U_{t}^{(−T′)} ⪯ U_{t}^{(−T)}, ∀T′ ≥ T, t ∈ [−T, 0].   (2)
If the state space is unbounded from above, we need to use Kendall's DCFTP construction. DCFTP works by introducing a time-evolving dominating process D with known stationary distribution, which provides a random upper bound to the state space. The original process X can then be generated as an adapted functional of the dominating process and a mark process M:

X_{t_k} = Ψ(X_{t_{k−1}}; D_{t_{k−1}}, ξ_{k}),   (3)

where the t_{k} are the event times of D and the ξ_{k} the associated marks.
The mark process generates a uniform random number each time D is changed. These marks are used to update the original process X according to the adapted functional (3) in a process that is equivalent to the direct simulation of X [12]. Heuristically, the DCFTP scheme works as follows. Since the dominating process is started from the stationary distribution at t = −T, it is equivalent to a process started from t = −∞. By the funneling property, all chains from the original process started from t = −∞ will be beneath the dominating process: X_{t} ⪯ D_{t}, ∀t. If we set U_{−T} = D_{−T} and L_{−T} = 0 and check that these two extreme paths coalesce, then all chains started from t = −T map to the same state at t = 0, due to the sandwiching property. It then follows that U_{0} = L_{0} is equivalent to the state of a chain started from t = −∞ and the sample comes from the stationary distribution of X, due to the equivalence of the adapted functional and the original process. Note that if D can be chosen to be a constant process equal to the maximal element of the state space, we obtain the CFTP algorithm [13].
These results are summarized in the following theorem for general DCFTP algorithms [12,13]:
Theorem 1 (DCFTP) Consider a stationary dominating process D, for which 0 is an ergodic atom, and an associated random mark process M. Suppose that the processes L and U are produced from D and M according to the adapted functional (3) so that the sandwiching and funneling properties (1)–(2) are fulfilled. Suppose further that X converges weakly to an invariant distribution π as t → ∞. Then L and U will coalesce almost surely in finite time and, if coalescence is achieved, L_{0} = U_{0} is a sample from the stationary distribution π.
Proof See [13].
Stochastic Simulation Algorithm (SSA)
This section briefly presents the classic Gillespie algorithm (SSA) for the exact simulation of the Master Equation of chemical reaction networks [7].
Definition 2 (Chemical reaction network) A system of chemical reactions is fully specified by the tuple (𝒮, ℛ), where 𝒮 = {S_{1},...,S_{m}} is a set of m different molecular species interacting through r reaction channels ℛ = {R_{1},...,R_{r}}. Each reaction R_{i} is described by a stoichiometry vector ν_{i}, which gives the change in the number of molecules of all species when reaction R_{i} occurs, and a propensity function Φ_{i}(x), which gives the state-dependent probability that reaction R_{i} occurs. The state of the system is given by X_{t} ≡ x(t) = (x_{1}(t),...,x_{m}(t)) ∈ ℕ^{m}, where each component x_{i}(t) indicates the number of molecules of S_{i} at time t.
Under the assumption that the molecules are confined to a well-stirred volume and held at constant temperature, we can formulate a ME governing the evolution of the system [7]:

∂P(x, t)/∂t = Σ_{i=1}^{r} [Φ_{i}(x − ν_{i}) P(x − ν_{i}, t) − Φ_{i}(x) P(x, t)].   (4)
The ME is a conservation equation for the probability distribution and the right hand side accounts for the rate of change of the probability of finding the system in state x.
A general procedure to obtain exact realizations of Markov processes, first suggested by Doob [17], was applied to chemical reactions by Gillespie in his celebrated Stochastic Simulation Algorithm [7]:
Algorithm 3 (SSA) Given a chemical reaction network (𝒮, ℛ), as in Definition 2, with initial state x(t_{0}) = x_{0} and stopping time T_{s}:

k ← 0
loop
k ← k + 1
draw V_{k}, V′_{k} ~ U(0, 1)
for i = 1 to r do
θ_{i} ← Σ_{j=1}^{i} Φ_{j}(x(t_{k−1}))
end for
t_{k} ← t_{k−1} − (1/θ_{r}) log V_{k}
if t_{k} > T_{s} then
return the path {x(t_{0}),...,x(t_{k−1})}
else
μ ← smallest index such that θ_{μ} ≥ V′_{k} θ_{r}
x(t_{k}) ← x(t_{k−1}) + ν_{μ}
end if
end loop
A run of the SSA uses the uniform random numbers V, V′ to generate a random sequence of reactions ℜ = {R_{t_{1}},...,R_{t_{n}}}, taking place at the random transition times {t_{1},...,t_{n}} such that t_{n} < T_{s} < t_{n+1}. The path {x(t_{0}),...,x(t_{n})} is an exact stochastic realization of Eq. (4). Note that the sequence of reactions ℜ uniquely determines the path. For convenience, we have committed a slight abuse of notation by using real-valued indices to denote the state and reaction taking place at time t_{k}.
Henceforth, we represent compactly the SSA Markov process implemented by Algorithm 3 as:

X_{t} = SSA(x_{0}, t_{0}), t ∈ [t_{0}, T_{s}].   (5)
For an arbitrary initial state x_{0}, repeated runs of the SSA will produce convergent estimates (in the Monte Carlo sense) of the distribution P(x, t | x_{0}, t_{0}), ∀t ∈ [t_{0}, T_{s}] [8]. However, if one is interested in the stationary distribution π, running the SSA repeatedly from different initial conditions for a finite time T_{s} does not guarantee that P(x, T_{s}) will converge to π, unless the starting points are themselves drawn from π. Our Perfect Sampling algorithm addresses this issue.
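For concreteness, the direct method of Algorithm 3 can be sketched in a few lines of Python. The encoding of the network (lists of stoichiometry vectors and propensity callables) and the example rate constant k are our own illustrative choices, not the authors' code:

```python
import math, random

def ssa(nu, prop, x0, t0, Ts, rng):
    """Direct-method SSA sketch: nu is a list of stoichiometry vectors and
    prop a list of propensity callables, one per reaction channel."""
    x, t = list(x0), t0
    while True:
        a = [phi(x) for phi in prop]            # propensities Phi_i(x)
        a0 = sum(a)
        if a0 == 0.0:
            return x                            # absorbing state
        t += -math.log(rng.random()) / a0       # exponential waiting time
        if t > Ts:
            return x                            # state at the stopping time
        target, acc = rng.random() * a0, 0.0    # pick channel mu with
        for mu, ai in enumerate(a):             # probability a[mu]/a0
            acc += ai
            if acc > target:
                break
        x = [xi + d for xi, d in zip(x, nu[mu])]

# Immigration-death example: 0 -> A at rate k, A -> 0 at rate x
k = 5.0
nu = [[+1], [-1]]
prop = [lambda x: k, lambda x: float(x[0])]
```

Averaging the final states of many runs gives a Monte Carlo estimate of the time-dependent distribution at T_s, but, as the text notes, not of the stationary distribution unless T_s is long enough.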
Proof of the DCFTP-SSA for a class of networks of biochemical reactions
Viewing the SSA as the Markov process described by (5), we have developed a specific DCFTP algorithm that provides guaranteed sampling from the stationary distribution of the corresponding chemical ME [15]. We now provide a rigorous proof and an explicit implementation of the DCFTP-SSA for an important class of biochemical reactions relevant in gene regulation.
Partial ordering
We use the Pareto dominance relation, frequently used in economics, which is defined componentwise:
Lemma 4 (Partial order) Given x, y ∈ ℕ^{m}, the relation x ⪰ y if x_{i} ≥ y_{i}, ∀i, is a partial order.
Proof The proof follows trivially from the properties of natural numbers:
Reflexivity: ∀x_{i} ∈ ℕ, x_{i} ≥ x_{i}, whence x ⪰ x

Antisymmetry: ∀x_{i}, y_{i} ∈ ℕ, if x_{i} ≥ y_{i} and y_{i} ≥ x_{i} then x_{i} = y_{i}. This means that x ⪰ y and y ⪰ x implies x = y

Transitivity: ∀x_{i}, y_{i}, z_{i} ∈ ℕ, if x_{i} ≥ y_{i} and y_{i} ≥ z_{i} then x_{i} ≥ z_{i}. And the same property applies to the vectors: x ⪰ y and y ⪰ z implies x ⪰ z. □
Assumptions on the reaction network
Consider a system of chemical reactions as given by Definition 2 with state vector x(t) ∈ ℕ^{m}. To guarantee the preservation of the Pareto partial order under the SSA Markov process (5), we restrict ourselves to a class of chemical networks with the following properties:
(a) all reactions are unimolecular birth-death processes with nonzero propensities, i.e., each reaction R_{i} will only modify one species S_{j} by adding or subtracting one molecule. The reactions can be divided into two subsets: birth reactions, with ν_{i} = e_{j}, and death reactions, with ν_{i} = −e_{j}, where e_{j} denotes the j-th unit vector
(b) the system must be chemically reversible, i.e., every reaction must be reversible, leading to an irreducible Markov process
(c) all death reactions must be linear, i.e., the death reaction for species S_{j} has propensity Φ_{i}(x) = c_{i} x_{j} with rate constant c_{i} > 0
(d) all birth reactions must have (anti)monotonic, sublinear propensity functions, i.e., ∀i, j, ∀x: ∂Φ_{i}(x)/∂x_{j }does not change sign and Φ_{i }can be bounded by a linear function (or a constant).
As shown below, the last two assumptions are related to domination by a linear network which is required to have a stationary distribution.
Although assumptions (a) – (d) might appear restrictive, the specified class of reactions is generic and encompasses the standard equations used in the modelling of genetic and regulatory networks, the cellular circuits where stochasticity is most significant. Note that assumption (c) is not unrealistic for models of gene regulatory networks, in which linear death terms due to the cellular environment are prevalent. Birth reactions in these models are usually represented through nonlinear, unimolecular (compound) rate laws that appear from quasi steady-state approximations. These functional forms have been shown to work well in the stochastic setting [18]. Our own simulations confirmed that they provide a good approximation in a wide range of parameters (results not shown). These compound rate laws are the key components that encode the positive and negative feedback in gene regulation. Classic examples are the sigmoid functions

Φ(x) = V x^{h}/(K^{h} + x^{h}) (activation) and Φ(x) = V K^{h}/(K^{h} + x^{h}) (repression),

which are sublinear, (anti)monotonic functions.
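As a quick numerical illustration that such sigmoidal rate laws satisfy assumption (d), the following sketch checks monotonicity, antimonotonicity and boundedness of Hill-type activation and repression on a grid; the parameters V, K and h are arbitrary illustrative values, not ones used in the paper:

```python
# Numerical check that Hill-type sigmoids satisfy assumption (d):
# monotone (or antimonotone) and bounded by a constant, hence sublinear.
# The parameters V, K, h are arbitrary illustrative values.
V, K, h = 10.0, 4.0, 2.0

def hill_act(x):   # activation: monotone increasing, bounded by V
    return V * x**h / (K**h + x**h)

def hill_rep(x):   # repression: antimonotone (decreasing), bounded by V
    return V * K**h / (K**h + x**h)

xs = [0.5 * i for i in range(201)]     # grid on [0, 100]
act = [hill_act(x) for x in xs]
rep = [hill_rep(x) for x in xs]
is_monotone = all(a <= b for a, b in zip(act, act[1:]))
is_antimonotone = all(a >= b for a, b in zip(rep, rep[1:]))
is_bounded = all(v <= V for v in act + rep)
```

The constant bound V is exactly what the linearized dominating network of the next subsection exploits.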
Dominating process and adapted functionals
As stated above, assumption (d) is related to domination. In general, the state space of chemical reaction networks is unbounded from above; hence we must use the DCFTP construction, which requires a dominating process D with known stationary distribution. Fortunately, it has been shown that any network of linear first-order reactions has a stationary distribution which is multivariate Poisson [19]. Moreover, it can be shown that 0 is an ergodic atom for the multivariate Poisson, as assumed in Theorem 1 [13]. It then follows that a dominating process for any reaction network 𝒩 composed of unimolecular, sublinear, (anti)monotonic birth-death processes, as defined above, can be found by 'linearizing' the original network; that is, by constructing a linearized version 𝒩̃ of this network, with the same reactions and compounds but with linear propensities Φ̃_{i}(x) ≥ Φ_{i}(x), ∀x, ∀i, that bound the original Φ_{i} from above. Under conditions of stability, the ME of 𝒩̃ will have a stationary distribution π̃, given by a multivariate Poisson that can be obtained by solving a system of linear equations [19]. The existence of the stationary distribution of the dominating linear network guarantees the existence of the stationary distribution for the original network 𝒩 of reversible, unimolecular nonlinear reactions.
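As a small worked example of such a linearized network (the species names and rate constants are our own illustration, not a system from the paper), consider 0 → A at rate kin, A → B at rate c·n_A and B → 0 at rate d·n_B. The stationary means solve the linear rate equations, and the stationary law π̃ is the product of Poissons [19]:

```python
import math

# Worked example of a dominating linear network (illustrative rates):
# 0 -> A at rate kin, A -> B at rate c*nA, B -> 0 at rate d*nB.
# The stationary means solve the linear rate equations
#   kin - c*a = 0  and  c*a - d*b = 0,
# and the stationary law pi-tilde is a product of Poissons [19].
kin, c, d = 6.0, 2.0, 3.0
a = kin / c          # stationary mean copy number of A
b = c * a / d        # stationary mean copy number of B

def poisson_pmf(n, lam):
    return math.exp(-lam) * lam**n / math.factorial(n)

def pi_tilde(na, nb):
    # product-Poisson stationary distribution of the linear network
    return poisson_pmf(na, a) * poisson_pmf(nb, b)
```

Sampling this π̃ gives the stationary initial state required for the dominating process D.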
The dominating process D is defined as the stationary SSA process (5) of the linearized network 𝒩̃, with initial state sampled from π̃ and with the associated sequence of reactions ℜ̃ = {R̃_{t_{1}}, R̃_{t_{2}},...}.
It has been shown [13] that a correct realization of the original (nonlinear) SSA process X for a network with monotonic propensities can also be obtained through an adapted functional defined in terms of the dominating process D and a random mark process M = {ξ_{1}, ξ_{2},...}, where ξ_{k} ~ U(0, 1).
The update rule for X uses the ratio of the monotonic propensity functions of the original and dominating processes as follows:

x(t_{k}) = x(t_{k−1}) + ν_{k}, if ξ_{k} ≤ Φ_{k}(x(t_{k−1}))/Φ̃_{k}(d(t_{k−1})); x(t_{k}) = x(t_{k−1}), otherwise,   (9)

where ν_{k}, Φ_{k} and Φ̃_{k} correspond to reaction R̃_{t_{k}} in the reaction sequence ℜ̃.
The necessary ingredient for the DCFTP is the construction of an order-preserving Markov process for the evolution of two chains X and Y coupled to the dominating process D. For our network 𝒩, this process is defined as the joint evolution of the pair

Z_{t} = (X_{t}, Y_{t}),   (10)

with transition rule:

x(t_{k}) = x(t_{k−1}) + ν_{k}, if ξ_{k} ≤ Φ_{k}(x̂)/Φ̃_{k}(d(t_{k−1})); x(t_{k}) = x(t_{k−1}), otherwise,   (11)

where x̂ = x(t_{k−1}) if Φ_{k} is monotone and x̂ = y(t_{k−1}) if Φ_{k} is antimonotone (and symmetrically for Y), and where the componentwise transition rule is given in Eq. (9). The transition rule incorporates the crossover scheme in which the processes X and Y use the state of each other when determining their update, as introduced by Häggström and Nelander to deal with antimonotonic processes [20].
Proof
We now show that the partial ordering defined in Lemma 4 is indeed preserved under the evolution given by Eqs. (10)–(11) for the class of reactions specified above.
Lemma 5 (Preservation of partial ordering) Consider a chemically reversible reaction network 𝒩 of unimolecular, sublinear, (anti)monotone birth-death reactions and its associated SSA dominating process D, obtained from the linearized network 𝒩̃, with the sequence of events ℜ̃ = {R̃_{t_{1}}, R̃_{t_{2}},...}. Consider two coupled chains X and Y evolving under (10)–(11), where M = {ξ_{1}, ξ_{2},...} is a sequence of random marks. Then X_{t} ⪰ Y_{t} implies X_{s} ⪰ Y_{s}, ∀s > t.
Proof Assume X_{t} ⪰ Y_{t} throughout, and write x, y, d for the states just before the first event of D at time t_{1}. First consider the case when Φ_{1} is monotonic, so that Φ_{1}(x) ≥ Φ_{1}(y). Then the possible outcomes of the update at t_{1} are:

(i) ξ_{1} > Φ_{1}(x)/Φ̃_{1}(d): neither chain is modified;

(ii) Φ_{1}(y)/Φ̃_{1}(d) < ξ_{1} ≤ Φ_{1}(x)/Φ̃_{1}(d): only X is modified;

(iii) ξ_{1} ≤ Φ_{1}(y)/Φ̃_{1}(d): both chains are modified.
Outcome (i) means that neither X nor Y is modified and the preservation of the partial order is obvious. For (iii), both are modified by the same amount and the order is preserved. The interesting case is (ii), for which X is modified but not Y. If ν_{1} is a birth, then x + ν_{1} ⪰ x ⪰ y, which also implies order preservation. However, if ν_{1} is a death, then it is possible for the two chains to coalesce if x + ν_{1} = y. Note that since, by unimolecularity, only unit changes of the states are allowed, it is impossible for two paths to cross.
When Φ_{1} is antimonotone, Φ_{1}(y) ≥ Φ_{1}(x) and, by the crossover scheme, X is updated using Φ_{1}(y) and Y using Φ_{1}(x). The outcomes are:

(iv) ξ_{1} > Φ_{1}(y)/Φ̃_{1}(d): neither chain is modified;

(v) Φ_{1}(x)/Φ̃_{1}(d) < ξ_{1} ≤ Φ_{1}(y)/Φ̃_{1}(d): only X is modified;

(vi) ξ_{1} ≤ Φ_{1}(x)/Φ̃_{1}(d): both chains are modified.
As above, outcomes (iv) and (vi) lead to no change in relative order. For (v), again we update X but not Y due to the crossover scheme. As for the monotone case, if ν_{1} is a birth this leads to order preservation, while if ν_{1} is a death it is possible for the two chains to coalesce.
It thus follows that X and Y maintain their partial ordering through every update of the (anti)monotone process. The proof for s > t_{1 }follows by induction. □
Note that the dominated processes given by Eqs. (9) and (11) become identical when X = Y. Therefore, after coalescence the dominated process is statistically identical to the original SSA process. Since we have found a dominating process and an adapted functional, we can use Theorem 1 to obtain:
Theorem 6 (DCFTP-SSA) Under the assumptions of Lemma 5, Theorem 1 is fulfilled and the DCFTP-SSA described in Algorithm 7 will produce a sample from the stationary distribution of the original process X with a coalescence time which will be finite almost surely.
Proof The sandwiching (1) and funneling (2) properties follow from the preservation of partial ordering (Lemma 5) [12]. The remainder can be adapted from the general Theorem 1. □
Algorithm
A brief outline of the DCFTP-SSA is as follows:
Algorithm 7 (DCFTP-SSA) Given a reversible system 𝒩 of unimolecular birth-death chemical reactions with (anti)monotone, sublinear propensity functions, obtain its linearized version 𝒩̃ with multivariate Poisson stationary distribution π̃:

T ← 1
D ← Extend(∅, T)
M ← GenerateMarks(D)
loop
(D̄, M̄) ← Reverse(D, M)
(U, L) ← Evolve(D̄, M̄), started from U_{−T} = D_{−T} and L_{−T} = 0
if U_{0} = L_{0} then
return L_{0}
end if
T ← 2T
D ← Extend(D, T/2)
M ← GenerateMarks(D)
end loop
The function Extend(D, T/2) runs Algorithm 3 for the linearized network 𝒩̃ and appends the resulting path to the path D. Similarly, the function GenerateMarks appends uniform random marks to extend the mark process M. Both the marks and the forward dominating path are then reversed in time by the function Reverse. Extending these processes backwards in time in this manner is justified because of their stationarity and reversibility, which allows us to reverse the processes and translate them in time [9]. Finally, the Evolve function starts the coupled upper and lower chains from L_{−T} = 0 and U_{−T} = D_{−T} and evolves them forward as described by Eq. (11). Note that the assumption of reversibility of the network ensures that the reversed process will be forward-evolvable. Our requirement that propensities are nonzero also ensures that reactions are not eliminated from the network; if this were to happen, it would effectively make the system irreversible. If L and U have not coalesced at t = 0, D and M are extended further back in time and L and U are restarted. Doubling the starting time at each iteration has been shown to be reasonably efficient (see [11] for a discussion).
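To make the backward extension, time reversal and thinning concrete, here is a self-contained single-species Python sketch of the scheme. It is not the authors' implementation: the monotone Hill-type birth propensity f(x) = B0 + V·x/(K + x), the parameters, and the data layout are all our own assumptions. The dominating linearized network has constant birth rate LAM = B0 + V and linear death rate DELTA·x, so its stationary law is Poisson(LAM/DELTA); the window is doubled by simulating forward from the past boundary state and flipping the segment in time, which is valid by reversibility:

```python
import math, random

# Single-species sketch of the DCFTP-SSA (illustrative, not the paper's
# code). Birth propensity f(x) = B0 + V*x/(K + x) is monotone and bounded
# by LAM = B0 + V; deaths are linear with rate DELTA*x. The dominating
# network (constant birth LAM, linear death) is Poisson(LAM/DELTA) at
# stationarity. All parameters are our own assumptions.
B0, V, K, DELTA = 1.0, 20.0, 10.0, 1.0
LAM = B0 + V

def f(x):
    return B0 + V * x / (K + x)

def sample_poisson(rng, lam):
    # inverse-transform sampler for the dominating stationary law pi-tilde
    thresh, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= thresh:
            return k
        k += 1

def extend(rng, d_start, duration):
    # SSA run of the linear dominating network from d_start; each event
    # stores its type, a uniform mark, and the state of D before the jump
    events, d, t = [], d_start, 0.0
    while True:
        total = LAM + DELTA * d
        t += -math.log(rng.random()) / total
        if t > duration:
            return events, d
        birth = rng.random() < LAM / total
        events.append(('birth' if birth else 'death', rng.random(), d))
        d += 1 if birth else -1

def flip(events):
    # time-reverse a segment: order reverses, births become deaths (and
    # vice versa), and the pre-jump state of a reversed event is the
    # post-jump state of the original one
    return [('death' if kind == 'birth' else 'birth', mark,
             d + (1 if kind == 'birth' else -1))
            for kind, mark, d in reversed(events)]

def evolve(events, d_start):
    # coupled chains with Eq. (9)-style thinning: a chain in state z does
    # the dominating reaction iff mark < Phi(z)/Phi_tilde(d)
    u, l = d_start, 0
    for kind, mark, d in events:
        if kind == 'birth':
            if mark < f(u) / LAM: u += 1
            if mark < f(l) / LAM: l += 1
        else:                       # death events always have d >= 1
            if mark < u / d: u -= 1
            if mark < l / d: l -= 1
    return u, l

def dcftp_sample(rng):
    T = 1.0
    d_far = sample_poisson(rng, LAM / DELTA)   # D at the past boundary -T
    events, _ = extend(rng, d_far, T)          # stationary path on [-T, 0]
    while True:
        u0, l0 = evolve(events, d_far)         # U from D_{-T}, L from 0
        if u0 == l0:
            return u0                          # coalescence: sample of pi
        # push the boundary to -2T: by reversibility, a forward run from
        # D_{-T}, flipped in time, is a valid extension into the past
        ext, new_far = extend(rng, d_far, T)
        events = flip(ext) + events
        d_far = new_far
        T *= 2
```

Note how the stored events and marks on the old window are reused verbatim on every retry; only the prepended segment is new, exactly as CFTP requires.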
Applications of the algorithm
Numerical convergence: First order reaction
To characterize numerically the convergence properties of the DCFTP-SSA, consider the first order reaction where species A is created at a (normalized) constant rate k from a source and degraded to a sink:

dP_{j}/dt = k (𝔼^{−1} − 1) P_{j} + (𝔼 − 1)(j P_{j}).   (12)

Here P_{j} denotes the probability of having j molecules of A, and 𝔼 and 𝔼^{−1} are step operators [4]: 𝔼 f(j) = f(j + 1) and 𝔼^{−1} f(j) = f(j − 1), acting on a function f(j). For the usual initial condition with 0 molecules, the time-dependent solution of Eq. (12) is a Poisson distribution with time-dependent parameter k(1 − e^{−t}) [15]. Equation (12) is an instance of the immigration-death process, which appears in different settings in the stochastic processes literature.
If, as a proxy for sampling the stationary distribution π, one obtains samples of P(j, T_{s} | 0, 0) from repeated runs of the SSA for a finite time T_{s}, this will lead to a systematic error that will not disappear as the number of samples (and the CPU time) is increased. The use of the DCFTP-SSA eliminates this source of error, as shown in Fig. 1a. This figure also shows that the guaranteed convergence of the DCFTP-SSA incurs a modest additional CPU cost. The increased computational cost is twofold: increased memory requirements, since we need to store the history of the dominating process as well as the sequence of random numbers used to update the coupled chains; and longer running times, since we need to extend the process backwards for an indefinite (unbounded) amount of time. Fig. 1b presents the statistics of the coalescence times for this reaction. In this simple reaction, the distribution of stopping times is relatively symmetric and concentrated around the mean value, without the long tails that would correspond to long runs started a long time into the past. As the next example shows, the distribution of coalescence times reflects the complexity of the structure of the stationary distribution.
Figure 1. Convergence of the DCFTP-SSA for the first order reaction (12). (a) As a function of CPU time, we represent the Euclidean error ∊_{E} of the stationary distribution of Eq. (12) with k = 5 sampled with the DCFTP-SSA (+) and with the standard SSA with stopping times T_{s} = 2 (○), 4 (□), 6 (◇). For this simple ME, the limiting value of the Euclidean error of the finite-time SSA can be obtained in closed form in terms of α = 1 − exp(−T_{s}) and the modified Bessel function of the first kind I_{0}(x) [15]. This means that SSA simulations that are run for a time T_{s} will converge to a systematic sampling error, indicated by the dotted lines. This source of error is eliminated when using the DCFTP-SSA, which shows no flooring for ∊_{E} and the expected N^{−1/2} scaling with the number of Monte Carlo samples [26]. The guarantees provided by the DCFTP-SSA come at a modest computational cost, which is comparable to that of long SSA runs. (b) The distribution of coalescence times T_{c} for the DCFTP-SSA is relatively symmetric and concentrated around the mean with a rapid decay for long times. The data presented corresponds to 6000 runs. This distribution reflects the benign structure of the unimodal stationary distribution of this particular ME, which makes long coalescence times unlikely.
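The systematic finite-time error can be reproduced in a few lines: for the immigration-death reaction the SSA stopped at T_s samples a Poisson with mean k(1 − e^{−T_s}) rather than the stationary mean k, a bias that no amount of averaging removes. The sketch below (with illustrative values of k and T_s) checks this:

```python
import math, random

# Finite-time bias for the immigration-death reaction: at stopping time Ts
# the SSA samples Poisson(k*(1 - exp(-Ts))), not the stationary Poisson(k),
# so the error persists however many runs are averaged. k, Ts illustrative.
k, Ts = 5.0, 2.0

def ssa_immigration_death(rng, Ts):
    x, t = 0, 0.0
    while True:
        a0 = k + x                        # birth rate k, death rate x
        t += -math.log(rng.random()) / a0
        if t > Ts:
            return x
        if rng.random() < k / a0:
            x += 1
        else:
            x -= 1

rng = random.Random(3)
N = 10000
mean = sum(ssa_immigration_death(rng, Ts) for _ in range(N)) / N
biased_mean = k * (1.0 - math.exp(-Ts))   # time-dependent Poisson mean
```

The empirical mean settles at the biased value, well below the stationary mean k, no matter how large N becomes.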
Multistability: Genetic toggle switch
The mutual activation and repression of groups of genes in regulatory networks can lead to multistability, allowing cells to attain different states [5,21]. An important and difficult problem is to find the probabilities of the different states and the expected switching times. Previously [15], we applied the DCFTP-SSA to the standard toggle switch with two Hill-repressed genes [22]. We now apply the algorithm to a more complex model of two mutually activating genes [21], described by the ME (13), with a complicated activation function f(n) which is not of the standard Monod form. Here n_{A} and n_{B} are the number of protein molecules, γ is the basal production rate and the κ_{ij} are parameters. The functional form of the activation appears as a consequence of particular properties of this system: each transcription site can be occupied by up to four monomers and becomes activated when a tetramer is bound. However, note that f(n) is monotonic and sublinear and therefore the DCFTP-SSA is applicable.
For certain choices of parameters, the stationary distribution of the system is bimodal: the peak located at the origin corresponds to both genes being 'off', while the other mode indicates both genes are 'on' (Fig. 2a). The extreme bimodality of this distribution makes its sampling difficult using the standard SSA. As can be seen in Fig. 3a, if we start from the initial condition (0, 0), the standard SSA levels off in a similar manner to Fig. 1a, highlighting the presence of a systematic error. In contrast, the DCFTP-SSA converges to the stationary distribution at the expected N^{−1/2} rate.
Figure 2. Sampling of the stationary distribution for the bistable gene network (13) using different methods. (a) The 'true' stationary probability distribution π for the ME (13) calculated numerically with the approximate eigenvector method [15]. The parameters are κ_{B} = 25, κ_{A} = 12, κ_{A0} = κ_{B0} = 60, κ_{A1} = κ_{B1} = 10, κ_{A2} = κ_{B2} = κ_{A3} = κ_{B3} = 1, and γ = 0.01. The locations of the two modes match the fixed points of the corresponding deterministic system. Note the extreme asymmetry of the bimodal probability distribution. (b) The estimate of π obtained from 10^{4} samples of the DCFTP-SSA reproduces the presence of both modes and their relative weights. (c) Estimate of π from 10^{4} samples of the SSA started at (0,0) with T_{s} = 10^{3}. (d) Estimate of π obtained from 10^{4} SSA simulations started from 10^{4} different initial conditions chosen uniformly at random on the 100 × 100 lattice closest to the origin and run for T_{s} = 10^{3}. (e) Estimate of π obtained from 10^{4} SSA simulations, 5000 of them started from the origin and the other 5000 from the other mode, and run for T_{s} = 10^{3}. (f) Estimate of π obtained from 10^{4} samples from a long SSA run sampled at intervals Δt = 10^{3}. Note the different scale on the z-axis for (c) and (e) and how the SSA runs (c)–(f) do not capture the overall structure of π.
Figure 3. Convergence of the DCFTP-SSA for the bistable gene network (13). (a) As a function of CPU time, we represent ∊_{E}, the Euclidean error of the sampled distributions estimated using: the DCFTP-SSA (+), as in Fig. 2(b); the SSA with T_{s} = 1000 (○), as in Fig. 2(c); the SSA started from uniform initial conditions (∇), as in Fig. 2(d); the SSA started from the two modes (*), as in Fig. 2(e); and the SSA uniformly sampled from a long run (□), as in Fig. 2(f). For each scheme, we produced N = 100, 316, 1000, 3162 and 10000 samples to show how the error improves as the number of samples increases. The DCFTP-SSA converges to the stationary distribution at the expected N^{−1/2} rate, whereas the approximate estimates obtained using the SSA level off in a similar manner as in Fig. 1a. (b) The distribution of coalescence times for the DCFTP-SSA for this network is bimodal with a very long tail for the second mode, indicating the likelihood of long coalescence times. The data presented corresponds to 6000 runs.
Figure 2b also shows that the probability sampled with the DCFTP-SSA captures the global structure of the probability distribution even in this extreme example. On the other hand, closer inspection of the SSA simulations started from (0, 0) reveals that for short stopping times, the process remains at the mode located near the origin (Fig. 2c). Although simple heuristics on how to choose the initial condition have been suggested to improve the sampling of π with the SSA, Figure 2 shows that similar missampling errors appear if we run the standard SSA from a variety of initial conditions. Fig. 2d shows that sampling the initial condition from a uniform grid in state space does not capture the full features of the distribution, since this initial condition does not represent a consistent sampling for stationarity. If we use the fixed points of the corresponding deterministic system as initial conditions for the SSA, we would still lack the probability mass associated with each mode. For instance, starting half of the simulations at the origin and the remaining at the other fixed point provides little improvement, since almost half of the simulations remain near the origin (Fig. 2e). Similar errors appear if we sample a long SSA run at fixed intervals Δt to provide independent samples as initial conditions (Fig. 2f), or even if we use samples drawn from the true stationary distribution as initial conditions for the SSA.
We can understand why the stationary distribution of this system presents such a challenge for the finite-time SSA by considering the mean first passage times. Figure 4a shows the average time to reach all other states from the origin and Fig. 4b shows the average time to reach the other mode. To be certain that an SSA run will produce correct samples from the stationary distribution, it must visit each mode several times. For the system considered in Fig. 2 we need stopping times on the order of 10^{7} to be certain that the simulation has not been stuck in one mode. With our implementation, the DCFTP-SSA compares favorably with the SSA with T_{s} = 10^{7} (data not shown).
Figure 4. Mean transition times for the bistable gene network (13). (a) The mean first passage time ξ to reach the origin from the lattice points of the state space close to the origin. The escape time from the mode located away from the origin is 2 × 10^{5}. (b) The mean first passage time from the origin to the other mode is 3 × 10^{3}.
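The role of mean first passage times can be illustrated in one dimension (a hypothetical simplification of the two-dimensional system of Fig. 4, using the immigration-death chain of Eq. (12)): for a birth-death chain with birth rates λ_i and death rates μ_i, the expected crossing times obey the classical recursion t_{i→i+1} = 1/λ_i + (μ_i/λ_i) t_{i−1→i}, so first passage times from the origin can be accumulated directly:

```python
# Mean first passage times for a 1-D birth-death chain, sketched for the
# immigration-death reaction of Eq. (12) (birth rate k, death rate i);
# this 1-D chain is an illustrative stand-in, not the paper's 2-D model.
k = 5.0
lam = lambda i: k          # birth (immigration) rate from state i
mu = lambda i: float(i)    # death rate from state i

def mfpt_from_zero(n):
    # accumulate t(i -> i+1) = 1/lam(i) + (mu(i)/lam(i)) * t(i-1 -> i)
    t_up, total = 0.0, 0.0
    for i in range(n):
        t_up = 1.0 / lam(i) + (mu(i) / lam(i)) * t_up
        total += t_up
    return total
```

States far above the mean are reached only after times that grow rapidly with n, which is the one-dimensional analogue of the long inter-mode passage times of Fig. 4.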
Figure 3a summarizes the CPU times for the different SSA sampling schemes shown in Fig. 2 compared to the DCFTP-SSA. Again, the DCFTP-SSA introduces a reasonable overhead but provides guarantees that no systematic sampling error exists. To understand how the extreme bimodality of this distribution affects the running time of the DCFTP-SSA, Figure 3b shows the statistics of the coalescence times for this system. As compared with Fig. 1b, the distribution of coalescence times is bimodal, with a second mode and a long tail at long coalescence times. This reflects the complex structure of the stationary distribution in state space, which induces longer coalescence times to guarantee the correct sampling. As explained in the Discussion section, the numerical performance of the algorithm in situations where long runs are more likely can be improved by the use of rejection sampling schemes.
This simple example illustrates the potential pitfalls of using the standard SSA for multimodal systems with long switching timescales. If the SSA is run with stopping times that are too short, one risks missing important features of the distribution, which could lead to erroneous conclusions about the number and relative weights of the possible states. These problems become more acute as the dimensionality of the state space increases.
Steady-state dynamics: Generalized repressilator
Although regularity and robustness are important for reliable operation in timekeeping, circadian and synchronization processes, cellular oscillators have a biochemical basis and are subject to high levels of noise [23,24]. In previous work [15], we studied the stochastic version of the repressilator, a synthetic transcriptional oscillator that consists of three mutually repressing genes in a loop (Fig. 5a) and has been implemented in E. coli [23]. Experiments on the original repressilator showed that the oscillations are very noisy and that stochastic models are required to capture these features.
Figure 5. Noise characteristics of the generalized repressilator (14). (a) Detailed diagram of the reactions in the standard repressilator with three genes involving six chemical species, as implemented with our stochastic algorithm. In the simplified cartoon, each circle represents a gene repressing the subsequent gene in a cycle. The generalized repressilator studied here considers cycles with an odd number of genes, n = 3, 5, 7, 9. (b) The top panel shows time series of one of the proteins for the deterministic model of the repressilator with n = 3 (filled) and n = 9 (dashed) genes with parameters k_M = 25, d_M = 3, θ = 3, k_R = 4 and α = 2. The lower panel shows the corresponding time series of the SSA started from stationarity, as guaranteed by the DCFTP-SSA. In the top panel the y-axis has units of protein concentration, whereas in the lower panel it has units of number of proteins. (c) The top panel shows the distribution of the period for the repressilator with n = 3 genes, while the bottom panel shows the same distribution for the generalized repressilator with n = 9 genes. Note that the distribution for n = 3 is skewed with a long right tail, while that for n = 9 is more symmetric but has fatter tails than would be expected for a Gaussian distribution. The histograms were obtained from time series with 10^4 periods. (d) The top two panels show the dependence of the mean (∘) and variance (□) of the period distribution on n. The lines indicate a linear fit for the means and a quadratic fit for the variances. The inset in the top right panel shows that, for this set of parameters, the relative noise of the period, as measured by the coefficient of variation (*), is minimal for a length of n = 7 genes in the loop. The two lower panels show the skewness (∇) and kurtosis (◇) of the period distribution. The skewness decreases to zero as n grows, in accordance with the observed decrease of the asymmetry of the distribution. The kurtosis does not vanish as n grows, indicating the presence of long tails. Note that the kurtosis also reaches an apparent minimum at n = 7.
Here, we investigate the stochastic properties of the generalized repressilator with an arbitrary number n of genes in the loop [25]. Müller et al. studied the deterministic version and showed that the system oscillates when n is odd, as expected by analogy with the ring oscillator in electronic circuits (see Fig. 5b). This system allows us to study the dependence of the variability of the oscillations on the number of genes, and to showcase the scalability of our algorithm as the number of variables (and the dimensionality of the state space) increases.
We now use the DCFTP-SSA to characterize the periodicity of the stochastic oscillations of the generalized repressilator:
where the shorthand P_j denotes the state and all gene indices are taken mod n. Here M_i are the mRNA levels (with production rate k_M and degradation rate d_M) and R_i are the corresponding proteins (with basal rate k_B and linear production rate k_R). The repressilator network fulfils the conditions of applicability of the DCFTP-SSA, and we have used our algorithm to generate time series that are guaranteed to be at stationarity. The fact that the system has persistent oscillatory dynamics does not preclude it from being stationary. As expected, our DCFTP-SSA simulations show that the stationary distribution π conforms to the shape of a circular ridge in 2n-dimensional space, which is directly related to the deterministic limit cycle [4]. In this case, the probability mass is unimodal along the ridge, which means that sampling from a long time series is unproblematic: there is no risk of missing regions of state space with high probability.
As one would expect from the deterministic analysis, the mean period increases linearly with n (Figure 5d). This follows from the fact that the oscillatory behavior propagates in a wave-like manner around the loop. If we assume that the period is the sum of n independent genes rising and falling in sequence, then a circuit with n genes will have a period whose mean scales linearly with n, as shown in Figure 5d and in accordance with the deterministic model. However, the shape and moments of the distribution of the periods change significantly as a function of n, as shown in Figs. 5c–d. The distribution of the period for shorter circuits is necessarily right-skewed, since there is a minimal waiting time, akin to a refractory period, before a gene can rise again. This asymmetry is observed for n = 3 but has almost disappeared for n = 9, and is captured by the skewness, which decreases towards zero as n increases.
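The independence assumption can be made explicit. As a sketch of the reasoning, let τ_i denote the duration of the i-th rise-and-fall stage, with mean μ and variance σ²:

```latex
T_n = \sum_{i=1}^{n} \tau_i, \qquad
\mathbb{E}[T_n] = n\mu, \qquad
\operatorname{Var}(T_n) = n\sigma^2 + 2\sum_{i<j}\operatorname{Cov}(\tau_i,\tau_j).
```

If the stages were independent, the covariances would vanish and the variance of the period would grow only linearly in n; a quadratic growth of the variance therefore requires positive correlations between stages.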
Our numerics also indicate that the relative variability of the period does not remain constant as the number of genes in the loop increases. Figure 5d shows that the variance of the period increases quadratically, which implies that the successive periods are not independent. We also find that, for the set of parameters in Figure 5, there is an optimal length of n* = 7 genes in the loop, for which the relative fluctuations of the period, as measured by the coefficient of variation, are minimal. Note also that the kurtosis remains almost constant and positive, which indicates that fat tails persist even for longer circuits. Interestingly, the kurtosis also attains a shallow minimum at n* = 7, indicating a relative decrease in the dispersion of the distribution. Another important characteristic of an oscillator is the rise time, which gives an indication of its precision. Our numerics find no change in the variance of the rise times as the number of genes increases (results not shown). This is expected, since the rise time of a single gene is almost independent of the preceding events, unlike the period, which is an aggregated quantity and therefore more susceptible to propagated noise. The investigation of the noise characteristics of networks of transcriptional oscillators will be the object of further study.
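Period statistics of the kind reported in Fig. 5c–d can be extracted from a stationary time series with a few lines of code. The sketch below is illustrative rather than the estimator used in the paper: it measures periods as the intervals between successive upward crossings of a threshold (the trace and threshold are placeholders for DCFTP-SSA output) and computes the sample moments plotted in Fig. 5d.

```python
import math

def periods_from_trace(times, values, threshold):
    """Periods estimated as intervals between successive upward
    threshold crossings of a (placeholder) protein time series."""
    crossings = [t1 for (t0, v0), (t1, v1)
                 in zip(zip(times, values), zip(times[1:], values[1:]))
                 if v0 < threshold <= v1]
    return [b - a for a, b in zip(crossings, crossings[1:])]

def moments(xs):
    """Sample mean, variance, coefficient of variation, skewness and
    excess kurtosis of a list of periods."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum((x - m) ** 3 for x in xs) / (n * sd ** 3) if sd > 0 else 0.0
    kurt = sum((x - m) ** 4 for x in xs) / (n * var ** 2) - 3.0 if sd > 0 else 0.0
    return m, var, sd / m, skew, kurt

# Sanity check on a noiseless sine wave: the recovered mean period
# should be close to 2*pi.
ts = [0.1 * i for i in range(1000)]
vs = [math.sin(t) for t in ts]
periods = periods_from_trace(ts, vs, 0.0)
```

For noisy stochastic traces, the threshold should be placed between the low and high protein levels so that each oscillation produces exactly one upward crossing.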
Discussion
This work presents a detailed implementation of the DCFTP-SSA that could be integrated with other packages in Computational Systems Biology. We have also provided a mathematical proof of the algorithm with an explicit statement of the limits of its applicability. This detailed description is key to the extension of the algorithm to a wider class of systems. Specifically, the DCFTP-SSA can be applied to conversion reactions of the type A → B under the realistic assumption that the monotone propensity function depends only on n_A. Unfortunately, the extension to bimolecular reactions of the type A + B → C does not appear to be trivial, since the partial ordering used in this paper is not preserved and no dominating process with known stationary distribution is readily available. The latter problem can be addressed partially by using the CFTP under the approximation that there is an upper bound on the number of molecules in the state space. If the bound is chosen large enough, it can be shown numerically that the error is negligible. However, this approximate method does not carry the guarantees of stationarity that the DCFTP-SSA provides.
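The truncation-based CFTP mentioned above can be sketched on a toy model. The snippet below is not the DCFTP-SSA itself, but classic monotone CFTP (Propp-Wilson) on a discrete-time birth-death chain truncated at an assumed upper bound N; all transition probabilities are hypothetical. The same uniform variates drive an upper chain started from N and a lower chain started from 0; once the two extremal chains coalesce, the common value is an exact sample from the stationary distribution of the truncated chain.

```python
import random

N = 20  # assumed upper bound on the number of molecules (truncation)

def update(x, u):
    """Monotone update rule: one uniform u drives every copy of the
    chain, and x <= y implies update(x, u) <= update(y, u)."""
    p_up = 0.4 if x < N else 0.0     # hypothetical birth probability
    p_down = 0.5 if x > 0 else 0.0   # hypothetical death probability
    if u < p_up:
        return x + 1
    if u > 1.0 - p_down:
        return x - 1
    return x

def cftp(seed=None):
    """Propp-Wilson CFTP with binary backoff: reuse the same randomness
    from time -T up to 0, doubling T until the extremal chains coalesce.
    Coalescence occurs in finite time almost surely, but T is random and
    has no a priori upper bound."""
    rng = random.Random(seed)
    us = []                  # us[0] corresponds to the earliest time step
    T = 1
    while True:
        # prepend fresh randomness for the newly added earlier times
        us = [rng.random() for _ in range(T - len(us))] + us
        lo, hi = 0, N
        for u in us:          # sweep forward from time -T to 0
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:
            return lo         # exact stationary sample
        T *= 2

samples = [cftp(seed=s) for s in range(300)]
```

For this chain, detailed balance gives a truncated geometric stationary law π(x) ∝ (0.4/0.5)^x, so the empirical mean of the samples should settle near r/(1 − r) = 4 for r = 0.8 (slightly lower because of the truncation).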
From the numerical viewpoint, the DCFTP-SSA is guaranteed to converge almost surely in finite time, but there is no upper bound on the coalescence time T_c. Our numerics show that the distribution of coalescence times can be long-tailed when the structure of the stationary distribution is complex (Fig. 3b).
If a simulation is interrupted prematurely by an impatient user, the final sample will be biased. An alternative perfect sampling scheme is the FMMR algorithm [14], which uses rejection sampling to circumvent this problem. Our experience has shown that typically a small fraction of runs takes a very long time to converge; being able to remove these would speed up the algorithm significantly. The bimodal example illustrates this point: if we could place a cutoff after the first mode of the coalescence-time distribution, a large portion of the simulations would be accepted, with significant savings in both CPU time and memory. As indicated by the examples in this paper, the DCFTP-SSA does not present obvious problems with scalability, since the overhead incurred to provide a certificate of stationarity is moderate. Although the computational cost of the algorithm depends on the intrinsic structure of the network, we have applied the DCFTP-SSA to various networks with several tens of variables.
In addition to producing guaranteed samples from the stationary distribution, the DCFTP-SSA can be used to provide initial conditions for ordinary SSA runs. Since any Markov process started from stationarity remains there for all future times, such runs are guaranteed to represent stationary time traces of the system. This is important for the numerical characterization of properties such as escape times and autocorrelation times in systems with high variability, e.g., with underlying multistable, oscillatory or excitable behaviour [15].
Conclusion
The SSA is an exact procedure to sample the time-dependent probability distribution of the ME of general chemical reaction networks at all times [7,8]. However, it provides no guarantees when the aim is to sample from the stationary distribution. The DCFTP-SSA presented here addresses this problem for a class of networks of relevance to genetic and enzymatic regulation. Our algorithm provides guaranteed stationary sampling and thus removes one of the sources of uncertainty in stochastic simulations. This can aid in the characterization of regulatory circuits and in the testing of model hypotheses for these systems.
Authors' contributions
MH and MB developed the method, completed the proof and wrote the paper. MH implemented the algorithm for the simulations.
Acknowledgements
We would like to thank Matias Rantalainen, Vincent Rouilly and Sophia Yaliraki for valuable discussions and feedback. Research funded by the UK EPSRC through the Life Science Interface and the Mathematics Panels.
References

Elowitz MB, Levine AJ, Siggia ED, Swain PS: Stochastic gene expression in a single cell. Science 2002, 297:1183–1186.

Raser JM, O'Shea EJ: Control of stochasticity in eukaryotic gene expression. Science 2004, 304(5678):1811–1814.

Raj A, Peskin CS, Tranchina D, Vargas DY, Tyagi S: Stochastic mRNA synthesis in mammalian cells. PLoS Biol 2006, 4(10):e309.

van Kampen NG: Stochastic processes in physics and chemistry. 2nd edition. Elsevier; 1992.

McAdams HH, Arkin A: Stochastic mechanisms in gene expression. Proc Natl Acad Sci U S A 1997, 94(3):814–819.

Paulsson J: Summing up the noise in gene networks. Nature 2004, 427:415–419.

Gillespie DT: A general method for numerically simulating the stochastic time evolution of coupled chemical reactions.

Gillespie DT: A rigorous derivation of the chemical master equation.

Valeriani C, Allen RJ, Morelli MJ, Frenkel D, ten Wolde PR: Computing stationary distributions in equilibrium and nonequilibrium systems with forward flux sampling. Journal of Chemical Physics 2007, 127.

Propp JG, Wilson DB: Exact sampling with coupled Markov chains and applications to statistical mechanics.

Thönnes E: A primer on perfect simulation. In Lecture Notes in Physics, Volume 554. Edited by Mecke KR, Stoyan D. Springer; 2000:349–378.

Kendall WS, Møller J: Perfect simulation using dominating processes on ordered spaces with application to locally stable point processes.

Fill JA, Machida M, Murdoch DJ, Rosenthal JS: Extension of Fill's perfect rejection sampling algorithm to general chains.

Hemberg M, Barahona M: Perfect Sampling of the Master Equation with Applications to Gene Regulatory Networks. Biophysical Journal 2007, 93(2).

Lindvall T: Lectures on the coupling method. John Wiley & Sons; 1992.

Doob JL: Markoff chains – Denumerable case. Transactions of the American Mathematical Society 1945, 58(3):455–473.

Rao CV, Arkin AP: Stochastic chemical kinetics and the quasi-steady-state assumption: Applications to the Gillespie algorithm.

Gadgil C, Lee CH, Othmer HG: A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 2005, 67(5):901–946.

Häggström O, Nelander K: Exact sampling from anti-monotone systems.

Widder S, Schicho J, Schuster P: Dynamic patterns of gene regulation I: simple two gene systems. Journal of Theoretical Biology 2007, 246(3):395–419.

Gardner TS, Cantor CR, Collins JJ: Construction of a genetic toggle switch in Escherichia coli. Nature 2000, 403:339–343.

Elowitz MB, Leibler S: A synthetic oscillatory network of transcriptional regulators. Nature 2000, 403:335–339.

Goldbeter A: Computational approaches to cellular rhythms. Nature 2002, 420:238–245.

Müller S, Hofbauer J, Endler L, Flamm C, Widder S, Schuster P: A generalized model of the repressilator. Journal of Mathematical Biology 2006, 53:905–937.

Mitzenmacher M, Upfal E: Probability and computing: randomized algorithms and probabilistic analysis. Cambridge University Press; 2005.