
A variational approach to parameter estimation in ordinary differential equations

Abstract

Background

Ordinary differential equations are widely used in the fields of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady-state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters.

Results

The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, neither of which is constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters.

Conclusions

The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

Background

Frequently, signalling pathways and chemical reaction networks in systems biology are modelled by ordinary differential equations (ODE). In many cases, the reaction networks are open systems comprising external inputs like drug stimuli. The system is then modelled by a non-autonomous ODE.

Similarly, modules of reaction networks are open systems. The nodes they share with the surrounding network are not, or not entirely, determined by the module species. They can be considered as intrinsic inputs, and again the system can be modelled by a non-autonomous ODE. An example of such cross-talk can be found in [1].

While reaction rates and initial reactant concentrations form a countable set of parameters, inputs correspond to an innumerable set of parameters since, in general, every function of time is a possible input and each function value at each time point is a free parameter. Commonly, if measurements for the inputs are available, non-parametric estimates like smoothing splines are employed to describe the input data [2, 3]. Given the input, an objective function depending on rate parameters and initial values is defined and its minimum is approached by numerical optimization methods. In this way, the problem of infinitely many parameters is avoided. As we will show, one problem associated with this approach is that it does not account for the uncertainty present in the input. As a consequence, estimated parameter confidence intervals do not cover the actual variability, i.e. they are too small.
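To make the conventional procedure concrete, the following R sketch (our illustration; data values and settings are made up and not taken from any study) fixes the input to a smoothing spline through hypothetical input measurements before any parameter estimation takes place:

```r
# A minimal sketch (R) of the conventional "fixed input" treatment described
# above; the measurement vectors t_obs and x_obs are made up for illustration.
t_obs <- c(0, 1, 2, 4, 6, 8, 10)
x_obs <- exp(-0.5 * t_obs) + rnorm(length(t_obs), sd = 0.1)  # noisy input data

spline_fit <- smooth.spline(t_obs, x_obs)          # non-parametric input estimate
x_input <- function(t) predict(spline_fit, t)$y    # fixed input function x(t)

# x_input() is then inserted into the ODE right-hand side, so that only the
# rate constants and initial values remain as parameters of the subsequent fit.
```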

Therefore, it is preferable to parametrize the input, which is possible if certain knowledge about the origin of the inputs and the processes underlying them is available. This enables a reasonable choice of basis functions, and the parametrization becomes finite. Following this approach, the problem of erroneous confidence intervals is circumvented, provided that the input model is correct. However, this assumption is problematic if only sparse information about the inputs and few measurement points are available.

We propose to approach the problem of input parametrization by the calculus of variations. In the Results section, the system’s objective function used for ordinary parameter estimation is extended to a functional to be minimised. The original non-autonomous ODE is transformed into an augmented autonomous ODE. The result is interpreted and applied to simulated data.

Results and discussion

Derivation of the augmented ODE system

In conventional parameter estimation, the objective function to be optimised is the likelihood function or the χ² function. If a reaction network with species y_μ, μ = 1, …, n, and reaction parameters p_k, k = 1, …, r, comprises inputs x_ν, ν = 1, …, m, the dynamics of the system is described by the model

\[ \dot y_\mu(t) = f_\mu\big(y(t), x(t), p\big), \qquad y_\mu(0) = y_{\mu,0}, \]
(1)

with dynamic variables y_μ and time-dependent input functions x_ν(t), collected in the vectors y ∈ ℝ^n and x ∈ ℝ^m, respectively. In the following, the dependence on the whole course of x will be emphasized by the notation [x]. Furthermore, it is assumed that the input function x(t) is differentiable. Commonly used inputs like step functions or injections are distribution-like rather than differentiable functions. However, it is assumed that on the physiological level the acting input is more accurately described by a differentiable function. The χ² objective function

\[ \chi^2([x], p) = \sum_{i,\mu} \left( \frac{y_\mu(t_i, [x], p, y_0) - y^D_{\mu,i}}{\sigma^D_{y_\mu,i}} \right)^{\!2} + \sum_{i,\nu} \left( \frac{x_\nu(t_i) - x^D_{\nu,i}}{\sigma^D_{x_\nu,i}} \right)^{\!2} \]
(2)

penalizes the distances between species measurements y^D_{μ,i} and model predictions y_μ(t_i, [x], p, y_0) at the time points t_i quadratically, weighted by the measurement uncertainties σ^D_{y_μ,i}. In addition, input measurements x^D_{ν,i} are compared with the input function values x_ν(t_i). In particular, χ² is already a functional of [x]. In the case of Gaussian noise, minimising eq. (2) coincides with maximum likelihood estimation.
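For illustration, a direct transcription of the discrete objective in eq. (2) could look as follows (R sketch; all argument names are our own):

```r
# A minimal sketch (R) of the discrete objective in eq. (2). The y_* arguments
# are matrices or vectors of model values, measurements and uncertainties at
# the measurement times; analogously x_* for the inputs.
chi2 <- function(y_model, y_data, y_sigma, x_model, x_data, x_sigma) {
  sum(((y_model - y_data) / y_sigma)^2) +  # species contributions
    sum(((x_model - x_data) / x_sigma)^2)  # input contributions
}
```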

Our aim is to find a unique input function which minimises the functional defined in eq. (2). To this end, we compute the first variation and check under which condition the first variation vanishes. See [4] for a general introduction to variational calculus, as well as sections 1–2 of Additional file 1. For the first variation we obtain

\[ \delta\chi^2[h] = 2 \sum_i \Big( \mathrm{res}_y(t_i) \cdot \underbrace{\Phi(t_i) \int_0^{t_i} \Phi^{-1}\, \partial_x f \, h \; \mathrm{d}\tau}_{=\,\delta y[h]} \; + \; \mathrm{res}_x(t_i) \cdot h(t_i) \Big). \]
(3)

The trajectory variation δy[h] is derived from eq. (1) and is expressed by variation of constants: Φ(t) denotes the fundamental system of the homogeneous linear problem φ̇ = ∂_y f · φ, with the matrix ∂_y f of first derivatives of f with respect to y, and ∂_x f · h constitutes the inhomogeneity. Furthermore, a weighted residual function is defined as res_{y_μ}(t_i) = (y_μ(t_i) − y^D_{μ,i}) / (σ^D_{y_μ,i})², and analogously res_{x_ν}. For a detailed derivation see sections 2–5 in Additional file 1.

Next, h needs to be separated. Similarly to the Euler–Lagrange equation [4], partial integration has to be performed to extract h from the integral. To this end, the sum in eq. (2) needs to be extended to an integral: all time-discrete measurement points y^D_{μ,i} and x^D_{ν,i} have to be replaced by continuous and differentiable data representations by means of a mapping S: ℝ^N → C¹(ℝ) from N discrete values to a differentiable function. The resulting representations S_{y_μ}(t), S_{x_ν}(t) as well as S_{σ_{y_μ}}(t), S_{σ_{x_ν}}(t) need to be defined at least on a finite interval [0, T], where T denotes the latest time point to be considered. After partial integration, the first variation of the just defined time-continuous χ² functional reads

\[ \delta\chi^2[h] = 2 \int_0^T \Big[\, \partial_x f^{\top} \underbrace{\big(\Phi^{-1}\big)^{\!\top} \!\! \int_t^T \Phi^{\top}\, \mathrm{res}_y \, \mathrm{d}\tau}_{=:\,u} \; + \; \mathrm{res}_x \,\Big] \cdot h \;\, \mathrm{d}t, \]
(4)

with the auxiliary function u. The transpose is denoted by ⊤. The integral, i.e. the first variation, vanishes for all choices of h if and only if the integrand is zero, leading to eq. (5). The auxiliary function u is equivalently expressed by its corresponding differential equation, eq. (6). Here, it is used that (Φ^{−1})^⊤ is a fundamental system of φ̇ = −∂_y f^⊤ φ, which follows from Φ being a fundamental system of φ̇ = ∂_y f φ. Together with eq. (1) we obtain:

\[ 0 = \partial_x f^{\top} u + \mathrm{res}_x \]
(5)

\[ \dot u = -\,\partial_y f^{\top} u - \mathrm{res}_y \]
(6)

\[ \dot y = f(y, x, p). \]
(7)

The right-hand sides of eqs. (6) and (7) depend on the state variables y and u and on x, the latter being constrained by eq. (5). In particular, if the input enters linearly into the dynamics of the reaction network, ∂_x f is independent of x and eq. (5) can be solved directly for x, i.e. x = S_x − S_{σ_x}² ∂_x f^⊤ u. However, even in the non-linear case, the implicit function theorem provides the possibility to check locally whether eq. (5) uniquely defines x(u, y). For the discussion of a global version of this statement, see section 6 of Additional file 1.

From the definition of u it follows that u(T) = 0, i.e. u vanishes at the final time point T. Hence, the augmented ODE system has to satisfy the two-point boundary conditions y(0) = y_0 and u(T) = 0. This constitutes a remarkable difference to the original initial value problem.
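To illustrate the boundary value structure, the following R sketch solves a deliberately small, hypothetical augmented system by simple shooting: the state equation plays the role of eq. (7), the auxiliary equation the role of eq. (6) with constant data representations, and u(0) is adjusted until u(T) = 0. It is meant as a minimal illustration under these assumptions, not as the numerical scheme used in this work.

```r
# A minimal shooting sketch (R) for a hypothetical one-state system:
# dy/dt = -k*y + 1, with the adjoint-type equation of eq. (6) for the data
# representation S_y(t) = 1 and S_sigma(t) = 1. Requires the deSolve package.
library(deSolve)

rhs <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dy <- -k * y + 1          # eq. (7): dy/dt = f(y)
    du <-  k * u - (y - 1)    # eq. (6): du/dt = -df/dy * u - (y - S_y)/S_sigma^2
    list(c(dy, du))
  })
}

T_end <- 5; y0 <- 2
parms <- c(k = 0.7)
times <- seq(0, T_end, length.out = 101)

# Shooting: y(0) = y0 is fixed, u(0) is adjusted until u(T) = 0.
u_at_T <- function(u0) {
  sol <- ode(y = c(y = y0, u = u0), times = times, func = rhs, parms = parms)
  sol[nrow(sol), "u"]
}
u0_hat <- uniroot(u_at_T, interval = c(-10, 10))$root
```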

Interpretation

Starting from a dynamic system with inputs, and measurements for both state variables and inputs, we have derived differential equations for both of them. The original initial value problem has been transformed into a boundary value problem which has to be solved numerically. The solution trajectories Y(t|p, y_0) = (y(t|p, y_0), x(t|p, y_0)) minimise the χ² functional for given dynamic parameters p and initial values y_0. However, there is still notable freedom in the choice of the data and uncertainty representations, denoted by S_y, S_x and S_σ, which determines the meaning of the solution trajectories.

One possibility to define the time-continuous data representations S_y and S_x is smoothing splines. They constitute prior knowledge, for each component, about the shape and time-scale of changes, based solely on the measurement points. Also S_σ needs to be chosen appropriately. Differences between model prediction and data prior are usually weighted by w(t) = 1/S_σ²(t) at each time point t. Especially if data sampling is sparse, the data prior has larger uncertainty far away from the measurement time points. In this case, a reasonable choice of w(t) is given by

\[ w_\tau(t) = \sum_i \frac{1}{\sqrt{2\pi\tau^2}}\, e^{-\frac{(t - t_i)^2}{2\tau^2}}\, \frac{1}{\sigma_i^2}, \]
(8)

i.e. a sum of Gaussians located around the measurement points. The parameter τ is a measure for the correlation length of the data prior.
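A direct implementation of eq. (8) is straightforward; in the following R sketch the measurement times and uncertainties are illustrative choices, not values from the study:

```r
# A minimal sketch (R) of the weighting function in eq. (8): a sum of Gaussians
# centred at the measurement times t_i with correlation length tau.
w_tau <- function(t, t_i, sigma_i, tau) {
  sapply(t, function(tt) {
    sum(exp(-(tt - t_i)^2 / (2 * tau^2)) / (sqrt(2 * pi * tau^2) * sigma_i^2))
  })
}

t_i     <- c(0, 3, 6, 10)        # sparse sampling: 4 measurement points
sigma_i <- rep(0.1, 4)
curve(w_tau(x, t_i, sigma_i, tau = 0.5), from = 0, to = 10,
      xlab = "t", ylab = "w(t)")
```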

Once data and uncertainty representations are chosen, the solution trajectories Y can be employed for conventional parameter estimation minimising

\[ \chi^2(p, y_0) = \sum_{\mu, i} \left( \frac{Y_{\mu,i} - Y_\mu(t_i \mid p, y_0)}{\sigma_{Y_{\mu,i}}} \right)^{\!2} \]
(9)

over the finite-dimensional parameter space of p and y_0. Note that the time-discrete χ² function and the time-continuous χ² functional do not coincide exactly. Hence, different measures of optimality are applied to input functions and parameters. This difference is resolved in the asymptotic case of infinitely many measurement points.
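Schematically, the outer estimation loop over eq. (9) has the following form (R sketch; solve_augmented is a hypothetical placeholder for a boundary value solver of eqs. (5)–(7), and the data objects Y_data, Y_sigma, t_obs and the start values are assumed to exist):

```r
# A minimal sketch (R) of conventional parameter estimation over eq. (9).
# solve_augmented() is a hypothetical placeholder that solves the boundary
# value problem for given p and y0 and returns Y(t_i | p, y0).
objective <- function(theta, n_p, Y_data, Y_sigma, t_obs) {
  p  <- theta[seq_len(n_p)]                  # dynamic parameters
  y0 <- theta[-seq_len(n_p)]                 # initial values
  Y_model <- solve_augmented(p, y0, t_obs)   # hypothetical BVP solver
  sum(((Y_data - Y_model) / Y_sigma)^2)      # eq. (9)
}

fit <- optim(par = theta_start, fn = objective, n_p = 2,
             Y_data = Y_data, Y_sigma = Y_sigma, t_obs = t_obs)
```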

The distinction between parameter estimation and input reconstruction has further implications for the estimation of uncertainty bounds. Confidence intervals can only be assigned to the dynamic parameters and initial conditions. The input, in contrast, becomes an ordinary state variable by construction. For state variables, the confidence region in parameter space needs to be mapped to state space by prediction, i.e. by evaluating the model for different parameter values within the confidence region. This can, e.g., be realized by parameter sampling using MCMC methods. Alternatively, profile likelihoods can be employed [5].
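For a single parameter, a profile-likelihood confidence interval can be sketched as follows (R; chi2_fixed_k1 is a hypothetical helper that re-optimises all remaining parameters for a fixed value of k1, and the grid limits are illustrative):

```r
# A minimal sketch (R) of a profile likelihood for one parameter, here k1.
# chi2_fixed_k1() is a hypothetical placeholder returning the chi^2 value
# after re-optimising all remaining parameters for the given k1.
k1_grid <- seq(0.5, 2, length.out = 50)
profile <- sapply(k1_grid, chi2_fixed_k1)

# Pointwise 68% confidence interval from the chi^2 difference with 1 d.o.f.
threshold <- qchisq(0.68, df = 1)
ci_68 <- range(k1_grid[profile - min(profile) <= threshold])
```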

Technical remarks

For the interpretation of x(t) as a species concentration it is important that x(t) > 0 for all times t ∈ [0, T]. This is not imposed a priori on the solution x(t). Rather, it needs to be enforced by construction, analogously to the state variables in the ODE of the dynamic system. This can be realized by the following extension of the dynamic system,

\[ \dot y = f(y, x, p) \]
(10)

\[ \dot x = D(t)\, x, \]
(11)

with a diagonal matrix D(t) = diag(d_1(t), …, d_m(t)) of new inputs d_1, …, d_m. By construction, x cannot change sign over time. The choice S_D(t) = 0 and S_{σ_D}(t) = const. for all t reflects a constant input prior with penalized first derivative and can serve as a starting point. Besides enforcing positivity of the input, the extension by eq. (11) provides a workaround for dealing with non-linear inputs because the new input variables d_ν enter linearly and the old inputs x_ν become regular state variables.
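The sign preservation follows from the explicit componentwise solution of eq. (11) by an integrating factor (a standard argument, stated here for completeness):

\[ x_\nu(t) = x_\nu(0)\, \exp\!\left( \int_0^t d_\nu(s)\, \mathrm{d}s \right), \]

which has the same sign as x_ν(0) for all t, so a positive initial value guarantees x_ν(t) > 0 on [0, T].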

If f depends linearly on x, eq. (5) can be solved for x explicitly. This ensures computational efficiency. In the non-linear case, a matrix inversion has to be performed in each evaluation step of the ODE, which might slow down the computation of the solution remarkably. As an alternative to the introduction of new input variables, eq. (11), the computationally intensive approach can be avoided by a change of variables. This is possible if state variables and input variables factorize, i.e. if

\[ f_\mu(y, x, p) = \sum_{\nu=1}^{m} g_{\mu\nu}(y, p)\, \tilde x_\nu + g_{\mu,0}(y, p), \qquad \mu = 1, \dots, n, \]
(12)

where g_{μν} and g_{μ,0} do not depend on the input variables, which have been transformed to x̃ = φ(x, p) by a coordinate transformation φ. Examples could be φ(x) = x² or φ(x, K_D) = x/(K_D + x) for a bimolecular or an enzymatic reaction, respectively. The possibility of a change of variables covers a broad range of biologically relevant reaction networks.
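As a concrete illustration (our own example, not one taken from the original text), consider a single species produced at an enzymatically saturating rate in the input:

\[ \dot y = V \frac{x}{K_D + x} - d\, y \quad\xrightarrow{\;\tilde x \,=\, \varphi(x, K_D) \,=\, \frac{x}{K_D + x}\;}\quad \dot y = V \tilde x - d\, y, \]

which is of the form of eq. (12) with g(y, p) = V and g_0(y, p) = −d y, so that the transformed input x̃ enters linearly.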

Although computation for linear input is remarkably faster than for non-linear input, it is still slower than solving an initial value problem. On the other hand, the solution of the boundary value problem is already optimal with regard to the input course. Therefore, computing time has to be compared to the time a parameter optimization algorithm takes to estimate the parametrized input course. The comparison will strongly depend on the number of parameters that are necessary to parametrize the partially unknown input. So far, there has not been a comprehensive study comparing the two methods.

Application to simulated data

The approach is applied to the following toy model:

\[ A \;\overset{x}{\rightleftharpoons}\; B. \]
(13)

The forward reaction A → B is mediated by x, while the back reaction B → A is unaffected by the input x. According to eqs. (5)–(7), the augmented ODE system for A, B and x is given by

\[
\begin{aligned}
\dot A &= -k_1 A x + k_2 B \\
\dot B &= \phantom{-}k_1 A x - k_2 B \\
\dot u_A &= \phantom{-}k_1 x\,(u_A - u_B) - \frac{A - S_A}{S_{\sigma_A}^2} \\
\dot u_B &= -k_2\,(u_A - u_B) - \frac{B - S_B}{S_{\sigma_B}^2}
\end{aligned}
\]
(14)

with the auxiliary state variables u_A, u_B, the data representations S_A, S_B and the uncertainty representations S_{σ_A}² and S_{σ_B}². The input x is related to the other state variables by x = S_x + S_{σ_x}² k_1 A (u_A − u_B). Several input functions x have been chosen for data generation, among them an exponential decay, x ∝ e^{−αt}, an activation dynamics with a slow decay after a fast increase, x ∝ e^{−βt} − e^{−αt} with α > β, and a Gaussian input, x ∝ (2πτ²)^{−1/2} e^{−(t − t_dip)²/(2τ²)}. The example is numerically implemented in C and in R [6]. Optimization is performed by a Gauss–Newton algorithm for non-linear least-squares estimation.
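A right-hand side consistent with eq. (14) can be sketched in R as follows. The data and uncertainty representations are passed in as functions of time (in practice, e.g., smoothing splines); the constant choices below are purely illustrative, and u(0) is set arbitrarily rather than being fixed by the boundary condition u(T) = 0, which would require a boundary value solver as discussed above. This is a sketch under these assumptions, not the implementation used for the results reported here.

```r
# A minimal sketch (R) of the augmented right-hand side in eq. (14), with the
# input eliminated via x = S_x + S_sigma_x^2 * k1 * A * (u_A - u_B).
library(deSolve)

augmented_rhs <- function(t, state, parms, S) {
  with(as.list(c(state, parms)), {
    x   <- S$x(t) + S$sig_x2(t) * k1 * A * (uA - uB)   # eq. (5) solved for x
    dA  <- -k1 * A * x + k2 * B
    dB  <-  k1 * A * x - k2 * B
    duA <-  k1 * x * (uA - uB) - (A - S$A(t)) / S$sig_A2(t)
    duB <- -k2 * (uA - uB)     - (B - S$B(t)) / S$sig_B2(t)
    list(c(dA, dB, duA, duB))
  })
}

# Illustrative data representations: constant priors with constant variance.
S <- list(A = function(t) 1, B = function(t) 0, x = function(t) exp(-t),
          sig_A2 = function(t) 0.01, sig_B2 = function(t) 0.01,
          sig_x2 = function(t) 0.01)

state0 <- c(A = 1, B = 0, uA = 0.1, uB = -0.1)   # u(0) arbitrary in this sketch
times  <- seq(0, 10, by = 0.1)
sol <- ode(y = state0, times = times, func = augmented_rhs,
           parms = c(k1 = 1, k2 = 0.5), S = S)
```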

The purpose of this section is to compare parameter estimation for the variational and the fixed input approach. The input data prior, i.e. the smoothing spline through the simulated input data points, is employed as input function for the fixed input approach.

Examples with Gaussian input are depicted in Figures 1 and 2. All components A, B and x depicted in Figure 1A-B have been measured at 20 time points. In this case of dense sampling, the data priors, charted as dashed lines in Figure 1A, come close to the estimated time courses, charted as dashed lines in Figure 1B. This is reflected in the distributions of the parameter estimates in Figure 1C: for the same set-up, 1000 noise realizations have been generated and the variational approach has been used for parameter estimation. In order to compare the result with the fixed input approach, the data prior of x has been employed as input and conventional parameter estimation has been performed. Hence, in the setting of dense sampling and small noise, both estimation approaches perform equally well in terms of accuracy and precision.

Figure 1

Rich input measurement. (A-B) Simulated data for the species A, B and the input x. True time courses are denoted by continuous lines. Data points are subject to Gaussian noise with σ = 0.1. (A) Data representations are indicated as dashed lines, (B) solution trajectories after parameter estimation are shown as dashed lines. (C) Comparison of parameter distributions obtained from 1000 repetitions of data generation and parameter estimation for the variational and the fixed input approach.

Figure 2

Poor input measurement. (A-B) Input reconstruction – simulated data for the species A, B and the input x. True time courses are denoted by continuous lines. Data points are subject to Gaussian noise with σ = 0.1. For the input, only 4 data points are provided. (A) Data representations are indicated as dashed lines, (B) solution trajectories after parameter estimation are shown as dashed lines. (C) Comparison of parameter distributions obtained from 1000 repetitions of data generation and parameter estimation for the variational and the fixed input approach.

A rather different situation is depicted in Figure 2A-B. The input x is measured at four time points only, leading to a poor data prior, shown as a green dashed line in Figure 2A. As before, the species A and B have been measured at 20 time points. Most of the information about the dynamics of the input is encoded in these measurements. The correlation time τ has been chosen to be much smaller than the distance between time points, allowing for much interstitial variability. The resulting trajectories Y after parameter estimation are shown as dashed lines in Figure 2B. The true input curve is reconstructed almost entirely. The noticeable fluctuations are caused by coincidental noise correlations between the species A and B: simultaneous deviations from the true course in opposite directions lead to immediate excursions of the reconstructed input.

Also for this set-up, 1000 noise realizations have been generated for the comparison of the variational and the fixed input approach. The parameter and initial value distributions for both approaches are shown in Figure 2C. Since the true input can be reconstructed, the variational approach is able to estimate all parameters accurately. In contrast, when the input is fixed to the data prior, parameter estimation leads to biased estimates.

Finally, we investigated the coverage probability [7] of the confidence intervals derived from the variational and the fixed input approach: for each simulated data set, parameter estimation is performed, confidence intervals are computed, and it is recorded whether the true parameter value lies within the 68%/90% confidence interval. This information is accumulated over many runs of data generation.
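Schematically, the coverage computation looks as follows (R sketch; fit_once is a hypothetical placeholder for one complete round of data simulation, estimation and confidence-interval construction, and the true value and number of repetitions are illustrative):

```r
# A minimal sketch (R) of the coverage check. fit_once() is a hypothetical
# helper returning c(lower, upper) of, say, the 68% confidence interval for k1
# after simulating one data set and estimating the parameters.
k1_true <- 1
n_rep   <- 1000

covered <- replicate(n_rep, {
  ci <- fit_once()                        # hypothetical helper, see lead-in
  ci[1] <= k1_true && k1_true <= ci[2]    # is the truth inside the interval?
})
coverage <- mean(covered)                 # close to 0.68 for a correct 68% CI
```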

Figures 3A and B show the results for Gaussian input with 20 input measurements and 4 input measurements, respectively. In each case, 20 measurement points have been provided for each of the species A and B.

Figure 3

Coverage. Coverage for variational approach (blue) and fixed input approach (red). Continuous lines correspond to a 90% coverage probability, dashed lines to 68%. Both probabilities are indicated as black dashed horizontal lines. (A) Example with 20 input measurement points, (B) example with 4 input measurement points.

For both estimation approaches, confidence intervals of the estimated parameters and initial values have been computed by means of the profile likelihood approach [8] with respect to eq. (9).

For the set-up with 20 input measurement points, both estimation approaches provide accurate estimators with similar variances, as confirmed by Figure 1. However, as Figure 3 shows, the coverage differs significantly between the two approaches. Confidence intervals for k1 and k2 are systematically underestimated by the fixed input approach. The variational approach, in contrast, is able to correctly take the degrees of freedom of the input into account. Thus, the coverage is close to the expected values.

For the set-up with 4 input measurement points, the variational approach performs significantly better than the fixed input approach with respect to coverage. However, also the variational approach produces confidence intervals that are slightly too small for the dynamic parameters k1 and k2 (Figure 3B, left) and too small for the estimated initial values (Figure 3B, right). The reason for this behaviour is the combination of the small correlation length τ and the objective function given by eq. (9). Small values of τ allow the input function to fluctuate rapidly. Especially around the input measurement points, the function values tend to approach the measured values pointwise, favoured by the time-discrete objective function. Since these fluctuations occur on a short time scale, they have little influence on the courses of A and B, and the estimation of the dynamic parameters is therefore almost unaffected.

This case shows that τ needs to be chosen appropriately for the problem: small for comprehensive input reconstruction and larger for propagation of uncertainties. A second possibility would be to adapt statistical results for conventional parameter estimation to the case of time-continuous objective functions.

Conclusion

In many applications, it is difficult to guess a proper input model because input data is not available or too noisy. Instead of parametrizing the input, we employed variational calculus to transform the ODE into an augmented system of ODEs describing the original and the input components. The solution of this system minimises the χ² functional, which plays a central role and is directly associated with the objective function of the original estimation problem. Since the extension of the χ² function to the χ² functional is not unique, the new functions, i.e. the continuous data and uncertainty representations, need to be chosen deliberately. To this end we propose smoothing splines, which have a concrete interpretation as data priors. Especially in the case of sparse sampling, we propose to use weighting functions for the uncertainties. By this means, existing measurement points are taken into account and the course between time points is not excessively constrained by the data prior.

In the field of control theory and optimal control, so-called cost functionals take the role of our χ² functional. Once the appropriate χ² functional is chosen, our approach to input estimation can be embedded in the general framework of Pontryagin’s minimum principle [9], and eqs. (6)–(7) can be identified as a Hamiltonian system.

We showed that our combined variational approach to parameter estimation enables the assembly of all information present in species and input measurements. By this means, it properly accounts for variability in the input due to measurement uncertainties and produces correct confidence bounds. Depending on the situation, the combination of all information leads to a comprehensive reconstruction of the input curves. Information about the dynamics of the input can be concentrated in the species measurements, as Figure 2 shows. In such cases our approach clearly outperforms conventional approaches. The variational method is even applicable if no input measurements are available or if species are partially unobserved. A prominent example where the presented method could be applied is the PI3K/AKT/mTOR pathway [10]. Even though various mTOR complexes and their phosphorylated states can be measured, it is not clear how they mediate AKT activation. By applying the variational method to AKT data, it would be possible to reconstruct the required mediator and subsequently relate it to mTOR complex measurements.

A completely different field of application is network modularization. The entire network can be dissected preferably at nodes where measurements are available. These nodes are then treated as independent inputs thus disentangling the network. In this way, the number of equations the variational approach has to deal with is kept small and computational efficiency is ensured.

A further step after the introduction of a time-continuous objective function would be to use the same function for parameter estimation. The time-continuous version of the objective function is closely related to the original function. Therefore, we are confident that it is possible to endow the time-continuous objective function with a statistical meaning. This would not only allow employing the same objective for parameter estimation and input reconstruction in our application. It would also enable the transfer of many more results from control theory and make them accessible to statistical inference.

References

  1. Kim D, Rath O, Kolch W, Cho KH: A Hidden Oncogenic Positive Feedback Loop Caused by Crosstalk between Wnt and ERK Pathways. Oncogene 2007, 26(31):4571-4579. doi:10.1038/sj.onc.1210230

  2. Raue A, Kreutz C, Maiwald T, Bachmann J, Schilling M, Klingmüller U, Timmer J: Structural and Practical Identifiability Analysis of Partially Observed Dynamical Models by Exploiting the Profile Likelihood. Bioinformatics 2009, 25(15):1923-1929. doi:10.1093/bioinformatics/btp358

  3. Swameye I, Müller TG, Timmer J, Sandra O, Klingmüller U: Identification of Nucleocytoplasmic Cycling as a Remote Sensor in Cellular Signaling by Databased Modeling. Proc Natl Acad Sci USA 2003, 100(3):1028-1033. doi:10.1073/pnas.0237333100

  4. Giaquinta M, Hildebrandt S: Calculus of Variations: The Lagrangian Formalism. Berlin: Springer; 1996.

  5. Kreutz C, Raue A, Timmer J: Likelihood based observability analysis and confidence intervals for predictions of dynamic models. arXiv preprint 2011, arXiv:1107.0013v1 [physics.data-an].

  6. R Development Core Team: R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2011. http://www.R-project.org [ISBN 3-900051-07-0]

  7. Suess EA, Trumbo BE: Introduction to Probability Simulation and Gibbs Sampling with R. New York: Springer; 2010.

  8. Murphy SA, van der Vaart AW: On Profile Likelihood. J Am Stat Assoc 2000, 95(450):449-465. doi:10.1080/01621459.2000.10474219

  9. Kirk DE: Optimal Control Theory: An Introduction. Mineola: Courier Dover Publications; 2004.

  10. Dalle Pezze P, Sonntag AG, Thien A, Prentzell MT, Gödel M, Fischer S, Neumann-Haefelin E, Huber TB, Baumeister R, Shanley DP, Thedieck K: A Dynamic Network Model of mTOR Signaling Reveals TSC-Independent mTORC2 Regulation. Sci Signaling 2012, 5(217):ra25. doi:10.1126/scisignal.2002469


Acknowledgements

The authors thank Raphael Engesser, Clemens Kreutz and Jan Hasenauer for their advice and valuable discussions. This work has been supported by the German Federal Ministry for Education and Research programme Medical Systems Biology SARA (0315394E).

Author information


Correspondence to Daniel Kaschek.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

DK developed the methodology, wrote the software, designed the study and wrote the manuscript. JT supervised the study and critiqued the manuscript. All authors read and approved the final manuscript.

Electronic supplementary material


Additional file 1: Supplement: A Variational Approach to Parameter Estimation in Ordinary Differential Equations. (PDF 232 KB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Kaschek, D., Timmer, J. A variational approach to parameter estimation in ordinary differential equations. BMC Syst Biol 6, 99 (2012). https://doi.org/10.1186/1752-0509-6-99
