Soil water retention curve
Soil water content is a function of soil matric potential ψ under equilibrium conditions, and this relationship θ(ψ) can be described by different types of water retention curves. The soil water retention curve is a basic soil property and is critical for predicting water-related environmental processes (Fredlund et al. 1994). Among the various soil water retention curve models, the van Genuchten (1980) model is the most widely used (Han et al. 2010). It is highly nonlinear and can be expressed as:
$$ \theta \left(\uppsi \right)={\theta}_r+\left({\theta}_s-{\theta}_r\right){\left(1+{\left(\alpha \left|\uppsi \right|\right)}^n\right)}^{-1+\frac{1}{n}} $$
(5)
where θ(ψ) is the soil water content [cm³ cm⁻³] at soil water potential ψ (−cm of water), \( {\theta}_r \) is the residual water content [cm³ cm⁻³], \( {\theta}_s \) is the saturated water content [cm³ cm⁻³], α is related to the inverse of the air-entry suction [cm⁻¹], and n is a measure of the pore-size distribution (dimensionless). We measured θ(ψ) at 16 soil water potentials for a sandy soil using Tempe pressure cells (at soil matric potentials ranging from 0 to −500 cm) and pressure plates (at soil water potentials of −1000 and −15000 cm). \( {\theta}_s \) is measured using the oven-drying method after saturation, and \( {\theta}_r \) is estimated as the water content of soil approaching air-dry conditions (Wang et al. 2002). \( {\theta}_s \) and \( {\theta}_r \) are 0.395 and 0.011 cm³ cm⁻³, respectively. Note that the soil water content at zero matric potential (0.375 cm³ cm⁻³) is lower than \( {\theta}_s \) due to soil water movement under gravity. This paper focuses on the estimation of parameters α and n and their associated 95% confidence intervals.
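As a quick numerical illustration, the van Genuchten model (Eq. 5) can be evaluated directly. The sketch below uses the measured \( {\theta}_r \) and \( {\theta}_s \) from the text, but the α and n values are illustrative placeholders, not the paper's fitted parameters:

```python
import numpy as np

def van_genuchten(psi, alpha, n, theta_r=0.011, theta_s=0.395):
    """Eq. (5): water content as a function of matric potential |psi| (cm).

    theta_r and theta_s default to the measured values quoted in the text.
    """
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(psi)) ** n) ** (-(1.0 - 1.0 / n))

# alpha (cm^-1) and n are illustrative placeholders, not the paper's fitted values.
alpha, n = 0.03, 2.0
theta = van_genuchten(np.array([0.0, 100.0, 15000.0]), alpha, n)
# At psi = 0 the bracket equals 1, so theta equals theta_s; theta then
# decreases monotonically toward theta_r as the suction |psi| increases.
```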
Parameter uncertainty estimation by linearization of nonlinear model
We express the van Genuchten model (Eq. 5) as:
$$ {\theta}_i=f\left(\beta, {\uppsi}_i\right)+{\varepsilon}_i $$
(6)
where \( {\theta}_i \) is the ith observation of the dependent variable θ(ψ) (i = 1, 2, …, 16), \( {\uppsi}_i \) is the ith observation of the predictor |ψ|, and β is the vector of parameters, which includes α and n. \( {\varepsilon}_i \) is a random error, assumed to be independent of the errors of the other observations and normally distributed with a mean of zero and a variance of \( {\sigma}^2 \).
The sum of squared residuals (SSE) for nonlinear regression can be written as:
$$ SSE\left(\beta \right)={\displaystyle \sum }{\left({\theta}_i-f\left(\beta, {\uppsi}_i\right)\right)}^2 $$
(7)
Under these assumptions, the likelihood is maximized when the SSE is minimized, namely when the partial derivative
$$ \frac{\partial SSE\left(\beta \right)}{\partial \beta }=-2{\displaystyle \sum}\left({\theta}_i-f\left(\beta, {\uppsi}_i\right)\right)\frac{\partial f\left(\beta, {\uppsi}_i\right)}{\partial \beta } $$
(8)
equals zero. Once the optimum values of β are obtained, the parameter uncertainties can be estimated by linearizing the nonlinear model function at the optimum point using a first-order Taylor series expansion (Fox and Weisberg 2010).
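The SSE minimization of Eqs. (7)-(8) can be sketched numerically. The 16-point dataset below is hypothetical (generated from the model plus noise, since the measured values are not tabulated here), and a derivative-free Nelder-Mead search stands in for whatever optimizer the fitting software uses:

```python
import numpy as np
from scipy.optimize import minimize

def van_genuchten(psi, alpha, n, theta_r=0.011, theta_s=0.395):
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(psi)) ** n) ** (-(1.0 - 1.0 / n))

# Hypothetical 16-point dataset: model output plus small normal noise.
rng = np.random.default_rng(1)
psi = np.array([1., 10., 20., 40., 60., 80., 100., 150., 200., 250.,
                300., 400., 500., 1000., 5000., 15000.])
theta_obs = van_genuchten(psi, 0.03, 2.0) + rng.normal(0.0, 0.005, psi.size)

def sse(beta):
    """Eq. (7): sum of squared residuals for beta = (alpha, n)."""
    alpha, n = beta
    if alpha <= 0 or n <= 1:
        return np.inf  # keep the search in the physically meaningful region
    return np.sum((theta_obs - van_genuchten(psi, alpha, n)) ** 2)

res = minimize(sse, x0=[0.02, 1.5], method="Nelder-Mead")
alpha_hat, n_hat = res.x  # optimized parameters minimizing the SSE
```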
Let
$$ {\boldsymbol{F}}_{\boldsymbol{ij}}=\frac{\partial f\left(\widehat{\beta},{\uppsi}_i\right)}{\partial {\beta}_j} $$
(9)
where \( \widehat{\beta} \) is the vector of optimized parameter values, and j indexes the parameters (j = 1, 2, with \( {\beta}_1=\alpha \) and \( {\beta}_2=n \)). Collecting the entries into the matrix \( \boldsymbol{F}=\left[{\boldsymbol{F}}_{ij}\right] \) gives, in our case,
$$ \boldsymbol{F}=\left[\begin{array}{cc}\hfill \frac{\partial f\left(\widehat{\beta},{\psi}_1\right)}{\partial \alpha}\hfill & \hfill \frac{\partial f\left(\widehat{\beta},{\psi}_1\right)}{\partial n}\hfill \\ {}\hfill \frac{\partial f\left(\widehat{\beta},{\psi}_2\right)}{\partial \alpha}\hfill & \hfill \frac{\partial f\left(\widehat{\beta},{\psi}_2\right)}{\partial n}\hfill \\ {}\hfill \vdots \hfill & \hfill \vdots \hfill \\ {}\hfill \frac{\partial f\left(\widehat{\beta},{\psi}_{16}\right)}{\partial \alpha}\hfill & \hfill \frac{\partial f\left(\widehat{\beta},{\psi}_{16}\right)}{\partial n}\hfill \end{array}\right] $$
(10)
where \( \frac{\partial f\left(\widehat{\beta},{\uppsi}_i\right)}{\partial \alpha } \) and \( \frac{\partial f\left(\widehat{\beta},{\uppsi}_i\right)}{\partial n} \) can be calculated by the following formulae:
$$ \frac{\partial f\left(\widehat{\beta},{\psi}_i\right)}{\partial \alpha }=\left(f\left(\left(\widehat{\alpha}+\varDelta \widehat{\alpha}\right),\widehat{n},{\psi}_i\right)-f\left(\left(\widehat{\alpha}-\varDelta \widehat{\alpha}\right),\widehat{n},{\psi}_i\right)\right)/\left(2\varDelta \widehat{\alpha}\right) $$
(11)
$$ \frac{\partial f\left(\widehat{\beta},{\psi}_i\right)}{\partial n}=\left(f\left(\widehat{\alpha},\left(\widehat{n}+\varDelta \widehat{n}\right),{\psi}_i\right)-f\left(\widehat{\alpha},\left(\widehat{n}-\varDelta \widehat{n}\right),{\psi}_i\right)\right)/\left(2\varDelta \widehat{n}\right) $$
(12)
where Δ = 0.015 sets the size of the perturbations, and \( \widehat{\alpha} \) and \( \widehat{n} \) are the optimized values of α and n, respectively.
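Eqs. (9)-(12) amount to building the 16 × 2 matrix F column by column with central differences. A minimal sketch, reading \( \varDelta \widehat{\alpha} \) and \( \varDelta \widehat{n} \) as relative perturbations of size Δ = 0.015 (one plausible reading of Eqs. 11-12), with hypothetical optimized parameters and potentials:

```python
import numpy as np

def van_genuchten(psi, alpha, n, theta_r=0.011, theta_s=0.395):
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(psi)) ** n) ** (-(1.0 - 1.0 / n))

# Hypothetical optimized values and measurement potentials (not the paper's data).
alpha_hat, n_hat = 0.03, 2.0
psi = np.array([1., 10., 20., 40., 60., 80., 100., 150., 200., 250.,
                300., 400., 500., 1000., 5000., 15000.])

delta = 0.015                                # perturbation size, Eqs. (11)-(12)
da, dn = delta * alpha_hat, delta * n_hat    # assumed relative steps

# Central differences give the two columns of F (Eq. 10), one row per psi_i.
F = np.column_stack([
    (van_genuchten(psi, alpha_hat + da, n_hat)
     - van_genuchten(psi, alpha_hat - da, n_hat)) / (2 * da),
    (van_genuchten(psi, alpha_hat, n_hat + dn)
     - van_genuchten(psi, alpha_hat, n_hat - dn)) / (2 * dn),
])
```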
The estimated asymptotic covariance matrix (V) of the estimated parameters can be obtained by (Fox and Weisberg 2010):
$$ V=\left[\begin{array}{cc}\hfill {\delta}_{\alpha \alpha}^2\hfill & \hfill {\delta}_{\alpha n}^2\hfill \\ {}\hfill {\delta}_{n\alpha}^2\hfill & \hfill {\delta}_{nn}^2\hfill \end{array}\right]={\sigma}^2{\left({\boldsymbol{F}}^{\boldsymbol{\hbox{'}}}\boldsymbol{F}\right)}^{-1} $$
(13)
where \( {\left({\boldsymbol{F}}^{\prime}\boldsymbol{F}\right)}^{-1} \) is the inverse of \( {\boldsymbol{F}}^{\prime}\boldsymbol{F} \), and \( {\boldsymbol{F}}^{\prime} \) is the transpose of F.
The variance \( {\sigma}^2 \) can be approximated by dividing the SSE by the degrees of freedom, df (Brown 2001):
$$ {\sigma}^2=\frac{SSE}{df} $$
(14)
where df is calculated as the number of observations in the sample minus the number of parameters. In this study, df equals 14 (i.e., 16 minus 2).
Therefore, \( {\delta}_{\alpha \alpha}^2 \), \( {\delta}_{nn}^2 \), and \( {\delta}_{\alpha n}^2 \) (or \( {\delta}_{n\alpha}^2 \)) in Eq. (13) are the estimated variance of α, the variance of n, and the covariance of α and n, respectively. Specifically, \( {\delta}_{\alpha \alpha} \) and \( {\delta}_{nn} \) are the standard errors used to characterize the uncertainties of α and n. At the 95% confidence level, the intervals of α and n are \( \widehat{\alpha} \) ± 1.96\( {\delta}_{\alpha \alpha} \) and \( \widehat{n} \) ± 1.96\( {\delta}_{nn} \), respectively. SigmaPlot 10.0 is used to estimate the parameters and their associated standard errors.
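Putting Eqs. (13)-(14) together, the covariance matrix and the confidence intervals follow directly from F and the SSE. In the sketch below, the optimized parameters, potentials, and SSE value are illustrative assumptions, not the paper's results:

```python
import numpy as np

def van_genuchten(psi, alpha, n, theta_r=0.011, theta_s=0.395):
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(psi)) ** n) ** (-(1.0 - 1.0 / n))

# Hypothetical optimum and matric potentials (illustrative only).
alpha_hat, n_hat = 0.03, 2.0
psi = np.array([1., 10., 20., 40., 60., 80., 100., 150., 200., 250.,
                300., 400., 500., 1000., 5000., 15000.])

# Central-difference Jacobian (Eqs. 9-12), assuming a relative step of 0.015.
d = 0.015
F = np.column_stack([
    (van_genuchten(psi, alpha_hat * (1 + d), n_hat)
     - van_genuchten(psi, alpha_hat * (1 - d), n_hat)) / (2 * d * alpha_hat),
    (van_genuchten(psi, alpha_hat, n_hat * (1 + d))
     - van_genuchten(psi, alpha_hat, n_hat * (1 - d))) / (2 * d * n_hat),
])

SSE = 2.8e-4                 # illustrative sum of squared residuals
df = psi.size - 2            # 16 observations minus 2 parameters
sigma2 = SSE / df            # Eq. (14)

V = sigma2 * np.linalg.inv(F.T @ F)   # Eq. (13): asymptotic covariance matrix
se_alpha, se_n = np.sqrt(np.diag(V))  # standard errors of alpha and n

# 95% confidence intervals using the normal quantile 1.96, as in the text.
ci_alpha = (alpha_hat - 1.96 * se_alpha, alpha_hat + 1.96 * se_alpha)
ci_n = (n_hat - 1.96 * se_n, n_hat + 1.96 * se_n)
```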
Monte Carlo method to estimate parameter uncertainty
The Monte Carlo method is a numerical technique for solving a problem by performing a large number of simulations and inferring a solution from their collective results. It yields the probability distribution of possible outcomes.
In this paper, Monte Carlo simulation is performed to generate residuals of the dependent variable θ. The residuals follow a specified distribution with a mean of zero and a standard deviation of \( \sqrt{SSE/df} \). The simulated residuals are added to the predicted θ (\( \widehat{\theta} \)) to construct new observations of the dependent variable θ. The Excel expression for obtaining the new observations is:
$$ \theta =\widehat{\theta}+\mathrm{NORM}.\mathrm{I}\mathrm{N}\mathrm{V}\left(\mathrm{RAND}\left(\right),0,\mathrm{SQRT}\left(\frac{SSE}{df}\right)\right) $$
(15)
where the function NORM.INV returns a value that follows a normal distribution with a mean of zero and a standard deviation of \( \sqrt{SSE/df} \) at a probability of RAND(). The Monte Carlo method therefore assumes a normal distribution for the errors in θ. The Excel function RAND produces a random value greater than or equal to 0 and less than 1, and SQRT returns the square root of its argument.
Monte Carlo simulations are performed 2000 times. Nonlinear regression of the simulated θ values against |ψ| yields 2000 values each for parameters α and n. The fitted values for different numbers of simulations (from 100 to 2000 in intervals of 100) are analyzed separately to determine the influence of the number of simulations on the uncertainty estimates. For each dataset, the probability distribution of α and n is determined by the Shapiro-Wilk test using SPSS 16.0, and the 95% confidence intervals of α and n are calculated to represent their uncertainties. For simplicity, only 200 simulations are shown as an example; readers can run other numbers of simulations by analogy.
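The Monte Carlo loop can also be sketched outside Excel. The Python version below uses scipy's curve_fit for the refitting and percentile-based 95% intervals; the dataset is hypothetical (model plus noise), and only 200 replicates are run, mirroring the example in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, alpha, n, theta_r=0.011, theta_s=0.395):
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(psi)) ** n) ** (-(1.0 - 1.0 / n))

rng = np.random.default_rng(42)
psi = np.array([1., 10., 20., 40., 60., 80., 100., 150., 200., 250.,
                300., 400., 500., 1000., 5000., 15000.])
# Hypothetical observations: model output plus small normal noise.
theta_obs = van_genuchten(psi, 0.03, 2.0) + rng.normal(0.0, 0.005, psi.size)

bounds = ([1e-4, 1.01], [1.0, 10.0])  # keep alpha and n in a physical range
(a0, n0), _ = curve_fit(van_genuchten, psi, theta_obs, p0=[0.02, 1.5], bounds=bounds)
theta_hat = van_genuchten(psi, a0, n0)
sd = np.sqrt(np.sum((theta_obs - theta_hat) ** 2) / (psi.size - 2))  # sqrt(SSE/df)

# Eq. (15) analogue: simulate normal residuals, add to theta_hat, refit.
alphas, ns = [], []
for _ in range(200):
    theta_sim = theta_hat + rng.normal(0.0, sd, psi.size)
    (a, nn), _ = curve_fit(van_genuchten, psi, theta_sim, p0=[a0, n0], bounds=bounds)
    alphas.append(a)
    ns.append(nn)

# Percentile-based 95% confidence intervals from the simulated parameter sets.
ci_alpha = np.percentile(alphas, [2.5, 97.5])
ci_n = np.percentile(ns, [2.5, 97.5])
```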
Bootstrap method to estimate parameter uncertainty
The bootstrap method, first introduced by Efron (1979), is an alternative approach for determining the uncertainty in any statistic caused by sampling error. Its main idea is to resample with replacement from the sample data at hand, creating a large number of “phantom samples” known as bootstrap samples (Singh and Xie 2013). The bootstrap is a nonparametric method that requires no assumptions about the data distribution.
Residuals of θ are calculated by subtracting \( \widehat{\theta} \) from the original θ measurements. The bootstrap method resamples these residuals with replacement for each θ. The resampled residuals are added to \( \widehat{\theta} \) to construct new observations of the dependent variable θ. The Excel expression for obtaining the new observations with the bootstrap method is:
$$ \theta =\widehat{\theta}+\mathrm{INDEX}\left(\mathrm{Range}\ \mathrm{of}\ \mathrm{residuals},\mathrm{INT}\left(\mathrm{RAND}\left(\right)*\mathrm{Row}\ \mathrm{number}\right)+1\right) $$
(16)
where the function INDEX returns a randomly selected residual from the array of calculated residuals (Range of residuals). INT truncates its argument to an integer; because RAND() multiplied by the row number gives a value between 0 and the row number, adding 1 yields a valid row index between 1 and the row number.
The nonparametric bootstrap is a special case of the Monte Carlo method, used to obtain a distribution of residuals of θ that is representative of the population. The idea behind the bootstrap is that the calculated residuals are an estimate of the population of errors, so their distribution can be obtained by drawing many samples with replacement from the calculated residuals. The Monte Carlo method, by contrast, generates the residuals of θ from a theoretical (i.e., normal) distribution. In this respect, the bootstrap method is more empirically based and the Monte Carlo method more theoretically based.
Similar to the Monte Carlo method, bootstrap simulations are performed 2000 times. The distribution type and 95% confidence intervals of α and n are also determined for fitted datasets with different numbers of simulations (from 100 to 2000 in intervals of 100). For simplicity, only 200 simulations are shown as an example.
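The residual-resampling bootstrap of Eq. (16) translates directly: resample the fitted residuals with replacement, add them back to \( \widehat{\theta} \), and refit. As before, the dataset below is hypothetical and scipy's curve_fit stands in for the spreadsheet workflow:

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, alpha, n, theta_r=0.011, theta_s=0.395):
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(psi)) ** n) ** (-(1.0 - 1.0 / n))

rng = np.random.default_rng(7)
psi = np.array([1., 10., 20., 40., 60., 80., 100., 150., 200., 250.,
                300., 400., 500., 1000., 5000., 15000.])
# Hypothetical observations: model output plus small normal noise.
theta_obs = van_genuchten(psi, 0.03, 2.0) + rng.normal(0.0, 0.005, psi.size)

bounds = ([1e-4, 1.01], [1.0, 10.0])  # keep alpha and n in a physical range
(a0, n0), _ = curve_fit(van_genuchten, psi, theta_obs, p0=[0.02, 1.5], bounds=bounds)
theta_hat = van_genuchten(psi, a0, n0)
residuals = theta_obs - theta_hat  # the pool that Eq. (16) resamples from

alphas, ns = [], []
for _ in range(200):
    # Resample residuals with replacement and rebuild the observations.
    theta_boot = theta_hat + rng.choice(residuals, size=residuals.size, replace=True)
    (a, nn), _ = curve_fit(van_genuchten, psi, theta_boot, p0=[a0, n0], bounds=bounds)
    alphas.append(a)
    ns.append(nn)

# Percentile-based 95% confidence intervals from the bootstrap parameter sets.
ci_alpha = np.percentile(alphas, [2.5, 97.5])
ci_n = np.percentile(ns, [2.5, 97.5])
```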