The aim of this paper is to study the tail distributions of the CEV model driven by Brownian motion and fractional Brownian motion. Based on techniques of Malliavin calculus and a result recently established in [1], we obtain an explicit estimate for these tail distributions.
It is well known that the constant elasticity of variance (CEV) model is one of the most popular models in finance. The dynamics of this model are described by the following Itô stochastic differential equation
The solution \((X_t)_{0\leq t\leq T}\) to the model (1) is a Markov process without memory. However, in the last few decades, there are many observations showing that an asset price or an interest rate is not always a Markov process since it has long-range aftereffects. Many studies have pointed out that the dynamics driven by fractional Brownian motion are a suitable choice to model such objects, see [2] and the references therein. Hence, it is important to take into account the effect of fractional noise to the model (1). We recall that a fractional Brownian motion (fBm) of Hurst parameter \(H\in (0,1)\) is a centered Gaussian process \(B^H=(B^H_t)_{0\leq t\leq T}\) with covariance function
\[R_H(t,s):=E\left[B^H_t B^H_s\right]=\frac{1}{2}\left(t^{2H}+s^{2H}-|t-s|^{2H}\right).\] For \(H>1/2\), \(B^H_t\) admits the so-called Volterra representation (see [3], pp. 277-279)
\[B^H_t=\int_0^t K(t,s)\,dW_s,\quad 0\leq t\leq T,\]
where \(W\) is a standard Brownian motion and \(K\) is the Volterra kernel.

In this paper, we consider the mixed-fractional CEV model, which is defined as the stochastic differential equation of the form
Recently, the financial applications of the mixed-fractional CEV model have been discussed extensively; see [5] and the references therein. In the present paper, our aim is to study the tail distribution of solutions to (3). This problem is important because the distribution function is one of the most natural features of any random variable. In fact, over the last decade, tail distribution estimates for various random variables have been investigated by many authors; see, e.g., [1,6,7] and the references therein. Here we focus on providing explicit estimates for the tail distribution of \(X_t;\) see Theorem 1 below.
The volatility coefficient of the model (3) violates the Lipschitz continuity condition traditionally imposed in the study of stochastic differential equations. This causes some mathematical difficulties, which make the study of the model (3) particularly interesting. To handle these difficulties, our tools are the techniques of Malliavin calculus and a result recently established in [1].
The rest of the paper is organized as follows: In §2, we recall some fundamental concepts of Malliavin calculus. The main results of the paper are stated and proved in §3.
Lemma 1. Let \(Z\) be a centered random variable in \(\mathbb{D}^{1,2}.\) Assume there exists a non-random constant \(L\) such that \[\int_0^T \left(E\left[D^B_rZ|\mathcal{F}_r\right]\right)^2dr+\int_0^T \left(E\left[D^W_rZ|\mathcal{F}_r\right]\right)^2dr\leq L^2\,\,a.s.\] Then \[P\left(Z\geq x\right)\leq e^{-\frac{x^2}{2L^2}},\quad x>0.\]
Proof. The proof is similar to that of Lemma 2.2 in [1]. By the Clark-Ocone formula we have \[Z=\int_0^T E\left[D^B_rZ|\mathcal{F}_r\right]dB_r+\int_0^T E\left[D^W_rZ|\mathcal{F}_r\right]dW_r.\] For any \(\lambda\in \mathbb{R},\) we define the stochastic process \((N_t)_{t\in[0,T]}\) by \[N_t:=\exp\left(\lambda\int_0^t E\left[D^B_rZ|\mathcal{F}_r\right]dB_r+\lambda\int_0^t E\left[D^W_rZ|\mathcal{F}_r\right]dW_r-\frac{\lambda^2}{2}\int_0^t \left(\left(E\left[D^B_rZ|\mathcal{F}_r\right]\right)^2+\left(E\left[D^W_rZ|\mathcal{F}_r\right]\right)^2\right)dr\right).\] We then obtain \begin{align*}Ee^{\lambda Z}&=E\exp\left(\lambda\int_0^T E\left[D^B_rZ|\mathcal{F}_r\right]dB_r+\lambda\int_0^T E\left[D^W_rZ|\mathcal{F}_r\right]dW_r\right)\\ &=E\left[N_T\exp\left(\frac{\lambda^2}{2}\int_0^T \left(\left(E\left[D^B_rZ|\mathcal{F}_r\right]\right)^2+\left(E\left[D^W_rZ|\mathcal{F}_r\right]\right)^2\right)dr\right)\right]\\ &\leq e^{\frac{\lambda^2}{2}L^2}EN_T, \end{align*} where the last inequality follows from the assumption of the lemma. By using the Itô formula, we obtain \[N_T=1+\lambda \int_0^TN_rE\left[D^B_rZ|\mathcal{F}_r\right]dB_r+\lambda \int_0^TN_rE\left[D^W_rZ|\mathcal{F}_r\right]dW_r,\] which implies that \(EN_T=1.\) Thus we get \[Ee^{\lambda Z}\leq e^{\frac{\lambda^2}{2}L^2}EN_T= e^{\frac{\lambda^2}{2}L^2}.\] This, together with Markov's inequality, gives us \[P\left(Z\geq x\right)=P\left(e^{\lambda Z}\geq e^{\lambda x}\right)\leq e^{\frac{\lambda^2}{2}L^2-\lambda x},\,\,\lambda>0,\,x\in \mathbb{R}.\] When \(x>0,\) we choose \(\lambda=x/L^2\) and obtain \[P\left(Z\geq x\right)\leq e^{-\frac{x^2}{2L^2}},\quad x>0.\] The proof of the lemma is complete.
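Two elementary observations may be worth recording here. First, the choice \(\lambda=x/L^2\) is the optimal one, since \[\min_{\lambda>0}\left(\frac{\lambda^2}{2}L^2-\lambda x\right)=-\frac{x^2}{2L^2},\quad x>0.\] Second, since \(-Z\) is again centered, belongs to \(\mathbb{D}^{1,2}\) and satisfies the same assumption as \(Z,\) the same argument yields the two-sided bound \(P\left(|Z|\geq x\right)\leq 2e^{-\frac{x^2}{2L^2}}\) for \(x>0;\) only the one-sided estimate is needed in the sequel.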
Lemma 2. We have \[\sup_{x>0}g'(x)=g'(x_0)=:M<\infty, \tag{7}\] where \(g(x)=(1-\alpha)\left(ax^{\frac{-\alpha}{1-\alpha}}-bx-\frac{\alpha\sigma^2}{2x}\right)\) and \(x_0\) is the unique positive solution of \(g''(x)=0.\)
Proof. We have \[ g'(x)=-a\alpha x^{\frac{-1}{1-\alpha}} -b(1-\alpha)+\frac{\alpha(1-\alpha)\sigma^2}{2x^2} \] and \begin{align*} g''(x)&=x^{\frac{-1}{1-\alpha}-1}\left(\frac{a\alpha}{1-\alpha}-\alpha(1-\alpha)\sigma^2x^{\frac{1}{1-\alpha}-2}\right). \end{align*} We note that \(\frac{1}{2}<\alpha<1,\) and hence \(\frac{1}{1-\alpha}-2=\frac{2\alpha-1}{1-\alpha}>0.\) Consequently, \(g''\) has a unique positive zero \(x_0,\) with \(g''(x)>0\) for \(0<x<x_0\) and \(g''(x)<0\) for \(x>x_0.\) Hence \(g'\) attains its maximum on \((0,\infty)\) at \(x_0,\) that is, \(\sup\limits_{x>0}g'(x)=g'(x_0).\) We thus obtain the relation (7).
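For the reader's convenience, the point \(x_0\) can be computed explicitly from the expression of \(g''\) above: the equation \(g''(x_0)=0\) is equivalent to \(\frac{a\alpha}{1-\alpha}=\alpha(1-\alpha)\sigma^2x_0^{\frac{2\alpha-1}{1-\alpha}},\) which gives \[x_0=\left(\frac{a}{(1-\alpha)^2\sigma^2}\right)^{\frac{1-\alpha}{2\alpha-1}}\quad\text{and}\quad M=g'(x_0)=-a\alpha x_0^{\frac{-1}{1-\alpha}}-b(1-\alpha)+\frac{\alpha(1-\alpha)\sigma^2}{2x_0^2}.\] In particular, \(M\) is a finite constant depending only on \(a,b,\sigma\) and \(\alpha.\)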
Proposition 1. The Equation (6) admits a unique positive solution. Moreover, \(V_t>0\mbox{ } a.s.\) for any \(t\geq 0.\)
Proof. We observe that the function \(g(x)=(1-\alpha)\left(ax^{\frac{-\alpha}{1-\alpha}}-bx-\frac{\alpha\sigma^2}{2x}\right)\) is Lipschitz continuous in a neighborhood of \(V_0>0.\) Hence, there exists a local solution \(V_t\) on the interval \([0,\tau),\) where \(\tau\) is the stopping time defined by \(\tau=\inf\left\{t>0:V_t=0\right\}.\) Assume that \(\tau< \infty.\)
For all \(t\in [0,\tau),\) we have
The uniqueness of the solutions can be verified as follows. Let \(V_t\) and \(V_t^*\) be two solutions of (6) with the same initial condition \(V_0.\) We have
\[V_t-V_t^*=\int_0^t \left[g(V_s)-g(V_s^*)\right] ds,\,\,\,0\leq t\leq T,\] and hence, \[ \left(V_t-V_t^*\right)^2=2\int_0^t\left(V_s-V_s^*\right)\left[g(V_s)-g(V_s^*)\right] ds,\,\,\,0\leq t\leq T.\] By using Lagrange's theorem, there exists a random variable \(\theta=\theta(s) \) lying between 0 and 1 such that \[ \left(V_t-V_t^*\right)^2=2\int_0^t g'\left(V_s+\theta (V_s^*-V_s)\right)\left(V_s-V_s^*\right)^2 ds,\,\,\,0\leq t\leq T.\] By Lemma 2, we deduce \[ \left(V_t-V_t^*\right)^2\le 2M\int_0^t \left(V_s-V_s^*\right)^2 ds\le \varepsilon +2M\int_0^t \left(V_s-V_s^*\right)^2 ds,\mbox{ }\forall \varepsilon >0.\] We use Gronwall's lemma to get \[ \left(V_t-V_t^*\right)^2\le \varepsilon e^{2Mt}\le \varepsilon e^{2MT},\mbox{ }\forall t\in[0,T], \mbox{ }\forall \varepsilon >0.\] The right-hand side converges to \(0 \) as \(\varepsilon\to 0;\) hence, \(V_t=V_t^*,\mbox{ }\forall t\in [0,T].\) The proof of the proposition is complete.

Proposition 2. The Equation (3) has a unique solution given by \(X_t=V_t^{\frac{1}{1-\alpha}},\,\,0\leq t\leq T,\) where \(V_t\) is the solution of (6).
Proof. The proof is similar to that of Theorem 2.1 in [8]. So we omit it.
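To indicate the idea behind Proposition 2, let us sketch the transformation argument under the assumption that the Equation (3) takes the form \(dX_t=(a-bX_t)\,dt+\sigma X_t^{\alpha}\,dB_t+\sigma_H X_t^{\alpha}\,dB_t^H,\) which is the form consistent with the function \(g\) and with the coefficients \(\sigma(1-\alpha),\sigma_H(1-\alpha)\) appearing below; we refer to [8] for the precise argument. Setting \(V_t=X_t^{1-\alpha}\) and applying the Itô formula, in which only the Brownian part contributes a second-order term because \(B^H\) has zero quadratic variation for \(H>1/2,\) we obtain \begin{align*} dV_t&=(1-\alpha)X_t^{-\alpha}\,dX_t-\frac{\alpha(1-\alpha)}{2}\sigma^2X_t^{\alpha-1}\,dt\\ &=(1-\alpha)\left(aV_t^{\frac{-\alpha}{1-\alpha}}-bV_t-\frac{\alpha\sigma^2}{2V_t}\right)dt+\sigma(1-\alpha)\,dB_t+\sigma_H(1-\alpha)\,dB_t^H\\ &=g(V_t)\,dt+\sigma(1-\alpha)\,dB_t+\sigma_H(1-\alpha)\,dB_t^H, \end{align*} which is precisely the Equation (6). Inverting the transformation gives \(X_t=V_t^{\frac{1}{1-\alpha}}.\)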
Next, we will prove that the solution \(V_t\) of (6) is Malliavin differentiable. By the Volterra representation of fBm, we can rewrite (6) as the following equation
\[V_t=V_0+\int_0^t g(V_s)\,ds+\sigma(1-\alpha)B_t+\sigma_H(1-\alpha)\int_0^t K(t,s)\,dW_s,\quad 0\leq t\leq T.\]

Proposition 3. Let \((V_t)_{0\leq t\leq T}\) be the solution of the Equation (6). Then, for each \(t\in(0,T],\) the random variable \(V_t\) is Malliavin differentiable. Moreover, we have \begin{align*} D_s^B V_t &=\sigma(1-\alpha)\exp\left(\int_s^tg'(V_r)dr\right) \mathbb{I}_{[0,t]}(s),\\ D_s^W V_t &=\sigma_H(1-\alpha)\int_s^tK_1(v,s)\exp\left(\int_v^tg'(V_r)dr\right)dv\,\mathbb{I}_{[0,t]}(s), \end{align*} where \( K_1(v,s) = \frac{\partial}{\partial v}K(v,s)= c_{H}(v-s)^{H- \frac{3}{2}}v^{H-\frac{1}{2}} s^{\frac{1}{2}-H}.\)
Proof. Fix \(t\in(0,T].\) Let us compute the directional derivative \(\langle D^BV_t,h\rangle_{L^2[0,T]}\) with \(h\in L^2[0,T]:\) \[\langle D^BV_t,h\rangle_{L^2[0,T]} = \frac{dV^\varepsilon_t}{d\varepsilon}\Big|_{\varepsilon =0},\] where \(V^\varepsilon_t\) solves the following equation \[ V_t^{\varepsilon } =V_0+\int_0^t g\left(V_s^{\varepsilon }\right)ds+\sigma(1-\alpha)\left(B_t+\varepsilon\int_0^t h_sds\right)+\sigma_H(1-\alpha)B_t^H,\quad t\in[0,T],\,\varepsilon\in(0,1).\] By using Lagrange's theorem, we get
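Let us summarize the structure of the remaining computation; we only sketch the main step. Writing \(Y_t:=\frac{dV^\varepsilon_t}{d\varepsilon}\big|_{\varepsilon =0},\) Lagrange's theorem applied to the equation above yields, after letting \(\varepsilon\to 0,\) the linear integral equation \[Y_t=\int_0^t g'(V_s)Y_s\,ds+\sigma(1-\alpha)\int_0^t h_s\,ds,\] whose unique solution is \[Y_t=\sigma(1-\alpha)\int_0^t \exp\left(\int_s^t g'(V_r)\,dr\right)h_s\,ds=\langle D^BV_t,h\rangle_{L^2[0,T]}.\] This yields the stated expression for \(D_s^BV_t;\) the expression for \(D_s^WV_t\) is obtained analogously by perturbing \(W\) instead of \(B\) and using the kernel \(K\) of the Volterra representation.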
Theorem 1. Let \((X_t)_{0\leq t\leq T}\) be the unique solution of the Equation (3). Then, for each \(t\in(0,T],\) the tail distribution of \(X_t\) satisfies \[ P(X_t\ge x)\leq\exp\left(-\frac{\left(x^{1-\alpha}-\mu_t^{1-\alpha}\right)^2}{2\left(\frac{\sigma^2\left(1-\alpha\right)^2}{2M}\left(e^{2Mt} -1\right) +\sigma_H^2\left(1-\alpha\right)^2e^{2Mt}t^{2H}\right)}\right),\,\,\,x> \mu_t,\] where \(\mu_t:=E[X_t]\) and \(M\) is defined by (7).
Proof. Recalling Proposition 3, we get \begin{align*} 0&\leq D_r^B V_t\le\sigma(1-\alpha)e^{M(t-r)},\\ 0&\leq D_r^W V_t = \sigma_H(1-\alpha)\int_r^tK_1(v,r)\exp\left(\int_v^tg'(V_u)du\right)dv\\ &\le \sigma_H(1-\alpha)\left( K(t,r)+ M\int_r^t K(v,r)e^{M(t-v)}dv\right) , \mbox{ }0\le r\le t\le T, \end{align*} where the last inequality follows from the bound \(g'\le M\) and an integration by parts in \(v\) (note that \(K(r,r)=0\)). Because the function \(v \mapsto K(v,r)\) is non-decreasing, this implies \begin{align*} D_r^W V_t&\le \sigma_H(1-\alpha)\left( K(t,r)+ MK(t,r)\int_r^t e^{M(t-v)}dv\right)\\ &=\sigma_H(1-\alpha)K(t,r)e^{M(t-r)}, \mbox{ }0\le r\le t\le T. \end{align*} We have \begin{align*} \int_0^T\left(E\left[D_r^BV_t|\mathcal{F}_r\right]\right)^2dr &= \int_0^t\left(E\left[D_r^BV_t|\mathcal{F}_r\right]\right)^2dr\\&\le \int_0^t\left(\sigma(1-\alpha)e^{M(t-r)}\right)^2dr\\ & = \frac{\sigma^2(1-\alpha)^2}{2M}\left(e^{2Mt} -1\right) \end{align*} and \begin{align*} \int_0^T\left(E\left[D_r^WV_t|\mathcal{F}_r\right]\right)^2dr &= \int_0^t\left(E\left[D_r^WV_t|\mathcal{F}_r\right]\right)^2dr\\ &\le \int_0^t\left(\sigma_H\left(1-\alpha\right)K(t,r)e^{M(t-r)}\right)^2dr \\&= \sigma_H^2(1-\alpha)^2\int_0^tK^2(t,r)e^{2M(t-r)}dr\\ & \le \sigma_H^2\left(1-\alpha\right)^2e^{2Mt}\int_0^tK^2(t,r)dr. \end{align*} Since \(\int_0^tK^2(t,s)ds=E|B_t^H|^2=t^{2H},\) we have \[ \int_0^T\left(E\left[D_r^WV_t|\mathcal{F}_r\right]\right)^2dr \le \sigma_H^2(1-\alpha)^2e^{2Mt}t^{2H}. \] Fix \(t\in (0,T] \) and put \(F=V_t-E[V_t];\) then \(EF=0\) and \(D_s^BF=D_s^BV_t,\,D_s^WF=D_s^WV_t\). We obtain the following estimate \begin{align*} \int_0^T\left(E\left[D_s^BF|\mathcal{F}_s\right]\right)^2ds + \int_0^T\left(E\left[D_s^WF|\mathcal{F}_s\right]\right)^2ds&=\int_0^T\left(E\left[D_s^BV_t|\mathcal{F}_s\right]\right)^2ds + \int_0^T\left(E\left[D_s^WV_t|\mathcal{F}_s\right]\right)^2ds\\ & \le \frac{\sigma^2(1-\alpha)^2}{2M}\left(e^{2Mt} -1\right) + \sigma_H^2(1-\alpha)^2e^{2Mt}t^{2H}. \end{align*} We observe that, by Lyapunov's inequality, \(E\left[X_t^{1-\alpha}\right]\leq \left(E\left[X_t\right]\right)^{1-\alpha}=\mu_t^{1-\alpha}.\) Hence, by applying Lemma 1 to \(F,\) we obtain \begin{align*} P(X_t\ge x)&=P\left(V_t\ge x^{1-\alpha}\right)\\ &=P\left(V_t-E\left[V_t\right]\ge x^{1-\alpha}-E\left[V_t\right] \right)\\ &=P\left(F\ge x^{1-\alpha}-E\left[X_t^{1-\alpha}\right]\right)\\ &\leq P\left(F\ge x^{1-\alpha}-\mu_t^{1-\alpha}\right)\\ &\le \exp\left(-\frac{\left(x^{1-\alpha}-\mu_t^{1-\alpha}\right)^2}{2\left(\frac{\sigma^2(1-\alpha)^2}{2M}\left(e^{2Mt} -1\right) +\sigma_H^2(1-\alpha)^2e^{2Mt}t^{2H}\right)}\right),\,\,x>\mu_t. \end{align*} The proof of the theorem is complete.
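It is worth noting the qualitative content of this estimate. Writing \(\Sigma_t:=\frac{\sigma^2(1-\alpha)^2}{2M}\left(e^{2Mt}-1\right)+\sigma_H^2(1-\alpha)^2e^{2Mt}t^{2H}\) as a shorthand for the quantity appearing in the denominator, the bound of Theorem 1 behaves, for fixed \(t\) and large \(x,\) like \[\exp\left(-\frac{\left(x^{1-\alpha}-\mu_t^{1-\alpha}\right)^2}{2\Sigma_t}\right)\approx\exp\left(-\frac{x^{2(1-\alpha)}}{2\Sigma_t}\right),\] which decays faster than any polynomial but, since \(2(1-\alpha)<2,\) slower than a Gaussian tail. This is natural because \(X_t=V_t^{\frac{1}{1-\alpha}}\) is a power greater than one of a random variable \(V_t\) admitting a Gaussian-type tail bound.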
Remark 1. In [5], Araneda obtained an analytical expression for the transition probability density function of solutions to the Equation (3). However, the stochastic integral with respect to \(B^H\) considered there is interpreted as a Wick-Itô integral. This integral is different from the pathwise Stieltjes integral used in our work (the relation between the two integrals can be found in §5.6 of [10]). In particular, unlike the Wick-Itô integral, the pathwise Stieltjes integral has, in general, non-zero expectation. We therefore think that it is not easy to extend the method developed in [5] to the setting of pathwise Stieltjes integrals. That is why we have to employ a different method to investigate the tail distributions, as in Theorem 1.
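To make the difference between the two integrals concrete, recall that for \(H>1/2\) and for sufficiently regular integrands \(u\) they are, roughly speaking, related by a trace correction of the form \[\int_0^T u_s\,dB^H_s=\int_0^T u_s\,\delta B^H_s+\int_0^T\!\!\int_0^T D^H_s u_t\,\phi(s,t)\,ds\,dt,\qquad \phi(s,t):=H(2H-1)|s-t|^{2H-2},\] where the left-hand side is the pathwise Stieltjes integral, \(\delta B^H\) denotes the Wick-Itô (divergence-type) integral and \(D^H\) is the Malliavin derivative with respect to \(B^H;\) we refer to [10] for the precise statement and the conditions on \(u.\) The correction term is exactly what produces the non-zero expectation mentioned above.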
Remark 2. The transition probability density and the tail distribution can be used to compute the price of options. In the setting of the mixed-fractional CEV model with pathwise Stieltjes integrals, to the best of our knowledge, the option pricing formula is still an open problem. Solving this problem is beyond the scope of the present paper. However, if such a formula exists, then the tail distribution estimates obtained in Theorem 1 will be useful for providing an upper bound for the price of options.
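As a simple illustration of this last point, consider a call-type payoff \((X_T-\kappa)^{+},\) where the level \(\kappa>\mu_T\) plays the role of a strike and is introduced here only for illustration. Writing the expected payoff as an integral of the tail distribution and applying Theorem 1, we get \[E\left[(X_T-\kappa)^{+}\right]=\int_\kappa^{\infty}P\left(X_T\geq y\right)dy\leq \int_\kappa^{\infty}\exp\left(-\frac{\left(y^{1-\alpha}-\mu_T^{1-\alpha}\right)^2}{2\left(\frac{\sigma^2(1-\alpha)^2}{2M}\left(e^{2MT}-1\right)+\sigma_H^2(1-\alpha)^2e^{2MT}T^{2H}\right)}\right)dy,\] and the right-hand side is finite and can be evaluated numerically. Of course, this controls only the expected payoff under the dynamics (3); turning it into a genuine price bound would require a pricing formula or an appropriate pricing measure, which, as discussed above, is still an open problem in this setting.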