Open Journal of Mathematical Sciences

Tail distribution estimates of the mixed-fractional CEV model

Nguyen Thu Hang\(^1\), Pham Thi Phuong Thuy
Department of Mathematics, Hanoi University of Mining and Geology, 18 Pho Vien, Bac Tu Liem, Hanoi, Vietnam (N.T.H.)
Faculty of Basic Sciences, Vietnam Air Defence and Air Force Academy, Son Tay, Ha Noi, Vietnam (P.T.P.T.)
\(^{1}\)Corresponding Author: thuhangmdc@gmail.com

Abstract

The aim of this paper is to study the tail distribution of the CEV model driven by Brownian motion and fractional Brownian motion. Based on the techniques of Malliavin calculus and a result established recently in [1], we obtain an explicit estimate for tail distributions.

Keywords:

CEV model; Fractional Brownian motion; Malliavin calculus.

1. Introduction

It is well known that the CEV (constant elasticity of variance) model is one of the most popular models in finance. The dynamics of this model are described by the following Itô stochastic differential equation

\begin{align}\label{e3sq1} X_t=X_0+\int_0^t (a-bX_s)ds+\int_0^t\sigma X_s^{\alpha}dB_s,\,\,0\leq t\leq T, \end{align}
(1)
where \(X_0,a,b,\sigma\) are positive constants, \(\alpha\in (\frac{1}{2},1)\) and \(B=(B_t)_{0\leq t\leq T}\) is a standard Brownian motion.

The solution \((X_t)_{0\leq t\leq T}\) to the model (1) is a Markov process without memory. However, in the last few decades, many observations have shown that an asset price or an interest rate is not always a Markov process, since it exhibits long-range aftereffects. Many studies have pointed out that dynamics driven by fractional Brownian motion are a suitable choice for modeling such objects; see [2] and the references therein. Hence, it is important to take into account the effect of fractional noise in the model (1). We recall that a fractional Brownian motion (fBm) with Hurst parameter \(H\in (0,1)\) is a centered Gaussian process \(B^H=(B^H_t)_{0\leq t\leq T}\) with covariance function

\[R_H(t,s):=E\left[B^H_t B^H_s\right]=\frac{1}{2}\left(t^{2H}+s^{2H}-|t-s|^{2H}\right).\] For \(H>1/2\), \(B^H_t\) admits the so-called Volterra representation (see [3] pp. 277-279)
\begin{equation}\label{densityCIR02} B^H_t=\int_0^t K(t,s)d W_s, \end{equation}
(2)
where \((W_t)_{t\geq 0}\) is a standard Brownian motion, \[ K(t,s):=c_H\,s^{\frac{1}{2}-H}\int_s^t (u-s)^{H-\frac{3}{2}}u^{H-\frac{1}{2}}du,\quad\text{\(s\leq t\)} \] and \( c_H=\sqrt{\frac{H(2H-1)}{\beta(2-2H,H-\frac{1}{2})}},\) \(\text{where \(\beta\) is the Beta function.}\)
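As a sanity check on the representation (2), one can discretize the kernel \(K\) and verify the isometry \(\int_0^t K(t,s)^2\,ds = E|B^H_t|^2 = t^{2H}\) numerically. The following Python sketch is our own illustration, not part of the paper's argument; the value \(H=0.75\) and the quadrature scheme are arbitrary choices. The substitution \(w=(u-s)^{H-1/2}\) removes the singularity of the inner integrand.

```python
import math
import numpy as np

H = 0.75                       # Hurst parameter (H > 1/2); illustrative choice
p = H - 0.5
# c_H = sqrt( H(2H-1) / Beta(2-2H, H-1/2) ), Beta written via Gamma functions
beta = math.gamma(2 - 2 * H) * math.gamma(p) / math.gamma(1.5 - H)
c_H = math.sqrt(H * (2 * H - 1) / beta)

def K(t, s, n=300):
    """Volterra kernel K(t,s) = c_H s^{1/2-H} int_s^t (u-s)^{H-3/2} u^{H-1/2} du.
    The substitution w = (u-s)^p turns the inner integral into
    (1/p) int_0^{(t-s)^p} (s + w^{1/p})^p dw, whose integrand is smooth."""
    W = (t - s) ** p
    w = (np.arange(n) + 0.5) * (W / n)           # midpoint grid on [0, W]
    return c_H * s ** (0.5 - H) * (W / n) * np.sum((s + w ** (1 / p)) ** p) / p

# isometry check: int_0^t K(t,s)^2 ds should be close to t^(2H)
t, n_out = 1.0, 400
s_grid = (np.arange(n_out) + 0.5) * (t / n_out)
var = (t / n_out) * sum(K(t, s) ** 2 for s in s_grid)
print(var)   # close to t**(2*H) = 1
```

The midpoint rules avoid the integrable endpoint singularities at \(s=0\) and \(u=s\), so a moderate number of nodes already reproduces \(t^{2H}\) to within a few percent.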

In this paper, we consider the mixed-fractional CEV model, which is defined by the stochastic differential equation

\begin{align}\label{eq1} X_t=X_0+\int_0^t (a-bX_s)ds+\int_0^t \sigma X_s^{\alpha}dB_s+\int_0^t \sigma_H X_s^{\alpha}dB_s^H,\,\,0\leq t\leq T, \end{align}
(3)
where the initial condition \(X_0\) and \(a,b,\sigma,\sigma_H\) are positive constants, \(\frac{1}{2}< \alpha< 1,\) and \(B^H=(B_t^H)_{0\leq t\leq T}\) is an fBm with \(H>\frac{1}{2}.\) The stochastic integral with respect to \(B\) is the Itô integral, while the stochastic integral with respect to \(B^H\) is interpreted as a pathwise Stieltjes integral, which has been used frequently in studies related to fBm. We refer the reader to Zähle's paper [4] for a detailed presentation of this integral.
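Although no simulation is needed for the results below, Equation (3) is easy to explore numerically. The following Python sketch is our own illustration (all parameter values are arbitrary): it samples an fBm path by Cholesky factorization of the covariance \(R_H\) and runs a crude Euler scheme for (3), with a small positivity floor since the discrete scheme, unlike the continuous solution, can cross zero.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 200
dt = T / n
t = np.linspace(dt, T, n)

# fBm path via Cholesky factorisation of the covariance R_H(t, s)
H = 0.75
R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))
BH = np.linalg.cholesky(R) @ rng.standard_normal(n)
dBH = np.diff(np.concatenate(([0.0], BH)))

# Euler scheme for the mixed-fractional CEV equation (3); parameters illustrative
X0, a, b, sigma, sigma_H, alpha = 1.0, 0.1, 0.2, 0.3, 0.2, 0.75
X = np.empty(n + 1)
X[0] = X0
dB = rng.standard_normal(n) * np.sqrt(dt)
for i in range(n):
    drift = (a - b * X[i]) * dt
    diffusion = X[i] ** alpha * (sigma * dB[i] + sigma_H * dBH[i])
    X[i + 1] = max(X[i] + drift + diffusion, 1e-8)   # crude positivity floor
print(X[-1])
```

The Cholesky approach is exact in distribution for the fBm marginals on the grid; the Euler step for the pathwise integral simply uses the fBm increments, which is a standard first-order discretization when \(H>\frac12\).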

Recently, applications of the mixed-fractional CEV model in finance have been discussed extensively; see [5] and the references therein. In the present paper, our aim is to study the tail distribution of solutions to (3). This problem is important because the probability distribution is one of the most natural features of any random variable. In fact, in the last decade, tail distribution estimates for various random variables have been investigated by many authors; see e.g. [1,6,7] and the references therein. Here we focus on providing explicit estimates for the tail distribution of \(X_t;\) see Theorem 1 below.

The volatility coefficient of the model (3) violates the Lipschitz continuity condition traditionally imposed in the study of stochastic differential equations. This causes some mathematical difficulties which make the study of the model (3) particularly interesting. To handle these difficulties, our tools are the techniques of Malliavin calculus and a result established recently in [1].

The rest of the paper is organized as follows: In §2, we recall some fundamental concepts of Malliavin calculus. The main results of the paper are stated and proved in §3.

2. Preliminaries

This paper is strongly based on techniques of Malliavin calculus. For the reader's convenience, let us recall some elements of Malliavin calculus; we refer to [3] for a more complete treatment of this topic. We assume that a two-dimensional Brownian motion \(w=(B,W)\) is defined on a complete probability space \((\Omega ,\mathcal{F},P)\) and that the \(\sigma \)-field \(\mathcal{F}\) is generated by \(w\). Let us denote by \(H\) the Hilbert space \(L^{2}([0,T];\mathbb{R}^{2})\) (the same letter is also used for the Hurst parameter; the meaning will always be clear from the context), and for any function \(h=\left(h^B,h^W\right)\in H\) we set \begin{equation*} w(h)=\int_{0}^Th^{B}(t)dB_{t}+\int_{0}^Th^{W}(t)dW_{t}. \end{equation*} Let \(\mathcal{S}\) be the class of smooth and cylindrical random variables of the form \begin{equation*} F=f(w(h_{1}),\ldots ,w(h_{n})), \end{equation*} where \(n\geq 1\), \(h_{1},\ldots ,h_{n}\in H\), and \(f\) is an infinitely differentiable function such that \(f\) and all its partial derivatives have at most polynomial growth. The derivative operator of the random variable \(F\) is defined as \begin{align*} &D_{t}^{B}F=\sum_{j=1}^{n}\frac{\partial f}{\partial x_{j}}(w(h_{1}),\ldots,w(h_{n}))h_{j}^{B}(t),\\ &D_{t}^{W}F=\sum_{j=1}^{n}\frac{\partial f}{\partial x_{j}}(w(h_{1}),\ldots,w(h_{n}))h_{j}^{W}(t), \end{align*} where \(t\in [0,T]\). In this way, we interpret \(DF=(D^BF,D^WF)\) as a random variable with values in the Hilbert space \(H\). The derivative is a closable operator from \(L^{2}(\Omega )\) to \(L^{2}(\Omega ;H)\). We denote by \(\mathbb{D}^{1,2}\) the Hilbert space defined as the completion of \(\mathcal{S}\) with respect to the scalar product \begin{equation*} \left\langle F,G\right\rangle _{1,2}=E[FG]+E\left[\int_{0}^TD_{t}^{B}FD_{t}^{B}Gdt+\int_{0}^TD_{t}^{W}FD_{t}^{W}Gdt\right] . \end{equation*} A random variable \(F\) is said to be Malliavin differentiable if it belongs to \(\mathbb{D}^{1,2}.\) We have the following general estimate for tail probabilities.

Lemma 1. Let \(Z\) be a centered random variable in \(\mathbb{D}^{1,2}.\) Assume there exists a non-random constant \(L\) such that

\begin{equation}\label{lowup02ji} \int_0^T \left(E\left[D^B_rZ|\mathcal{F}_r\right]\right)^2dr+\int_0^T \left(E\left[D^W_rZ|\mathcal{F}_r\right]\right)^2dr\leq L^2\,\,a.s. \end{equation}
(4)
Then the following estimate for tail probabilities holds:
\begin{equation}\label{lowup01} P\left(Z\geq x\right)\leq e^{-\frac{x^2}{2L^2}},\quad x>0. \end{equation}
(5)

Proof. The proof is similar to that of Lemma 2.2 in [1]. By the Clark-Ocone formula we have \[Z=\int_0^T E\left[D^B_rZ|\mathcal{F}_r\right]dB_r+\int_0^T E\left[D^W_rZ|\mathcal{F}_r\right]dW_r.\] Hence, for any \(\lambda\in \mathbb{R},\) we can write \begin{align*}Ee^{\lambda Z}&=E\exp\left(\lambda\int_0^T E\left[D^B_rZ|\mathcal{F}_r\right]dB_r+\lambda\int_0^T E\left[D^W_rZ|\mathcal{F}_r\right]dW_r\right)\\ &=E\left[N_T\exp\left(\frac{\lambda^2}{2}\int_0^T \left(\left(E\left[D^B_rZ|\mathcal{F}_r\right]\right)^2+\left(E\left[D^W_rZ|\mathcal{F}_r\right]\right)^2\right)dr\right)\right]\\ &\leq e^{\frac{\lambda^2}{2}L^2}EN_T, \end{align*} where we used the assumption (4) and \((N_t)_{t\in[0,T]}\) is the stochastic process defined by \[N_t:=\exp\left(\lambda\int_0^t E\left[D^B_rZ|\mathcal{F}_r\right]dB_r+\lambda\int_0^t E\left[D^W_rZ|\mathcal{F}_r\right]dW_r-\frac{\lambda^2}{2}\int_0^t \left(\left(E\left[D^B_rZ|\mathcal{F}_r\right]\right)^2+\left(E\left[D^W_rZ|\mathcal{F}_r\right]\right)^2\right)dr\right).\] By Itô's formula, we obtain \[N_T=1+\lambda \int_0^TN_rE\left[D^B_rZ|\mathcal{F}_r\right]dB_r+\lambda \int_0^TN_rE\left[D^W_rZ|\mathcal{F}_r\right]dW_r.\] Since the integrands \(E[D^B_rZ|\mathcal{F}_r]\) and \(E[D^W_rZ|\mathcal{F}_r]\) are bounded by \(L\), the stochastic integrals are martingales, which implies that \(EN_T=1.\) Thus we get \[Ee^{\lambda Z}\leq e^{\frac{\lambda^2}{2}L^2}.\] This, together with Markov's inequality, gives us \[P\left(Z\geq x\right)=P\left(e^{\lambda Z}\geq e^{\lambda x}\right)\leq e^{\frac{\lambda^2}{2}L^2-\lambda x},\,\,\lambda>0,x\in \mathbb{R}.\] When \(x>0,\) we choose \(\lambda=x/L^2\) and obtain \[P\left(Z\geq x\right)\leq e^{-\frac{x^2}{2L^2}},\quad x>0.\] The proof of the lemma is complete.
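Lemma 1 can be illustrated on the simplest example \(Z=cB_T\), for which \(D_r^BZ=c\), \(D_r^WZ=0\), and so \(L^2=c^2T\) in (4); the bound (5) then reduces to the classical Gaussian tail inequality \(P(N(0,\sigma^2)\geq x)\leq e^{-x^2/(2\sigma^2)}\). A quick Python check (the values of \(c\), \(T\) and \(x\) below are arbitrary):

```python
import math

# Z = c * B_T is centered, with D_r^B Z = c and D_r^W Z = 0,
# so the constant in (4) is L^2 = c^2 * T.
c, T = 0.7, 2.0
L2 = c * c * T

def tail_exact(x):
    """P(Z >= x) for Z ~ N(0, c^2 T), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2 * L2))

def tail_bound(x):
    """The bound (5) from Lemma 1."""
    return math.exp(-x * x / (2 * L2))

for x in (0.5, 1.0, 2.0, 4.0):
    print(x, tail_exact(x), tail_bound(x))   # exact tail never exceeds the bound
```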

3. The main results

We first show that Equation (3) has a unique solution. Following the method used in [8], we consider the equation
\begin{equation}\label{eq2} dV_t=(1-\alpha)\left(aV_t^{\frac{-\alpha}{1-\alpha}}-bV_t-\frac{\alpha\sigma^2}{2V_t}\right)dt+\sigma(1-\alpha)dB_t+\sigma_H(1-\alpha)dB_t^H,\,\,\,t\geq 0, \end{equation}
(6)
with initial value \(V_0:=X_0^{1-\alpha}> 0.\) We put \[g(x)=(1-\alpha)\left(ax^{\frac{-\alpha}{1-\alpha}}-bx-\frac{\alpha\sigma^2}{2x}\right),\,\,\,x>0,\] and rewrite Equation (6) as \[ V_t=V_0+\int_0^tg(V_s)ds+\sigma(1-\alpha)B_t+\sigma_H(1-\alpha)B_t^H,\,\,\,t\geq 0.\]
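For the reader's convenience, let us indicate formally how (6) arises from (3) via the transformation \(V_t=X_t^{1-\alpha}\); the rigorous argument is given in [8]. Since the pathwise Stieltjes integral with respect to \(B^H\) produces no Itô correction when \(H>\frac{1}{2}\), applying Itô's formula to \(V_t=X_t^{1-\alpha}\) gives \begin{align*} dV_t&=(1-\alpha)X_t^{-\alpha}dX_t-\frac{\alpha(1-\alpha)}{2}X_t^{-\alpha-1}\sigma^2X_t^{2\alpha}dt\\ &=(1-\alpha)\left(aX_t^{-\alpha}-bX_t^{1-\alpha}-\frac{\alpha\sigma^2}{2}X_t^{\alpha-1}\right)dt+\sigma(1-\alpha)dB_t+\sigma_H(1-\alpha)dB_t^H, \end{align*} and substituting \(X_t^{-\alpha}=V_t^{\frac{-\alpha}{1-\alpha}},\) \(X_t^{1-\alpha}=V_t\) and \(X_t^{\alpha-1}=V_t^{-1}\) recovers exactly the drift \(g(V_t)\) and the noise terms in (6).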

Lemma 2. We have

\begin{equation}\label{ilf5} M:=\sup\limits_{x>0}g'(x)=\frac{a\alpha(2\alpha-1)}{2(1-\alpha)}x_0^{\frac{-1}{1-\alpha}}-b(1-\alpha), \end{equation}
(7)
where \(x_0\in (0,\infty)\) is the unique point such that \( x_0^{\frac{1}{1-\alpha}-2}=\frac{a}{(1-\alpha)^2\sigma^2}.\)

Proof. We have \[ g'(x)=-a\alpha x^{\frac{-1}{1-\alpha}} -b(1-\alpha)+\frac{\alpha(1-\alpha)\sigma^2}{2x^2} \] and \begin{align*} g''(x)&=x^{\frac{-1}{1-\alpha}-1}\left(\frac{a\alpha}{1-\alpha}-\alpha(1-\alpha)\sigma^2x^{\frac{1}{1-\alpha}-2}\right). \end{align*} We note that \(\frac{1}{2}< \alpha< 1\), and so \(\frac{1}{1-\alpha}-2>0\). Hence \(g''(x_0)=0\), with \(g''(x)>0\) for \(x< x_0\) and \(g''(x)< 0\) for \(x>x_0\); consequently, \(g'\) attains its maximum over \((0,\infty)\) at \(x_0\), i.e. \(\sup\limits_{x>0}g'(x)=g'(x_0).\) We thus obtain the relation (7).
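Formula (7) is easy to check numerically: compute \(x_0\) from its defining relation and compare the value \(M\) in (7) with the maximum of \(g'\) on a fine grid. A Python sketch follows (the parameter values are our own, purely illustrative; note that \(M\) may well be negative, as it is for the values below):

```python
import numpy as np

# Illustrative parameter values (our own choice, not from the paper)
a, b, sigma, alpha = 0.1, 0.2, 0.3, 0.75

def g_prime(x):
    """g'(x) as computed in the proof of Lemma 2."""
    return (-a * alpha * x ** (-1 / (1 - alpha)) - b * (1 - alpha)
            + alpha * (1 - alpha) * sigma ** 2 / (2 * x ** 2))

# x0 from the defining relation and M from formula (7)
x0 = (a / ((1 - alpha) ** 2 * sigma ** 2)) ** (1 / (1 / (1 - alpha) - 2))
M = (a * alpha * (2 * alpha - 1) / (2 * (1 - alpha))) * x0 ** (-1 / (1 - alpha)) \
    - b * (1 - alpha)

grid = np.linspace(1e-3, 50.0, 200001)
gmax = g_prime(grid).max()
print(M, gmax)   # the two values agree; M < 0 for these parameters
```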

Proposition 1. Equation (6) admits a unique solution, and \(V_t>0\mbox{ } a.s.\) for any \(t\geq 0.\)

Proof. We observe that the function \(g(x)=(1-\alpha)\left(ax^{\frac{-\alpha}{1-\alpha}}-bx-\frac{\alpha\sigma^2}{2x}\right)\) is Lipschitz continuous in a neighborhood of every point \(x>0,\) in particular of \(V_0>0.\) Hence, there exists a local solution \(V_t\) on the interval \([0,\tau),\) where \(\tau\) is the stopping time defined by \(\tau=\inf\left\{t>0:V_t=0\right\}.\) Suppose, to the contrary, that \(\tau< \infty.\)

For all \(t\in [0,\tau),\) we have

\begin{align}\label{ct1} 0=V_{\tau}=V_t+\int_t^{\tau}g(V_s)ds+\sigma(1-\alpha)(B_{\tau}-B_t)+\sigma_H(1-\alpha)\left(B_{\tau}^H-B_t^H\right). \end{align}
(8)
We note that \[g(x)x^{\frac{\alpha}{1-\alpha}}=(1-\alpha)\left(a-bx^{\frac{1}{1-\alpha}}-\frac{\alpha}{2}\sigma^2x^{\frac{2\alpha-1}{1-\alpha}}\right).\] Because \(\frac{1}{2}< \alpha< 1\mbox{ we have }\frac{1}{1-\alpha}>0 \mbox{ and }\frac{2\alpha-1}{1-\alpha}>0.\) Therefore, \[\lim\limits_{x\to 0^+}g(x)x^{\frac{\alpha}{1-\alpha}}=a(1-\alpha)>0.\] Hence, there exists \(\varepsilon >0\) such that \[ g(x)>\frac{a(1-\alpha)}{2x^{\frac{\alpha}{1-\alpha}}},\mbox{ }\forall x\in (0,\varepsilon). \] Since \(V_t\) is continuous, and \(V_{\tau}=0,\) there exists \(t_0\) such that \(V_t\in (0,\varepsilon), \mbox{ }\forall t\in [t_0,\tau) \) which implies that
\begin{align}\label{ct2} g(V_t)> \frac{a(1-\alpha)}{2V_t^{\frac{\alpha}{1-\alpha}}},\mbox{ }\forall t\in [t_0,\tau). \end{align}
(9)
Recall that the paths of Brownian motion are \(\beta\)-Hölder continuous for any \(\beta< \frac{1}{2}\) and the paths of fBm are \(\beta\)-Hölder continuous for any \(\beta< H.\) So, for a fixed \(\beta< \frac{1}{2}\) (note that \(\beta< H\) as well, since \(H>\frac{1}{2}\)), there exists a finite random variable \(C_{\beta}(\omega )\) such that \[ \left|\sigma(1-\alpha)(B_{\tau}-B_t)+\sigma_H(1-\alpha)\left(B_{\tau}^H-B_t^H\right)\right|\le C_{\beta}(\omega)\left|\tau-t\right|^{\beta}. \] This, combined with (8) and the positivity of \(g(V_s)\) on \([t_0,\tau)\) guaranteed by (9), gives us \begin{align*} 0< V_t&=-\int_t^{\tau}g(V_s)ds-\sigma(1-\alpha)(B_{\tau}-B_t)-\sigma_H(1-\alpha)\left(B_{\tau}^H-B_t^H\right)\\ &< \left|\sigma(1-\alpha)(B_{\tau}-B_t)+\sigma_H(1-\alpha)\left(B_{\tau}^H-B_t^H\right)\right|\\ &< C_{\beta}(\omega)\left(\tau-t\right)^{\beta},\mbox{ }\forall t\in [t_0,\tau), \end{align*} and \[ 0\leq \int_t^{\tau}g(V_s)ds< C_{\beta}(\omega)(\tau-t)^{\beta},\mbox{ }\forall t\in [t_0,\tau). \] As a consequence, it follows from (9) that \[C_{\beta}(\omega)(\tau-t)^{\beta}>\int_t^{\tau}g(V_s)ds> \int_t^{\tau}\frac{a(1-\alpha)}{2V_s^{\frac{\alpha}{1-\alpha}}}ds>\int_t^{\tau} \frac{a(1-\alpha)}{2\left[C_{\beta}(\omega)(\tau-s)^{\beta}\right]^{\frac{\alpha}{1-\alpha}}}ds,\mbox{ }\forall t\in [t_0,\tau).\] Therefore, it holds that
\begin{align}\label{ct3} C_{\beta}(\omega)(\tau-t)^{\beta}> \frac{a(1-\alpha)}{2\left[C_{\beta}(\omega)\right]^{\frac{\alpha}{1-\alpha}}}(\tau-t)^{1-\frac{\alpha\beta}{1-\alpha}},\mbox{ }\forall t\in [t_0,\tau), \end{align}
(10)
or equivalently \[ \frac{2\left[C_{\beta}(\omega)\right]^{\frac{1}{1-\alpha}}}{a(1-\alpha)}>(\tau-t)^{1-\frac{\beta}{1-\alpha}},\mbox{ }\forall t\in [t_0,\tau). \] We choose \(\beta\) such that \(\frac{1}{2}>\beta>1-\alpha\) (possible because \(\alpha>\frac{1}{2}\)); then \(1-\frac{\beta}{1-\alpha}< 0,\) and the right-hand side of the last inequality tends to \(\infty\) as \(t\to\tau,\) while the left-hand side is a finite constant. This contradiction shows that \(\tau=\infty.\) Thus Equation (6) has a global solution for any \(V_0>0.\)

The uniqueness of the solutions can be verified as follows. Let \(V_t\) and \(V_t^*\) be two solutions of (6) with the same initial condition \(V_0.\) We have

\[V_t-V_t^*=\int_0^t \left[g(V_s)-g(V_s^*)\right] ds,\,\,\,0\leq t\leq T,\] and hence, \[ \left(V_t-V_t^*\right)^2=2\int_0^t\left(V_s-V_s^*\right)\left[g(V_s)-g(V_s^*)\right] ds,\,\,\,0\leq t\leq T.\] By Lagrange's theorem, for each \(s\) there exists a random variable \(\theta_s \) lying between 0 and 1 such that \[ \left(V_t-V_t^*\right)^2=2\int_0^t g'\left(V_s+\theta_s (V_s^*-V_s)\right)\left(V_s-V_s^*\right)^2 ds,\,\,\,0\leq t\leq T.\] By Lemma 2, we deduce \[ \left(V_t-V_t^*\right)^2\le 2|M|\int_0^t \left(V_s-V_s^*\right)^2 ds\le \varepsilon +2|M|\int_0^t \left(V_s-V_s^*\right)^2 ds,\mbox{ }\forall \varepsilon >0.\] Gronwall's lemma then gives \[ \left(V_t-V_t^*\right)^2\le \varepsilon e^{2|M|t}\le \varepsilon e^{2|M|T},\mbox{ }\forall t\in[0,T], \mbox{ }\forall \varepsilon >0.\] The right-hand side tends to \(0 \) as \(\varepsilon\to 0;\) hence \(V_t=V_t^*\) for all \(t\in [0,T].\) The proof of the proposition is complete.

Proposition 2. Equation (3) has a unique solution, given by \(X_t=V_t^{\frac{1}{1-\alpha}},\,\,0\leq t\leq T,\) where \(V_t\) is the solution of (6).

Proof. The proof is similar to that of Theorem 2.1 in [8]. So we omit it.

Next, we prove that the solution \(V_t\) of (6) is Malliavin differentiable. By the Volterra representation (2) of fBm, we can rewrite (6) as
\begin{align}\label{eq3} V_t=V_0+\int_0^tg(V_s)ds+\sigma(1-\alpha)B_t+\sigma_H(1-\alpha)\int_0^tK(t,s)dW_s. \end{align}
(11)

Proposition 3. Let \((V_t)_{0\leq t\leq T}\) be the solution of the Equation (6). Then, for each \(t\in(0,T],\) the random variable \(V_t\) is Malliavin differentiable. Moreover, we have \begin{align*} D_s^B V_t &=\sigma(1-\alpha)\exp\left(\int_s^tg'(V_r)dr\right) \mathbb{I}_{[0,t]}(s),\\ D_s^W V_t &=\sigma_H(1-\alpha)\int_s^tK_1(v,s)\exp\left(\int_v^tg'(V_r)dr\right)dv\,\mathbb{I}_{[0,t]}(s), \end{align*} where \( K_1(v,s) = \frac{\partial}{\partial v}K(v,s)= c_{H}(v-s)^{H- \frac{3}{2}}v^{H-\frac{1}{2}} s^{\frac{1}{2}-H}.\)

Proof. Fix \(t\in(0,T].\) Let us compute the directional derivative \(\langle D^BV_t,h\rangle_{L^2[0,T]}\) with \(h\in L^2[0,T]:\) \[\langle D^BV_t,h\rangle_{L^2[0,T]} = \frac{dV^\varepsilon_t}{d\varepsilon}\Big|_{\varepsilon =0},\] where \(V^\varepsilon_t\) solves the following equation \[ V_t^{\varepsilon } =V_0+\int_0^t g\left(V_s^{\varepsilon }\right)ds+\sigma(1-\alpha)\left(B_t+\varepsilon\int_0^t h_sds\right)+\sigma_H(1-\alpha)B_t^H,\quad t\in[0,T],\ \varepsilon\in(0,1).\] By using Lagrange's theorem, we get

\begin{align}\label{eq4} V_t^{\varepsilon }-V_t=\int_0^tg'\left(V_s+\xi _s(V_s^{\varepsilon}-V_s)\right)(V_s^{\varepsilon}-V_s) ds+\sigma(1-\alpha)\varepsilon\int_0^th_sds \end{align}
(12)
for some random variables \(\xi_s\) lying between 0 and 1. The solution of (12) is given by \[ V_t^{\varepsilon }-V_t= \sigma(1-\alpha)\varepsilon\int_0^t h_s\left(\exp\int_s^tg'\left(V_r+\xi _r(V_r^{\varepsilon}-V_r)\right)dr\right)ds,\,\, t\in[0,T],\] which implies that \[ \frac{V_t^{\varepsilon }-V_t}{\varepsilon}=\sigma(1-\alpha)\int_0^t h_s\left(\exp\int_s^tg'\left(V_r+\xi _r(V_r^{\varepsilon}-V_r)\right)dr\right)ds.\] We recall that \(g'(x)\leq M< \infty,\mbox{ }\forall x>0.\) Hence, by the dominated convergence theorem, we obtain \begin{align*} \lim_{\varepsilon\to 0^+}\frac{V_t^{\varepsilon }-V_t}{\varepsilon}&= \sigma(1-\alpha)\int_0^t h_s\exp\left(\int_s^tg'(V_r)dr\right)ds\\ &=\sigma(1-\alpha)\int_0^T h_s\exp\left(\int_s^tg'(V_r)dr\right)\mathbb{I}_{[0,t]}(s)ds\\ &= \Big < h,\sigma(1-\alpha)\exp\left(\int_s^tg'(V_r)dr\right)\mathbb{I}_{[0,t]}\Big >_{L^2[0,T]}, \end{align*} where the limit holds in \(L^2(\Omega ).\) According to the results of Sugita [9], we can conclude that \(V_t\) is Malliavin differentiable with respect to \(B\) and its derivative is given by \begin{align*} D_s^B V_t=&\sigma(1-\alpha)\exp\left(\int_s^tg'(V_r)dr\right)\mathbb{I}_{[0,t]}(s). \end{align*} In the same way, we compute the directional derivative \(\langle D^WV_t,h\rangle_{L^2[0,T]}= \frac{dV^\theta_t}{d\theta}\Big|_{\theta =0}\), where \(V_t^{\theta }\) satisfies \begin{align*} V_t^{\theta } &=V_0+\int_0^t g\left(V_s^{\theta }\right)ds+\sigma(1-\alpha)B_t+\sigma_H(1-\alpha)\int _0^t K(t,s)d\left(W_s+\theta \int_0^s h_udu\right)\\ &=V_0+\int_0^t g\left(V_s^{\theta }\right)ds+\sigma(1-\alpha)B_t+\sigma_H(1-\alpha)\int _0^t K(t,s)\left(dW_s+\theta h_sds\right),\quad t\in[0,T],\ \theta \in[0,1). \end{align*} Using Lagrange's theorem again, we have
\begin{align}\label{eq5} V_t^{\theta } -V_t= \int_0^tg'\left(V_s+\zeta _s\left(V_s^{\theta }-V_s\right)\right)\left(V_s^{\theta }-V_s\right) ds+ \sigma_H(1-\alpha)\theta \int_0^tK(t,s)h_sds, \end{align}
(13)
where \(\zeta_s \) is a random variable between 0 and 1. The solution of (13) is represented by \[ V_t^{\theta } -V_t=\theta\sigma_H(1-\alpha)\int_0^t \left(\int_0^sK_1(s,u)h_udu\right)\exp\left(\int_s^tg'\left(V_r+\zeta _r\left(V_r^{\theta }-V_r\right) \right)dr\right)ds. \] Hence, \begin{align*} \lim_{\theta\to 0^+}\frac{V_t^{\theta } -V_t}{\theta}&=\sigma_H(1-\alpha)\int_0^t \left(\int_0^sK_1(s,u)h_udu\right)\exp\left(\int_s^tg'\left(V_r \right)dr\right)ds\\ &= \Big < h,\sigma_H(1-\alpha)\int_s^tK_1(v,s)\exp\left(\int_v^tg'(V_r)dr\right)dv\,\mathbb{I}_{[0,t]}(s)\Big >_{L^2[0,T]}, \end{align*} where the last equality follows from Fubini's theorem. Thus \(V_t\) is Malliavin differentiable with respect to \(W\) and we have \begin{align*} D_s^W V_t&=\sigma_H(1-\alpha)\int_s^tK_1(v,s)\exp\left(\int_v^tg'(V_r)dr\right)dv\,\mathbb{I}_{[0,t]}(s). \end{align*} The proof of the proposition is complete.
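The closed-form expression for \(D^B_sV_t\) can be checked against a finite-difference approximation: perturb \(B\) by \(\varepsilon\int_0^{\cdot}h_s\,ds\) with \(h\equiv 1\), re-solve the discretized Equation (6), and compare \((V^\varepsilon_t-V_t)/\varepsilon\) with \(\sigma(1-\alpha)\int_0^t \exp(\int_s^t g'(V_r)dr)ds\). The Python sketch below is our own experiment; the Euler discretization and all parameter values are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Illustrative parameters (our own choice); k = 1 - alpha
a, b, sigma, sigma_H, alpha, H = 0.1, 0.2, 0.3, 0.2, 0.75, 0.75
k = 1 - alpha

def g(v):
    return k * (a * v ** (-alpha / k) - b * v - alpha * sigma ** 2 / (2 * v))

def g_prime(v):
    return -a * alpha * v ** (-1 / k) - b * k + alpha * k * sigma ** 2 / (2 * v ** 2)

rng = np.random.default_rng(1)
T, n = 1.0, 500
dt = T / n
dB = rng.standard_normal(n) * np.sqrt(dt)

# one fBm path via Cholesky of R_H (kept fixed while B is perturbed)
t = np.linspace(dt, T, n)
R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))
dBH = np.diff(np.concatenate(([0.0], np.linalg.cholesky(R) @ rng.standard_normal(n))))

def euler_V(eps):
    """Euler scheme for (6) with B replaced by B + eps * t (i.e. h == 1)."""
    V = np.empty(n + 1)
    V[0] = 1.0
    for i in range(n):
        V[i + 1] = max(V[i] + g(V[i]) * dt + sigma * k * (dB[i] + eps * dt)
                       + sigma_H * k * dBH[i], 1e-6)
    return V

V = euler_V(0.0)
fd = (euler_V(1e-4)[-1] - V[-1]) / 1e-4           # finite-difference derivative

# closed form with h == 1:  sigma*(1-alpha) * int_0^T exp(int_s^T g'(V_r) dr) ds
I = np.cumsum(g_prime(V[1:]) * dt)                # I[i] ~ int_0^{t_{i+1}} g'(V) dr
formula = sigma * k * np.sum(np.exp(I[-1] - I) * dt)
print(fd, formula)   # the two values should agree to a few percent
```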

We are now in a position to state and prove the main result of this paper.

Theorem 1. Let \((X_t)_{0\leq t\leq T}\) be the unique solution of Equation (3). Then, for each \(t\in(0,T],\) the tail distribution of \(X_t\) satisfies \[ P(X_t\ge x)\leq\exp\left(-\frac{\left(x^{1-\alpha}-\mu_t^{1-\alpha}\right)^2}{2\left(\frac{\sigma^2\left(1-\alpha\right)^2}{2M}\left(e^{2Mt} -1\right) +\sigma_H^2\left(1-\alpha\right)^2e^{2Mt}t^{2H}\right)}\right),\,\,\,x> \mu_t,\] where \(\mu_t:=E[X_t]\) and \(M\) is defined by (7).

Proof. Recalling Proposition 3, we get \begin{align*} 0&\leq D_r^B V_t\le\sigma(1-\alpha)e^{M(t-r)},\\ 0&\leq D_r^W V_t = \sigma_H(1-\alpha)\int_r^tK_1(v,r)\exp\left(\int_v^tg'(V_u)du\right)dv\\ &\le \sigma_H(1-\alpha)\left( K(t,r)+ M\int_r^t K(v,r)e^{M(t-v)}dv\right) , \mbox{ }0\le r\le t\le T, \end{align*} where the last inequality follows from integration by parts and \(K(r,r)=0.\) Because the function \(v \mapsto K(v,r)\) is non-decreasing, this implies \begin{align*} D_r^W V_t&\le \sigma_H(1-\alpha)\left( K(t,r)+ MK(t,r)\int_r^t e^{M(t-v)}dv\right)\\ &=\sigma_H(1-\alpha)K(t,r)e^{M(t-r)}, \mbox{ }0\le r\le t\le T. \end{align*} We have \begin{align*} \int_0^T\left(E\left[D_r^BV_t|\mathcal{F}_r\right]\right)^2dr &= \int_0^t\left(E\left[D_r^BV_t|\mathcal{F}_r\right]\right)^2dr\\&\le \int_0^t\left(\sigma(1-\alpha)e^{M(t-r)}\right)^2dr\\ & = \frac{\sigma^2(1-\alpha)^2}{2M}\left(e^{2Mt} -1\right) \end{align*} and \begin{align*} \int_0^T\left(E\left[D_r^WV_t|\mathcal{F}_r\right]\right)^2dr &= \int_0^t\left(E\left[D_r^WV_t|\mathcal{F}_r\right]\right)^2dr\\ &\le \int_0^t\left(\sigma_H\left(1-\alpha\right)K(t,r)e^{M(t-r)}\right)^2dr \\&= \sigma_H^2(1-\alpha)^2\int_0^tK^2(t,r)e^{2M(t-r)}dr\\ & \le \sigma_H^2\left(1-\alpha\right)^2e^{2Mt}\int_0^tK^2(t,r)dr. \end{align*} Since \(\int_0^tK^2(t,s)ds=E|B_t^H|^2=t^{2H},\) we have \[ \int_0^T\left(E\left[D_r^WV_t|\mathcal{F}_r\right]\right)^2dr \le \sigma_H^2(1-\alpha)^2e^{2Mt}t^{2H}. \] Fix \(t\in (0,T] \) and put \(F=V_t-E[V_t];\) then \(EF=0\) and \(D_s^BF=D_s^BV_t,\ D_s^WF=D_s^WV_t\). We obtain the following estimate \begin{align*} \int_0^T\left(E\left[D_s^BF|\mathcal{F}_s\right]\right)^2ds + \int_0^T\left(E\left[D_s^WF|\mathcal{F}_s\right]\right)^2ds&=\int_0^T\left(E\left[D_s^BV_t|\mathcal{F}_s\right]\right)^2ds + \int_0^T\left(E\left[D_s^WV_t|\mathcal{F}_s\right]\right)^2ds\\ & \le \frac{\sigma^2(1-\alpha)^2}{2M}\left(e^{2Mt} -1\right) + \sigma_H^2(1-\alpha)^2e^{2Mt}t^{2H}.
\end{align*} We observe that, by Lyapunov's inequality, \(E\left[X_t^{1-\alpha}\right]\leq \left(E\left[X_t\right]\right)^{1-\alpha}=\mu_t^{1-\alpha}.\) Hence, by applying Lemma 1 to \(F,\) we obtain \begin{align*} P(X_t\ge x)&=P\left(V_t\ge x^{1-\alpha}\right)\\ &=P\left(V_t-E\left[V_t\right]\ge x^{1-\alpha}-E\left[V_t\right] \right)\\ &=P\left(F\ge x^{1-\alpha}-E\left[X_t^{1-\alpha}\right]\right)\\ &\leq P\left(F\ge x^{1-\alpha}-\mu_t^{1-\alpha}\right)\\ &\le \exp\left(-\frac{\left(x^{1-\alpha}-\mu_t^{1-\alpha}\right)^2}{2\left(\frac{\sigma^2(1-\alpha)^2}{2M}\left(e^{2Mt} -1\right) +\sigma_H^2(1-\alpha)^2e^{2Mt}t^{2H}\right)}\right),\,\,x>\mu_t. \end{align*} The proof of the theorem is complete.
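To illustrate Theorem 1, one can compare the bound with a Monte Carlo estimate of \(P(X_t\geq x)\). The Python sketch below is our own experiment: a crude Euler scheme, Cholesky-sampled fBm, arbitrary parameter values, and \(\mu_t\) replaced by its Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
X0, a, b, sigma, sigma_H, alpha, H = 1.0, 0.1, 0.2, 0.3, 0.2, 0.75, 0.75
T, n, n_paths = 1.0, 200, 2000
dt = T / n
t = np.linspace(dt, T, n)

# fBm increments for all paths at once (Cholesky of the covariance R_H)
R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))
Lc = np.linalg.cholesky(R)
BH = Lc @ rng.standard_normal((n, n_paths))
dBH = np.diff(np.vstack((np.zeros(n_paths), BH)), axis=0)

# Euler scheme for (3), vectorised over paths, with a positivity floor
X = np.full(n_paths, X0)
for i in range(n):
    dB = rng.standard_normal(n_paths) * np.sqrt(dt)
    X = np.maximum(X + (a - b * X) * dt + X ** alpha * (sigma * dB + sigma_H * dBH[i]),
                   1e-8)

# constant M from (7) and the bound of Theorem 1 at level x
x0 = (a / ((1 - alpha) ** 2 * sigma ** 2)) ** (1 / (1 / (1 - alpha) - 2))
M = a * alpha * (2 * alpha - 1) / (2 * (1 - alpha)) * x0 ** (-1 / (1 - alpha)) \
    - b * (1 - alpha)
mu = X.mean()                                   # Monte Carlo proxy for mu_t = E[X_t]
den = (sigma ** 2 * (1 - alpha) ** 2 / (2 * M) * (np.exp(2 * M * T) - 1)
       + sigma_H ** 2 * (1 - alpha) ** 2 * np.exp(2 * M * T) * T ** (2 * H))
x = 2.0                                          # a level with x > mu
bound = np.exp(-(x ** (1 - alpha) - mu ** (1 - alpha)) ** 2 / (2 * den))
emp = (X >= x).mean()
print(emp, bound)   # the empirical tail stays below the theoretical bound
```

Note that the term \(\frac{\sigma^2(1-\alpha)^2}{2M}(e^{2MT}-1)\) is positive regardless of the sign of \(M\), so the bound is well defined even when \(M<0\), as it is for these parameters.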

Remark 1. In [5], Araneda obtained an analytical expression for the transition probability density function of solutions to Equation (3). However, the stochastic integral with respect to \(B^H\) considered there is interpreted as a Wick-Itô integral, which is different from the pathwise Stieltjes integral used in our work (the relation between the two integrals can be found in §5.6 of [10]). In particular, unlike the Wick-Itô integral, the pathwise Stieltjes integral has non-zero expectation. We therefore think that it is not easy to extend the method developed in [5] to the setting of pathwise Stieltjes integrals. That is why we employ a different method to investigate the tail distributions in Theorem 1.

Remark 2. The transition probability density and tail distribution can be used to compute the price of options. In the setting of the mixed-fractional CEV model using pathwise Stieltjes integrals, to the best of our knowledge, the option pricing formula is still an open problem. Solving this problem is beyond the scope of the present paper. However, if such a formula exists then the tail distribution estimates obtained in Theorem 1 will be useful to provide an upper bound for the price of options.

4. Conclusion

In this paper, we used the techniques of Malliavin calculus to estimate the tail distribution of the mixed-fractional CEV model. Our contribution is an explicit estimate for the tail distribution. Our work thus provides one more fundamental property of CEV models and, in this sense, partly enriches the knowledge of such models.

Acknowledgments:

The authors would like to thank the anonymous referees for their valuable comments.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest

"The authors declare no conflict of interest".

References

  1. Nguyen, T. D. (2018). Tail estimates for exponential functionals and applications to SDEs. Stochastic Processes and their Applications, 128(12), 4154-4170.
  2. Mishura, Y., & Zili, M. (2018). Stochastic Analysis of Mixed Fractional Gaussian Processes. Elsevier Ltd, Oxford.
  3. Nualart, D. (2006). The Malliavin Calculus and Related Topics (2nd ed.). Springer-Verlag, Berlin.
  4. Zähle, M. (1998). Integration with respect to fractal functions and stochastic calculus I. Probability Theory and Related Fields, 111(3), 333-374.
  5. Araneda, A. A. (2020). The fractional and mixed-fractional CEV model. Journal of Computational and Applied Mathematics, 363, 106-123.
  6. Dung, N. T., & Son, T. C. (2019). Tail distribution estimates for one-dimensional diffusion processes. Journal of Mathematical Analysis and Applications, 479(2), 2119-2138.
  7. De Marco, S. (2011). Smoothness and asymptotic estimates of densities for SDEs with locally smooth coefficients and applications to square root-type diffusions. The Annals of Applied Probability, 21(4), 1282-1321.
  8. Dung, N. T. (2014). Jacobi processes driven by fractional Brownian motion. Taiwanese Journal of Mathematics, 18(3), 835-848.
  9. Sugita, H. (1985). On a characterization of the Sobolev spaces over an abstract Wiener space. Journal of Mathematics of Kyoto University, 25(4), 717-725.
  10. Biagini, F., Hu, Y., Øksendal, B., & Zhang, T. (2008). Stochastic Calculus for Fractional Brownian Motion and Applications. Springer-Verlag London, Ltd., London.