Rate of convergence in total variation for the generalized inverse Gaussian and the Kummer distributions

Author(s): Essomanda KONZOU 1
1Institut Elie Cartan de Lorraine, UMR CNRS 7502, Université de Lorraine; Laboratoire d’Analyse, de Modélisations Mathématiques et Applications, Université de Lomé, Lomé.
Copyright © Essomanda KONZOU. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The generalized inverse Gaussian distribution converges in law to the inverse gamma or the gamma distribution under certain conditions on the parameters; likewise, the Kummer distribution converges to the gamma or the beta distribution. We provide explicit upper bounds for the total variation distance between such a generalized inverse Gaussian distribution and its gamma or inverse gamma limit law, on the one hand, and between the Kummer distribution and its gamma or beta limit law on the other hand.

Keywords: Total variation distance; Generalized inverse Gaussian distribution; Kummer’s distribution; Gamma distribution; Inverse gamma distribution; Beta distribution.

1. Introduction

The generalized inverse Gaussian (hereafter GIG) distribution with parameters \(p \in R, a > 0, b > 0\) has density

\begin{equation} \label{dgig} g_{p,a,b}(x)= \dfrac{\left( a/b\right)^{p/2}}{2K_{p}(\sqrt{ab})}x^{p-1}e^{-\frac{1}{2}\left( ax+b/x\right)}, \quad x>0, \end{equation}
(1)
where \(K_p\) is the modified Bessel function of the third kind.
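Density (1) is straightforward to evaluate numerically; the following minimal Python sketch uses scipy, whose routine kv computes the modified Bessel function \(K_p\) (the helper name gig_pdf is ours, not part of any cited package).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # kv(p, z) computes the modified Bessel function K_p

def gig_pdf(x, p, a, b):
    """Density (1) of the GIG(p, a, b) distribution, for x > 0."""
    const = (a / b) ** (p / 2) / (2.0 * kv(p, np.sqrt(a * b)))
    return const * x ** (p - 1) * np.exp(-0.5 * (a * x + b / x))

# sanity check: the density integrates to 1
print(quad(gig_pdf, 0, np.inf, args=(-0.5, 1.0, 1.0))[0])  # ~ 1.0
```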

In [1], the authors established the rate of convergence of the GIG distribution to the gamma distribution by Stein’s method. In order to compare the rate obtained via Stein’s method with the rate obtained using another distance, they also established an explicit upper bound on the total variation distance between the GIG random variable and the gamma random variable, which is of order \( n ^ {-1/4} \) in the case \( p = \frac{1}{2} \). We generalize this result by providing the order of the rate of convergence in total variation of the GIG distribution to the gamma distribution for all \( p = k + \frac{1}{2} \), \( k \in N \). In particular, we obtain a rate of convergence of order \( n ^ {- 1/2} \) for \( p = \frac{1}{2} \), which is better than the one in [1].

For \(a >0\), \(b\in R\), \(c >0\), the Kummer distribution \(K(a, b, c)\) has density function

\begin{equation} \label{dkummer} k_{ a, b, c}(x)=\frac{ 1 }{\Gamma (a)\psi (a, 1-b;c)} x^{a-1} (1+x)^{-a-b} e^{- c x}, \ (x>0) \end{equation}
(2)
where \(\psi\) is the confluent hypergeometric function of the second kind and \(\Gamma\) is the gamma function. Details on the GIG and the Kummer distributions can be found in [1,2,3,4,5] and references therein.
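Density (2) can be evaluated in the same spirit with scipy’s hyperu, which computes the confluent hypergeometric function of the second kind \(\psi(a,b;x)\); a minimal sketch (the helper name kummer_pdf is ours).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu  # hyperu(a, b, x) computes psi(a, b; x)

def kummer_pdf(x, a, b, c):
    """Density (2) of the Kummer distribution K(a, b, c), for x > 0."""
    const = 1.0 / (gamma(a) * hyperu(a, 1.0 - b, c))
    return const * x ** (a - 1) * (1.0 + x) ** (-a - b) * np.exp(-c * x)

# sanity check: the density integrates to 1
print(quad(kummer_pdf, 0, np.inf, args=(2.0, 1.0, 1.0))[0])  # ~ 1.0
```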

For \(\theta>0\), \(\lambda >0\), the gamma distribution \(\gamma(\theta,\lambda)\) has density function

\begin{equation*} \gamma(\theta,\lambda)(x)=\frac{ \lambda^{\theta} }{\Gamma (\theta)} x^{\theta-1} e^{- \lambda x} I_{\{x>0\}}. \end{equation*}

For \(\theta>0\), \(\lambda >0\), the inverse gamma distribution \(I\gamma(\theta,\lambda)\) has density function

\begin{equation*} I\gamma(\theta,\lambda)(x)=\frac{ \lambda^{\theta} }{\Gamma (\theta)} x^{-\theta-1} e^{- \lambda/ x} I_{\{x>0\}}. \end{equation*}

For \(a>0\), \(b>0\), the beta distribution of type 2, \(\beta^{(2)}(a,b)\), has density

\begin{equation*} \beta^{(2)}(a,b)(x)=\dfrac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{a-1}(1+x)^{-a-b} I_{\{x>0\}}. \end{equation*}

We have the following definition and property of the total variation distance.

Definition 1. Let \(W\) and \(Z\) be two continuous real random variables, with densities \(f_W\) and \(f_Z\) respectively. Then, the total variation distance between \(W\) and \(Z\) is given by

\begin{equation} d_{TV}(W,Z)=\frac{1}{2}\displaystyle\int_{ R}\left| f_W(x)-f_Z(x)\right| dx. \end{equation}
(3)

Property 1. Let \(W\) and \(Z\) be two continuous random variables, and let \(f_W\) (resp. \(f_Z\)) denote the density of \(W\) (resp. \(Z\)) on \((0,\infty)\). Assume that the function \(x\mapsto f_W(x)-f_Z(x)\) has a unique zero \(\lambda\) on \((0,\infty)\).

  1. If \(f_W(x)-f_Z(x) \) is positive for \(x<\lambda\) and negative for \(x>\lambda\), then \[d_{TV}(W,Z)=\int_{0}^{\lambda}\left( f_W(x)-f_Z(x)\right) dx.\]
  2. If \(f_W(x)-f_Z(x) \) is negative for \(x<\lambda\) and positive for \(x>\lambda\), then \[d_{TV}(W,Z)=\int_{0}^{\lambda}\left( f_Z(x)-f_W(x)\right) dx.\]

Proof. Let \(F_W\) (resp. \(F_Z\)) be the distribution function of \(W\) (resp. \(Z\)). If \(f_W(x)-f_Z(x) \) is positive for \(x<\lambda\) and negative for \(x>\lambda\), then \[ \begin{split} d_{TV}(W,Z)&=\frac{1}{2}\displaystyle\int_{0}^{\infty}\left| f_W(x)-f_Z(x)\right| dx\\ &=\frac{1}{2}\displaystyle\int_{0}^{\lambda}\left( f_W(x)-f_Z(x)\right) dx-\frac{1}{2}\displaystyle\int_{\lambda}^{\infty}\left( f_W(x)-f_Z(x)\right) dx\\ &=\frac{1}{2}\displaystyle\int_{0}^{\lambda}\left( f_W(x)-f_Z(x)\right) dx+\frac{1}{2}[F_W(\lambda)-F_Z(\lambda)]\\ &=\frac{1}{2}\displaystyle\int_{0}^{\lambda}\left( f_W(x)-f_Z(x)\right) dx+\frac{1}{2}\displaystyle\int_{0}^{\lambda}\left( f_W(x)-f_Z(x)\right) dx\\ &=\displaystyle\int_{0}^{\lambda}\left( f_W(x)-f_Z(x)\right) dx,\\ \end{split} \] which proves item 1. For item 2, using similar arguments as in the previous case leads to the result.

Remark 1. The support of the densities may be any interval, but here we take this support to be \((0,\infty)\) for the purpose of the application to the GIG and Kummer’s distributions.
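In practice, (3) and Property 1 translate directly into numerical procedures. The sketch below uses our own helper names; the bracket containing the single crossing point has to be supplied by the user.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def tv_direct(f_w, f_z):
    """Definition 1: d_TV = (1/2) * integral of |f_W - f_Z| over (0, inf)."""
    val, _ = quad(lambda x: abs(f_w(x) - f_z(x)), 0.0, np.inf, limit=200)
    return 0.5 * val

def tv_single_crossing(f_w, f_z, bracket):
    """Property 1: with a unique sign change at lambda inside `bracket`,
    d_TV = | integral over (0, lambda) of (f_W - f_Z) |."""
    lam = brentq(lambda x: f_w(x) - f_z(x), *bracket)
    val, _ = quad(lambda x: f_w(x) - f_z(x), 0.0, lam, limit=200)
    return abs(val)
```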

The aim of this paper is to provide a bound for the distance between a GIG (resp. a Kummer’s) random variable and its limiting inverse gamma or gamma variable (resp. gamma or beta variable), and therefore to give a contribution to the study of the rate of convergence in the limit theorems involved. Section 2 presents the main results; their proofs are given in Section 3.

2. Main results

2.1. On the rate of convergence of the generalized inverse Gaussian distribution to the inverse gamma distribution

The first main result is presented in Theorem 1 below. We recall the convergence of the GIG distribution to the inverse gamma distribution as Proposition 1.

Proposition 1. For \(k\in N\), \(b>0\), let \((X_n)_{n\geq 1}\) be a sequence of random variables such that \(X_n\sim GIG\left( -k-\frac{1}{2},\dfrac{1}{n},b\right) \) for each \(n\geq 1\). Then, as \(n\to\infty\), the sequence \((X_n)_{n\geq 1}\) converges in law to a random variable \(X\) following the \(I\gamma\left( k+\frac{1}{2},\frac{b}{2}\right) \) distribution.

Theorem 1. Under the assumptions and notations of Proposition 1, we have:

\begin{equation} d_{TV}(X_n,X)\leq \dfrac{1}{\sqrt{n}}\times\sqrt{b}. \end{equation}
(4)

Remark 2. The upper bound provided by Theorem 1 is of order \(n^{-1/2}\).

Tables 1 and 2 present some numerical results for \(k = 0\). This case is particularly interesting since it corresponds to the inverse Gaussian distribution used in data analysis when the observations are highly right-skewed [6,7]. The inverse Gaussian law is the distribution of the first hitting time for a Brownian motion [8].

Table 1. Numerical values for \(b=0.1\) and \(k=0\).
\(n\) \(d_{TV}(X_n,X)\) \(\dfrac{1}{\sqrt{n}}\times\sqrt{b}\)
\(1000\) 0.008963786 0.01
\(10000\) 0.002983103 0.003162278
\(100000\) 0.0004934534 0.001
\(1000000\) 0.0001549545 0.0003162278
\(10000000\) 4.948836\(\times\) 10\(^{-5}\) 0.0001
\(100000000\) 1.570466\(\times\) 10\(^{-5}\) 3.162278\(\times\) 10\(^{-5}\)
Table 2. Numerical values for \(b=1\) and \(k=0\).
\(n\) \(d_{TV}(X_n,X)\) \(\dfrac{1}{\sqrt{n}}\times\sqrt{b}\)
\(1000\) 0.02614564 0.03162278
\(10000\) 0.008963782 0.01
\(100000\) 0.002971153 0.003162278
\(1000000\) 0.0004843202 0.001
\(10000000\) 0.0001553049 0.0003162278
\(100000000\) 4.927859\(\times\) 10\(^{-5}\) 0.0001
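Values of the kind reported in Tables 1 and 2 can be reproduced, up to numerical integration error, along the following lines. This is only a sketch based on the crossing point \(\lambda_n=2n\ln(\beta_n/\beta)\) identified in the proof of Theorem 1; the original computations may have been carried out differently, and all helper names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

def gig_pdf(x, p, a, b):
    return (a / b) ** (p / 2) / (2 * kv(p, np.sqrt(a * b))) \
        * x ** (p - 1) * np.exp(-0.5 * (a * x + b / x))

def inv_gamma_pdf(x, theta, lam):
    return lam ** theta / gamma(theta) * x ** (-theta - 1) * np.exp(-lam / x)

b, k = 0.1, 0
for n in (10 ** 3, 10 ** 4, 10 ** 5):
    beta_n = np.sqrt(b * n) ** (k + 0.5) / (2 * kv(-k - 0.5, np.sqrt(b / n)))
    beta = b ** (k + 0.5) / (2 ** (k + 0.5) * gamma(k + 0.5))
    lam_n = 2 * n * np.log(beta_n / beta)      # crossing point of the two densities
    # Property 1: integrate the (positive) density difference up to lambda_n
    d_tv = quad(lambda x: gig_pdf(x, -k - 0.5, 1 / n, b)
                - inv_gamma_pdf(x, k + 0.5, b / 2), 0, lam_n)[0]
    print(n, d_tv, np.sqrt(b / n))             # d_TV versus the bound of Theorem 1
```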

2.2. On the rate of convergence of the generalized inverse Gaussian distribution to the gamma distribution

Theorem 2. For \(p>0\), \(a>0\), let \((Y_n)_{n\geq 1}\) be a sequence of random variables such that \(Y_n\sim GIG\left( p,a,\dfrac{1}{n}\right)\) for each \(n\geq 1\). As \(n\to \infty\), the sequence \((Y_n)\) converges in distribution to a random variable \(\Lambda\) following the \(\gamma\left( p,\frac{a}{2}\right) \) distribution. Moreover,

\begin{equation} d_{TV}(Y_n,\Lambda)\leq \dfrac{1}{\sqrt{n}}\times\dfrac{\sqrt{a}K_{p-1}\left( \sqrt{\frac{a}{n}}\right) }{2pK_{p}\left( \sqrt{\frac{a}{n}}\right)}+\frac{1}{n^{p+1}}\times\left( \frac{1}{\ln (\alpha_n/\alpha)}\right)^p\times\frac{a\alpha}{2^{p+2}p^2(1+p)} \end{equation}
(5)
where \(\alpha_n=\dfrac{(an)^{p/2}}{2K_{p}\left( \sqrt{\frac{a}{n}}\right)}\) and \(\alpha=\dfrac{(a/2)^p}{\Gamma(p)}.\)

Corollary 1. The upper bound provided by Theorem 2 is of order \(n^{-1/2}\) for \(p=\dfrac{1}{2}\) and of order \(n^{-1}\) for all \(p\) of the form \(p= k+\frac{1}{2}\), \(k\geq 1\), \(k\) integer.

Remark 3. In [1], by Stein’s method, the authors established an explicit upper bound of \(\left| h(Y_n)-h(\Lambda)\right|\) for a regular function \(h\) in \(C_3^b\), the class of bounded functions \(h: R_+ \to R\) for which \(h'\), \(h''\), \(h^{(3)}\) exist and are bounded. For \(p= k+\frac{1}{2}\), \(k\geq 1\), \(k\) integer, the upper bound provided in [1] by Stein’s method is of order \(n^{-1}\) (Proposition 3.3 in [1]); our bound is of the same order. In addition, our upper bound is quite simple when compared to the one obtained in [1] by Stein’s method (Theorem 3.1), and sharper than the one obtained in Proposition 3.4 of [1].
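The right-hand side of (5) is straightforward to evaluate numerically; a minimal sketch (the helper name theorem2_bound is ours).

```python
import numpy as np
from scipy.special import gamma, kv

def theorem2_bound(p, a, n):
    """Evaluate the upper bound (5) for d_TV(Y_n, Lambda)."""
    z = np.sqrt(a / n)
    alpha_n = (a * n) ** (p / 2) / (2 * kv(p, z))
    alpha = (a / 2) ** p / gamma(p)
    term1 = np.sqrt(a) * kv(p - 1, z) / (2 * p * np.sqrt(n) * kv(p, z))
    term2 = (a * alpha / (2 ** (p + 2) * p ** 2 * (1 + p))) \
        / (n ** (p + 1) * np.log(alpha_n / alpha) ** p)
    return term1 + term2

for n in (10 ** 3, 10 ** 6):
    print(n, theorem2_bound(0.5, 1.0, n))  # order n**(-1/2) when p = 1/2
```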

2.3. On the rate of convergence of the Kummer distribution to the gamma distribution

As in the previous subsection, the following theorem contains the rate of convergence in total variation of the Kummer distribution to the gamma distribution.

Theorem 3. Let \((V_n)_{n\ge 1}\) be a sequence of random variables such that \(V_n \sim K\left( a,-a +\frac{1}{n},c\right) \) with \(a>0\), \(c>0\). Then,

  • 1. As \(n\to \infty\), the sequence \((V_n)\) converges in distribution to a random variable \(\Lambda\) following the \(\gamma(a,c)\) distribution.
  • 2.
    \begin{equation} \label{dvt Kummer} d_{TV}(V_n,\Lambda)\leq \dfrac{\delta}{na} \dfrac{1}{\left(a-\frac{1}{n}\right)}(\delta_n/\delta)^{an} \end{equation}
    (6)
    where \(\delta_n=\frac{ 1 }{\Gamma (a)\psi \left( a, 1+a-\frac{1}{n};c\right) }\) and \(\delta=\frac{ c^a }{\Gamma (a)}.\)
Tables 3 and 4 present numerical results for fixed values of \(a\) and \(c\). The upper bound is \(\dfrac{\delta}{na} \dfrac{1}{\left(a-\frac{1}{n}\right)}(\delta_n/\delta)^{an}\).
Table 3. Numerical results for \(a=c=1\).
\(n\) \(d_{TV}(V_n,\Lambda)\) Upper bound
\(1000\) 0.0001721703 0.001817133
\(10000\) 1.721839\(\times 10^{-5}\) 1.815646 \(\times 10^{-4}\)
\(100000\) 1.721869\(\times 10^{-6}\) 1.815546\(\times 10^{-5}\)
\(1000000\) 1.722037\(\times 10^{-7}\) 1.816018\(\times 10^{-6}\)
\(10000000\) 1.723704\(\times 10^{-8}\) 1.820897\(\times 10^{-7}\)
\(100000000\) 1.740368\(\times\) 10\(^{-9}\) 1.870423 \(\times 10^{-8}\)
Table 4. Numerical results for \(a=1.5\) and \(c=3\).
\(n\) \(d_{TV}(V_n,\Lambda)\) Upper bound
\(1000\) 0.0001045401 0.005830092
\(10000\) 1.045445\(\times 10^{-5}\) 5.828016 \(\times 10^{-4}\)
\(100000\) 1.045512\(\times 10^{-6}\) 5.82978\(\times 10^{-5}\)
\(1000000\) 1.046143\(\times 10^{-7}\) 5.849711\(\times 10^{-6}\)
\(10000000\) 1.052453\(\times 10^{-8}\) 6.053044\(\times 10^{-7}\)
\(100000000\) 1.360213\(\times\) 10\(^{-9}\) 8.518632 \(\times 10^{-8}\)
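Values of the kind reported in Tables 3 and 4 can be reproduced numerically as well; a sketch using the crossing point \(\theta_n=(\delta_n/\delta)^n-1\) from the proof of Theorem 3 (helper names are ours, and the original computations may have been carried out differently).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu

def kummer_gamma_check(a, c, n):
    """d_TV(V_n, Lambda) and the bound (6), with V_n ~ K(a, -a + 1/n, c)."""
    delta_n = 1 / (gamma(a) * hyperu(a, 1 + a - 1 / n, c))
    delta = c ** a / gamma(a)
    theta_n = (delta_n / delta) ** n - 1          # crossing point of the two densities
    diff = lambda x: (delta_n * x ** (a - 1) * (1 + x) ** (-1 / n) * np.exp(-c * x)
                      - delta * x ** (a - 1) * np.exp(-c * x))
    d_tv = quad(diff, 0, theta_n)[0]              # Property 1
    bound = delta / (n * a * (a - 1 / n)) * (delta_n / delta) ** (a * n)
    return d_tv, bound

print(kummer_gamma_check(1.0, 1.0, 10 ** 3))      # compare with Table 3
```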

2.4. On the rate of convergence of the Kummer distribution to the beta distribution

We have the following result.

Theorem 4. Let \((W_n)_{n\ge 1}\) be a sequence of random variables such that \(W_n \sim K\left( a,b,\frac{1}{n}\right) \) with \(a>0\), \(b>0\). Then,

  • 1. As \(n\to \infty\), \((W_n)\) converges in law to a random variable \(W\) following the \(\beta^{(2)}(a,b)\) distribution.
  • 2.
    \begin{equation} \label{dvt Kummerb} d_{TV}(W_n,W)\leq \dfrac{1}{n}\times \dfrac{\varphi_n\Gamma(a)\Gamma(b)}{(a+b)\Gamma (a+b)} + \dfrac{(a+b+1)\varphi_n\Gamma(a)\Gamma(b)}{(a+b)\Gamma (a+b)}\ln (\varphi_n/\varphi) \end{equation}
    (7)
    where \(\varphi_n=\frac{ 1 }{\Gamma (a)\psi \left( a, 1-b;\dfrac{1}{n}\right) }\) and \(\varphi=\frac{ \Gamma(a+b) }{\Gamma (a)\Gamma(b)}.\)

Remark 4. As \(n\to\infty\), \(\varphi_n\to\varphi\). Therefore, the upper bound provided in (7) is of order \(n^{-1}\).
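As for (5) and (6), the right-hand side of (7) is easy to evaluate numerically; a minimal sketch (the helper name theorem4_bound is ours).

```python
import numpy as np
from scipy.special import gamma, hyperu

def theorem4_bound(a, b, n):
    """Evaluate the upper bound (7) for d_TV(W_n, W)."""
    phi_n = 1 / (gamma(a) * hyperu(a, 1 - b, 1 / n))
    phi = gamma(a + b) / (gamma(a) * gamma(b))
    base = phi_n * gamma(a) * gamma(b) / ((a + b) * gamma(a + b))
    return base / n + (a + b + 1) * base * np.log(phi_n / phi)

for n in (10 ** 3, 10 ** 6):
    print(n, theorem4_bound(2.0, 3.0, n))  # decreases roughly like 1/n
```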

3. Proofs of main results

Proof of Proposition 1. For all \(x>0\), \[ \mathbb{P}\left( X_n < x\right) = \dfrac{(\sqrt{bn})^{k+\frac{1}{2}}}{2K_{-k-\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}\int_{0}^{x}t^{-k-\frac{3}{2}}e^{-\frac{1}{2}\left( \frac{1}{n}t+b/t\right)}dt. \] We now use the well-known fact (see for instance [9,10]) that, as \(x\to 0\),

\begin{equation} \label{appoxi Kp} K_p(x)\sim \begin{cases} 2^{|p|-1}\Gamma(|p|)x^{-|p|}, \quad p\ne 0\\ -\log x ,\quad p=0 \end{cases} \end{equation}
(8)
to see that \[\lim\limits_{n\to\infty}\dfrac{(\sqrt{bn})^{k+\frac{1}{2}}}{2K_{-k-\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}=\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }.\] For every integer \(n\geq 1\), \(t^{-k-\frac{3}{2}}e^{-\frac{1}{2}\left( \frac{1}{n}t+b/t\right)}\leq t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}\), and the function \(t\mapsto t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}\) is integrable on \((0, \infty)\). By Lebesgue’s dominated convergence theorem, \(\lim\limits_{n\to\infty}\displaystyle\int_{0}^{x}t^{-k-\frac{3}{2}}e^{-\frac{1}{2}\left( \frac{1}{n}t+b/t\right)}dt=\displaystyle\int_{0}^{x} t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}dt.\) Hence \[\lim\limits_{n\to\infty}\mathbb{P}\left( X_n< x\right)=\displaystyle\int_{0}^{x}\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }t^{-k-\frac{3}{2}} e^{- \frac{b}{2t} }dt,\] which is the distribution function of the \(I\gamma\left( k+\frac{1}{2},\frac{b}{2}\right) \) distribution evaluated at \(x\).

Proof of Theorem 1. Let \(g_n\) and \(g\) be the densities of \(X_n\sim GIG\left( -k-\frac{1}{2},\dfrac{1}{n},b\right) \) and \(X\sim I\gamma\left( k+\frac{1}{2},\frac{b}{2}\right) \) respectively. Let \(\beta_n=\dfrac{(\sqrt{bn})^{k+\frac{1}{2}}}{2K_{-k-\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}\) and \(\beta=\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }.\) We have \(g_n(x)=\beta_n x^{-k-\frac{3}{2}}e^{-\frac{1}{2}\left( \frac{1}{n}x+b/x\right)}\) and \(g(x)=\beta x^{-k-\frac{3}{2}} e^{- \frac{b}{2x}}.\) It follows that \(g_n(x)-g(x)=\left( \beta_n e^{-\frac{1}{2n}x} -\beta \right)x^{-k-\frac{3}{2}} e^{- \frac{b}{2x}}.\) Now, let \(v_n(x)= \beta_n e^{-\frac{1}{2n}x} -\beta \); then \(v_n\) is decreasing on \((0,+\infty)\) with \( \lim\limits_{x\to 0^+}v_n(x)=\beta_n-\beta\) and \(\lim\limits_{x\to +\infty}v_n(x)=-\beta< 0.\) Also, \[ \begin{split} \beta_n-\beta&=\dfrac{(\sqrt{bn})^{k+\frac{1}{2}}}{2K_{-k-\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}-\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }\\ &=\dfrac{(\sqrt{bn})^{k+\frac{1}{2}}}{2K_{k+\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}-\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }\\ &=\frac{ 1 }{2K_{k+\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}\left[ \left( \sqrt{bn}\right) ^{k+\frac{1}{2}}-\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }2K_{k+\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right) \right]\\ &=\frac{ 1 }{2K_{k+\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}\left[ \left( \sqrt{bn}\right) ^{k+\frac{1}{2}}-\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }\int_{0}^{+\infty} x^{k-\frac{1}{2}} e^{-\frac{1}{2}\sqrt{\frac{b}{n}}\left( x+\frac{1}{x}\right) }dx \right] \\ &>\frac{ 1 }{2K_{k+\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}\left[ \left( \sqrt{bn}\right) ^{k+\frac{1}{2}}-\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }\int_{0}^{+\infty} x^{k-\frac{1}{2}} e^{-\frac{1}{2}\sqrt{\frac{b}{n}}x }dx \right]\\ &=\frac{ 1 }{2K_{k+\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}\left[ \left( \sqrt{bn}\right) ^{k+\frac{1}{2}}-\frac{ b^{k+\frac{1}{2}}}{2^{k+\frac{1}{2}} \Gamma \left( k+\frac{1}{2}\right) }\left( 2\sqrt{\frac{n}{b}}\right) ^{k+\frac{1}{2}}\int_{0}^{+\infty} t^{k-\frac{1}{2}} e^{-t }dt \right]=0. \end{split} \] Then \(v_n\) has a unique zero \(\lambda_n=2n\ln (\beta_n/\beta)\) on \((0,\infty)\). Hence \(g_n(x)-g(x)> 0\) if \(x< \lambda_n\) and \(g_n(x)-g(x)< 0\) if \(x>\lambda_n\). Using Property 1, we have: \[ d_{TV}(X_n,X)= \displaystyle\int_{0}^{\lambda_n} g_n(x)-g(x) dx. 
\] Then, integrating \(\displaystyle\int_{0}^{\lambda_n} g_n(x) dx\) by parts, we get: \[ \begin{split} d_{TV}(X_n,X)&=\left[\beta_n e^{-\frac{1}{2n}x}\displaystyle\int_{0}^{x}t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}dt\right]_0^{\lambda_n}+\frac{\beta_n}{2n} \displaystyle\int_{0}^{\lambda_n}e^{-\frac{1}{2n}x}\displaystyle\int_{0}^{x}t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}dtdx- \beta\displaystyle\int_{0}^{\lambda_n}x^{-k-\frac{3}{2}}e^{-\frac{b}{2x}}dx\\ &=\beta_n e^{-\frac{1}{2n}\lambda_n}\displaystyle\int_{0}^{\lambda_n}t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}dt+\frac{\beta_n}{2n} \displaystyle\int_{0}^{\lambda_n}e^{-\frac{1}{2n}x}\displaystyle\int_{0}^{x}t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}dtdx- \beta\displaystyle\int_{0}^{\lambda_n}x^{-k-\frac{3}{2}}e^{-\frac{b}{2x}}dx\\ &=\beta\displaystyle\int_{0}^{\lambda_n}t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}dt+\frac{\beta_n}{2n} \displaystyle\int_{0}^{\lambda_n}e^{-\frac{1}{2n}x}\displaystyle\int_{0}^{x}t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}dtdx- \beta\displaystyle\int_{0}^{\lambda_n}x^{-k-\frac{3}{2}}e^{-\frac{b}{2x}}dx\\ &=\frac{\beta_n}{2n} \displaystyle\int_{0}^{\lambda_n}e^{-\frac{1}{2n}x}\displaystyle\int_{0}^{x}t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}dtdx. \end{split} \] Since \(x\mapsto e^{-\frac{1}{2n}x}\) is decreasing and positive on \((0,\infty)\), we have \(e^{-\frac{1}{2n}x}\leq e^{-\frac{1}{2n}t}\) for all \(0< t\leq x\); hence \[ \begin{split} d_{TV}(X_n,X)&\leq \frac{\beta_n}{2n} \displaystyle\int_{0}^{\lambda_n}\displaystyle\int_{0}^{x}t^{-k-\frac{3}{2}}e^{-\frac{b}{2t}}e^{-\frac{1}{2n}t}dtdx\\ &=\frac{1}{2n} \displaystyle\int_{0}^{\lambda_n}\displaystyle\int_{0}^{x}\beta_nt^{-k-\frac{3}{2}}e^{-\frac{1}{2}\left( \frac{1}{n}t+b/t\right) }dtdx\\ &\leq \frac{1}{2n} \displaystyle\int_{0}^{\lambda_n}dx\\ &=\frac{1}{2n}\lambda_n\\ &=\ln(\beta_n/\beta). 
\end{split} \] So \[K_{-1/2}\left( \sqrt{\frac{b}{n}}\right) =\sqrt{\frac{\pi}{2\sqrt{\frac{b}{n}}}}e^{-\sqrt{\frac{b}{n}}}\implies\ln( \beta_n/\beta)=\ln\left( e^{\sqrt{\frac{b}{n}}}\right) =\dfrac{1}{\sqrt{n}}\times\sqrt{b}\ \ \text{for}\ \ k=0,\] and \[K_{-3/2}\left( \sqrt{\frac{b}{n}}\right) =\sqrt{\frac{\pi}{2\sqrt{\frac{b}{n}}}}e^{-\sqrt{\frac{b}{n}}}\left( 1+\frac{\sqrt{n}}{\sqrt{b}}\right) \implies\ln( \beta_n/\beta)=\ln\left( \dfrac{e^{\sqrt{\frac{b}{n}}}}{1+\sqrt{\dfrac{b}{n}}}\right) \leq \dfrac{1}{\sqrt{n}}\times\sqrt{b}\ \ \text{for}\ \ k=1.\] For \(k\geq 2\), since \(K_{-k-\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right) =\sqrt{\frac{\pi}{2\sqrt{\frac{b}{n}}}}e^{-\sqrt{\frac{b}{n}}}\left( 1+\displaystyle\sum_{i=1}^{k}\dfrac{(k+i)!}{i!(k-i)!}\left( 2\sqrt{\frac{b}{n}}\right) ^{-i}\right)\) and \(\Gamma\left( k+\frac{1}{2}\right) =\dfrac{(2k)!\sqrt{\pi}}{2^{2k}k!} ,\) so, we have \[ \begin{split} \beta_n/\beta &=\dfrac{\Gamma\left( k+\frac{1}{2}\right)}{\left( \sqrt{\frac{b}{n}}\right) ^{k+\frac{1}{2}} 2^{\frac{1}{2}-k}K_{-k-\frac{1}{2}}\left( \sqrt{\frac{b}{n}}\right)}\\ &=\dfrac{(2k)!e^{\sqrt{\frac{b}{n}}}}{k!2^{k}\left( \sqrt{\frac{b}{n}}\right) ^{k} \left( 1+\displaystyle\sum_{i=1}^{k}\dfrac{(k+i)!}{i!(k-i)!}\left( 2\sqrt{\frac{b}{n}}\right) ^{-i}\right)}\end{split} \] \[ \begin{split} &=\dfrac{(2k)!e^{\sqrt{\frac{b}{n}}}}{k!2^{k}\left( \sqrt{\frac{b}{n}}\right) ^{k} \left( 1+\displaystyle\sum_{i=1}^{k-1}\dfrac{(k+i)!}{i!(k-i)!}\left( 2\sqrt{\frac{b}{n}}\right) ^{-i}+\frac{(2k)!}{k!}2^{-k}\left( \sqrt{\frac{b}{n}}\right) ^{-k}\right)}\\ &=\dfrac{(2k)!e^{\sqrt{\frac{b}{n}}}}{k!2^{k}\left( \sqrt{\frac{b}{n}}\right) ^{k} \left( 1+\displaystyle\sum_{i=1}^{k-1}\dfrac{(k+i)!}{i!(k-i)!}\left( 2\sqrt{\frac{b}{n}}\right) ^{-i}\right)+(2k)!}\\ &=\dfrac{e^{\sqrt{\frac{b}{n}}}}{1+\frac{k!2^{k}}{(2k)!} \left( \left( \sqrt{\frac{b}{n}}\right) ^{k}+ \left( \sqrt{\frac{b}{n}}\right) ^{k} \times \displaystyle\sum_{i=1}^{k-1}\dfrac{(k+i)!}{i!(k-i)!}\left( 2\sqrt{\frac{b}{n}}\right) ^{-i}\right)}. \end{split} \] Therefore, for \(k\geq 2\), we have \[\ln(\beta_n/\beta)=\ln\left( \dfrac{e^{\sqrt{\frac{b}{n}}}}{1+\frac{k!2^{k}}{(2k)!} \left( \left( \sqrt{\frac{b}{n}}\right) ^{k}+ \left( \sqrt{\frac{b}{n}}\right) ^{k} \times \displaystyle\sum_{i=1}^{k-1}\dfrac{(k+i)!}{i!(k-i)!}\left( 2\sqrt{\frac{b}{n}}\right) ^{-i}\right)}\right)\leq \dfrac{1}{\sqrt{n}}\times\sqrt{b}. \]
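The closed-form expressions for \(K_{k+\frac{1}{2}}\) used above can be checked numerically against scipy's kv; the following small sketch is only such a sanity check.

```python
import math
from scipy.special import kv

def k_half_integer(k, z):
    """Closed form of K_{k + 1/2}(z) used in the proof of Theorem 1."""
    s = sum(math.factorial(k + i) / (math.factorial(i) * math.factorial(k - i))
            * (2 * z) ** (-i) for i in range(k + 1))
    return math.sqrt(math.pi / (2 * z)) * math.exp(-z) * s

for k in (0, 1, 2, 3):
    print(k, k_half_integer(k, 0.01), kv(k + 0.5, 0.01))  # both columns should agree
```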

Proof of Theorem 2. Let \(\alpha_n=\dfrac{(an)^{p/2}}{2K_{p}\left( \sqrt{\frac{a}{n}}\right)}\) and \(\alpha=\frac{ (a/2)^p }{\Gamma (p)}.\) Denote by \(h_n\) (resp. \(\gamma\)) the density of \(Y_n\sim GIG\left( p,a,\dfrac{1}{n}\right)\) (resp. \(\Lambda\sim \gamma (p,a/2)\)). We have \(h_n(x)=\alpha_n x^{p-1}e^{-\frac{1}{2}\left( ax+\frac{1}{nx}\right)}\) and \(\gamma(x)=\alpha x^{p-1} e^{- \frac{a}{2} x}.\) It follows that \( h_n(x)-\gamma(x)= \left( \alpha_n e^{-\frac{1}{2nx}} -\alpha \right)x^{p-1} e^{- \frac{a}{2} x} \), which is negative for \(x< r_n\) and positive for \(x>r_n\), where \(r_n=\dfrac{1}{2n\ln \left( \dfrac{\alpha_n}{\alpha}\right) }\) (note that \(\alpha_n>\alpha\), by the same comparison of \(K_p\) with a gamma integral as in the proof of Theorem 1). Hence, by Property 1 and an integration by parts as in the proof of Theorem 1, \[d_{TV}(Y_n,\Lambda)= \displaystyle\int_{0}^{r_n}\left( \gamma(x)-h_n(x)\right) dx=\frac{\alpha_n}{2n}\int_{0}^{r_n}\dfrac{1}{x^2}e^{-\frac{1}{2nx}}\int_{0}^{x}t^{p-1}e^{-\frac{a}{2}t}dtdx.\] Integration by parts of \(\displaystyle\int_{0}^{x}t^{p-1}e^{-\frac{a}{2}t}dt\) leads to \[d_{TV}(Y_n,\Lambda)\leq \frac{\alpha_n}{2np}\int_{0}^{r_n}x^{p-2}e^{-\frac{1}{2}\left( ax+\frac{1}{nx}\right)}dx +\frac{\alpha_na}{4np(1+p)}\int_{0}^{r_n}x^{p-1}e^{-\frac{1}{2nx}}dx=A_n+B_n, \] where \[\begin{split} A_n&=\frac{\alpha_n}{2np}\int_{0}^{r_n}x^{p-2}e^{-\frac{1}{2}\left( ax+\frac{1}{nx}\right)}dx= \frac{1}{2np}\dfrac{(an)^{p/2}}{2K_{p}\left( \sqrt{\frac{a}{n}}\right)}\times \dfrac{2K_{p-1}\left( \sqrt{\frac{a}{n}}\right)}{(an)^{\frac{p-1}{2}}} \int_{0}^{r_n}\dfrac{(an)^{\frac{p-1}{2}}}{2K_{p-1}\left( \sqrt{\frac{a}{n}}\right)}x^{(p-1)-1}e^{-\frac{1}{2}\left( ax+\frac{1}{nx}\right)}dx\\ &\leq \frac{1}{2np}\dfrac{(an)^{p/2}}{2K_{p}\left( \sqrt{\frac{a}{n}}\right)}\times \dfrac{2K_{p-1}\left( \sqrt{\frac{a}{n}}\right)}{(an)^{\frac{p-1}{2}}}=\frac{\sqrt{a}K_{p-1}\left( \sqrt{\frac{a}{n}}\right)}{2\sqrt{n}pK_{p}\left( \sqrt{\frac{a}{n}}\right)}, \end{split} \] since the last integral is that of the \(GIG\left( p-1,a,\frac{1}{n}\right)\) density over \((0,r_n)\), hence at most \(1\), and \[ B_n=\dfrac{\alpha_na}{4np(1+p)}\int_{0}^{r_n}x^{p-1}e^{-\frac{1}{2nx}}dx \leq \dfrac{\alpha_na}{4np^2(1+p)}r_n^{p}e^{-\frac{1}{2nr_n}} = \dfrac{\alpha a}{2^{p+2} p^2(1+p)n^{p+1}}\dfrac{1}{\left( \ln (\alpha_n/\alpha)\right) ^p}. \]

Proof of Corollary 1. By equivalence (8), as \(n\to +\infty\), we have \begin{equation*} \dfrac{1}{\sqrt{n}}\times\dfrac{\sqrt{a}K_{p-1}\left( \sqrt{\frac{a}{n}}\right) }{2pK_{p}\left( \sqrt{\frac{a}{n}}\right)}\sim \begin{cases} \dfrac{1}{n}\times \dfrac{a}{4p(p-1)}& \text{if} \ \ p>1,\\[4mm] \dfrac{1}{n^p}\times \dfrac{a^p\Gamma (1-p)}{2^{2p}\,p\,\Gamma (p)}& \text{if}\ \ 0< p< 1,\\[4mm] \dfrac{a\log (n)}{4n}- \dfrac{a\log (a)}{4n} & \text{if} \ \ p=1. \end{cases} \end{equation*} Since \(K_{1/2}\left( \sqrt{\frac{a}{n}}\right) =\sqrt{\frac{\pi}{2\sqrt{\frac{a}{n}}}}e^{-\sqrt{\frac{a}{n}}}\), we have \[\frac{1}{n^{3/2}}\times\left( \frac{1}{\ln (\alpha_n/\alpha)}\right)^{1/2} \ \ \substack{\sim \\ n\to \infty} \ \ \dfrac{1}{n^{5/4}}\times\dfrac{1}{a^{1/4}}.\] For \(p=\frac{3}{2}\), \(\ln( \alpha_n/\alpha)=\ln\left( \dfrac{e^{\sqrt{\frac{a}{n}}}}{1+\sqrt{\dfrac{a}{n}}}\right)=\ln\left( \frac{e^X}{1+X}\right) \) where \(X=\sqrt{\dfrac{a}{n}}\to 0\) as \(n\to \infty\). We have \[ \frac{e^X}{1+X}=\frac{1+X+\frac{X^2}{2}+o\left( \frac{X^2}{2}\right)}{1+X}= 1+\frac{X^2}{2}+o\left( \frac{X^2}{2}\right)=1+\frac{a}{2n}+o\left( \frac{1}{n}\right).\] Hence \[\frac{1}{n^{5/2}}\times\left( \frac{1}{\ln (\alpha_n/\alpha)}\right)^{3/2} \ \ \substack{\sim \\ n\to \infty} \ \ \dfrac{1}{n}\times\left( \dfrac{2}{a}\right) ^{3/2}.\] For all \(p=k+1/2\), \(k\geq 2\), \(k\) integer, we have \[\begin{split} \left( \frac{1}{\ln (\alpha_n/\alpha)}\right)^p=\dfrac{1}{\left[ \ln\left( \dfrac{e^{\sqrt{\frac{a}{n}}}}{1+\frac{k!2^{k}}{(2k)!} \left( \left( \sqrt{\frac{a}{n}}\right) ^{k}+ \left( \sqrt{\frac{a}{n}}\right) ^{k} \times \displaystyle\sum_{i=1}^{k-1}\dfrac{(k+i)!}{i!(k-i)!}\left( 2\sqrt{\frac{a}{n}}\right) ^{-i}\right)}\right)\right] ^{k+1/2}}. \end{split} \] Let \(X=\sqrt{\frac{a}{n}}\) and \(D_k=1+\frac{k!2^{k}}{(2k)!} \left( \left( \sqrt{\frac{a}{n}}\right) ^{k}+ \left( \sqrt{\frac{a}{n}}\right) ^{k} \times \displaystyle\sum_{i=1}^{k-1}\dfrac{(k+i)!}{i!(k-i)!}\left( 2\sqrt{\frac{a}{n}}\right) ^{-i}\right).\) For \(k=2\), we have \(D_2=1+\frac{1}{3}(X^2+3X)=1+X+\frac{1}{3}X^2.\) By induction on \(k\), \(D_k\) can be written in the form \[ D_k=1+X+\dfrac{k-1}{2k-1}X^2 +c_3X^3+\cdots+c_kX^k, \quad c_3,\cdots,c_k \in R. \] Since \(X\to0\) as \(n\to\infty\), we have \(e^{\sqrt{\frac{a}{n}}}=e^X=1+X+\dfrac{X^2}{2!}+\cdots+ \dfrac{X^{k+1}}{(k+1)!}+o\left( X^{k+1}\right) ,\) and, by doing the Euclidean division as in the case \(p=\frac{3}{2}\) (\(k=1\)), there exist constants \(b_3,\cdots, b_{k+1}\) such that \[ \begin{split} \dfrac{e^X}{D_k}&=1+\dfrac{1}{2(2k-1)}X^2 +b_3X^3+\cdots+b_kX^k+b_{k+1}X^{k+1} +o\left( X^{k+1}\right) \\ &=1+b_2\frac{a}{n} +b_3\left( \frac{a}{n}\right) ^{3/2}+\cdots+b_k\left( \frac{a}{n}\right) ^{k/2}+b_{k+1}\left( \frac{a}{n}\right) ^{\frac{k+1}{2}} +o\left( \frac{1}{n^{\frac{k+1}{2}}}\right), \end{split} \] where \(b_2=\dfrac{1}{2(2k-1)}\ne 0.\) Hence \[\frac{1}{n^{k+3/2}} \frac{1}{\left[\ln (\alpha_n/\alpha)\right]^{k+1/2}}\quad\substack{\sim \\ n\to\infty}\quad \dfrac{1}{n\left[b_2a +b_3a^{3/2}\times \frac{1}{n^{1/2}} +\cdots+b_{k+1}a^{\frac{k+1}{2}}\times \frac{1}{n^{\frac{k-1}{2}}} \right] ^{k+1/2}}. \]
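The value \(b_2=\frac{1}{2(2k-1)}\) in the expansion of \(e^X/D_k\) can be verified symbolically for any fixed \(k\); a short sketch with sympy.

```python
import sympy as sp

X = sp.symbols('X', positive=True)
k = 3  # any fixed integer k >= 2

# D_k as written in the proof of Corollary 1
D_k = 1 + sp.factorial(k) * 2 ** k / sp.factorial(2 * k) * (
    X ** k + X ** k * sum(sp.factorial(k + i) / (sp.factorial(i) * sp.factorial(k - i))
                          * (2 * X) ** (-i) for i in range(1, k)))

series = sp.expand(sp.series(sp.exp(X) / D_k, X, 0, 3).removeO())
print(series)                                                  # 1 + X**2/10 for k = 3
print(series.coeff(X, 2) == sp.Rational(1, 2 * (2 * k - 1)))   # True
```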

Proof of Theorem 3. Let \(\theta_n=\left( \delta_n/\delta\right) ^n-1\), with \(\delta_n=\frac{ 1 }{\Gamma (a)\psi \left( a, 1+a-\frac{1}{n};c\right) }\) and \(\delta=\frac{ c^a }{\Gamma (a)}.\) As in the GIG case, we have \[\begin{split} d_{TV}(V_n,\Lambda)&=\dfrac{1}{2}\int_{0}^{\infty}\left|\delta_nx^{a-1}(1+x)^{-\frac{1}{n}}e^{-cx}-\delta x^{a-1}e^{-cx} \right|dx \\ &=\dfrac{\delta_n}{n}\int_{0}^{\theta_n}(1+x)^{-\frac{1}{n}-1}\int_{0}^{x}t^{a-1}e^{-ct}dtdx\\ & \leq \dfrac{\delta_n}{na}\int_{0}^{\theta_n}(1+x)^{-\frac{1}{n}-1}x^adx\\ & = \dfrac{\delta_n}{na}\int_{0}^{\theta_n}(1+x)^{a-\frac{1}{n}-1}\left( \dfrac{x}{1+x}\right) ^adx\\ & \leq \dfrac{\delta_n}{na}\int_{0}^{\theta_n}(1+x)^{a-\frac{1}{n}-1}dx\\ & = \dfrac{\delta_n}{na}\left( \dfrac{1}{a-\frac{1}{n}}(1+\theta_n)^{a-\frac{1}{n}}-\dfrac{1}{a-\frac{1}{n}}\right) \\ & \leq \dfrac{\delta_n}{na} \dfrac{1}{\left(a-\frac{1}{n}\right)}(\delta_n/\delta)^{an-1}\\ & = \dfrac{\delta}{na} \dfrac{1}{\left(a-\frac{1}{n}\right)}(\delta_n/\delta)^{an}. \end{split} \]

Proof of Theorem 4. Let \(\sigma_n=n\ln(\varphi_n/\varphi)\) with \(\varphi_n=\frac{ 1 }{\Gamma (a)\psi \left( a, 1-b;\frac{1}{n}\right) }\ \ \text{and} \ \ \varphi=\frac{ \Gamma(a+b) }{\Gamma (a)\Gamma(b)}.\) The difference of the densities of \(W_n\) and \(W\) is \(\left( \varphi_ne^{-\frac{1}{n}x}-\varphi\right) x^{a-1}(1+x)^{-a-b}\), which is positive for \(x<\sigma_n\) and negative for \(x>\sigma_n\). Then, by Property 1 and an integration by parts as in the proof of Theorem 1, \[ \begin{split} d_{TV}(W_n,W)&=\dfrac{1}{2}\int_{0}^{\infty}\left| \varphi_nx^{a-1}(1+x)^{-a-b}e^{-\frac{1}{n}x}-\varphi x^{a-1}(1+x)^{-a-b}\right| dx\\ &=\int_{0}^{\sigma_n}\left( \varphi_nx^{a-1}(1+x)^{-a-b}e^{-\frac{1}{n}x}-\varphi x^{a-1}(1+x)^{-a-b}\right) dx\\ &=\dfrac{\varphi_n}{n}\int_{0}^{\sigma_n} e^{-\frac{1}{n}x}\int_{0}^{x} t^{a-1}(1+t)^{-a-b} dt dx\\ &=\dfrac{\varphi_n}{n}\int_{0}^{\sigma_n} e^{-\frac{1}{n}x}\left( \frac{1}{a}x^a(1+x)^{-a-b}+\frac{a+b}{a}\int_{0}^{x} t^{a}(1+t)^{-a-b-1} dt\right) dx\\ &=\dfrac{\varphi_n}{na}\int_{0}^{\sigma_n} x^a(1+x)^{-a-b}e^{-\frac{1}{n}x}dx +\frac{(a+b)\varphi_n}{na} \int_{0}^{\sigma_n}e^{-\frac{1}{n}x}\int_{0}^{x} t^{a}(1+t)^{-a-b-1} dt dx\\ &=C_n+D_n, \end{split} \] where \[ \begin{split} C_n &=\dfrac{\varphi_n}{na}\int_{0}^{\sigma_n} x^a(1+x)^{-a-b}e^{-\frac{1}{n}x}dx=\dfrac{\varphi_n}{na}\int_{0}^{\sigma_n} x^a(1+x)^{-a-b-1}(1+x)e^{-\frac{1}{n}x}dx\\ &\leq \dfrac{\varphi_n\Gamma(a+1)\Gamma(b)}{na\Gamma (a+b+1)}(1+\sigma_n)\int_{0}^{\sigma_n}\dfrac{\Gamma (a+b+1)}{\Gamma(a+1)\Gamma(b)} x^{a}(1+x)^{-a-b-1}dx\\ &\leq \dfrac{\varphi_n\Gamma(a+1)\Gamma(b)}{na\Gamma (a+b+1)}(1+\sigma_n)\\ &=\dfrac{1}{n}\times \dfrac{\varphi_n\Gamma(a)\Gamma(b)}{(a+b)\Gamma (a+b)} + \dfrac{\varphi_n\Gamma(a)\Gamma(b)}{(a+b)\Gamma (a+b)}\ln (\varphi_n/\varphi), \end{split} \] and \[D_n=\frac{(a+b)\varphi_n}{na} \displaystyle\int_{0}^{\sigma_n}e^{-\frac{1}{n}x}\int_{0}^{x} t^{a}(1+t)^{-a-b-1} dt dx\leq \dfrac{\varphi_n\Gamma(a)\Gamma(b)}{\Gamma (a+b)}\ln (\varphi_n/\varphi).\]

Acknowledgments

The author is really grateful to the editor and the anonymous reviewers for their constructive comments. He would also like to thank Kokou Essiomle, Tchilabalo E. Patchali and Essodina Takouda for their help during the preparation of the manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References:

  1. Konzou, E., Koudou, E., & Gneyou, K. E. (2020). Rate of convergence of generalized inverse Gaussian and Kummer distributions to the gamma distribution via Stein’s method. Statistics and Probability Letters, 159, 108683.
  2. Hamza, M., & Vallois, P. (2016). On Kummer’s distribution of type two and a generalized beta distribution. Statistics and Probability Letters, 118, 60-69.
  3. Jørgensen, B. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Springer-Verlag, Heidelberg.
  4. Konzou, E., & Koudou, A. E. (2020). About the Stein equation for the generalized inverse Gaussian and Kummer distributions. ESAIM: Probability and Statistics, 24, 607-626.
  5. Piliszek, A., & Wesołowski, J. (2018). Change of measure technique in characterizations of the gamma and Kummer distributions. Journal of Mathematical Analysis and Applications, 458(2), 96-979.
  6. Chhikara, R. S., & Folks, J. L. (1989). The Inverse Gaussian Distribution: Theory, Methodology and Applications. Marcel Dekker, New York.
  7. Seshadri, V. (1999). The Inverse Gaussian Distribution: Statistical Theory and Applications. Lecture Notes in Statistics, 137. Springer-Verlag, New York.
  8. Bhattacharya, R. N., & Waymire, E. C. (1990). Stochastic Processes with Applications. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, New York.
  9. Gaunt, R. (2014). Inequalities for modified Bessel functions and their integrals. Journal of Mathematical Analysis and Applications, 420(1), 373-386.
  10. Olver, F. W. J., Lozier, D. W., Boisvert, R. F., & Clark, C. W. (2010). NIST Handbook of Mathematical Functions. Cambridge University Press.