Risk evaluation in information systems using continuous and discrete distribution laws

Author(s): Ajit Singh1, Amrita Prakash1
1Department of Computer Science, Patna Women’s College Bihar, India.
Copyright © Ajit Singh, Amrita Prakash. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The paper constructs continuous and discrete distribution laws used to assess risks in information systems. Generalized expressions for continuous distribution laws with maximum entropy are obtained. It is shown that, in the general case, the entropy also depends on the type of moments used to determine the numerical characteristics of the distribution law. A probabilistic model is also developed for analyzing a sequence of independent trials with three outcomes. Expressions for its basic numerical characteristics are obtained, as well as for calculating the probabilities of occurrence of the corresponding events.

Keywords: Information system, distribution, risk, random variable.

1. Introduction

At the present stage of the development of society, which is characterized by the intensive introduction of information systems into virtually all areas of activity, issues related to the assessment of the risks that arise during their operation are of particular importance. In the analysis and assessment of risks, questions related to the determination of distribution laws are of the greatest importance. The present work is devoted to the construction of such distribution laws.

In the modeling of information systems, risk is a random variable and is described by a probability distribution on a given set [1, 2, 3]. In contrast to experiments conducted in physics, which can be repeated many times, the operating conditions of information systems are subject to constant negative external influences and change continually [4]; consequently, repeating an experiment under the same conditions is practically impossible. The laws of probability distribution of risk events, as a rule, do not correspond to the normal (Gaussian) distribution [5, 6].

2. Construction of continuous distribution laws with the maximum entropy

The entropy coefficient is often used [7, 8] for classifying the distribution laws of a continuous random variable (RV) by their numerical characteristics:
\begin{align}\label{equ1} \delta_e = \frac{1}{2 \sigma} exp(H). \end{align}
(1)
In formula (1), \(\sigma = \sqrt{\mu_2}\) is the standard deviation, \(\mu_2\) is the second central power moment of the distribution law, and \(H\) is the entropy, defined as:
\begin{align}\label{equ2} H = -\int_{- \infty}^{\infty} p(x) ln(p(x)) dx \end{align}
(2)
where \(p(x)\) is the probability distribution density (PDD) of the RV. The entropy coefficient attains its maximum value for the Gaussian law, \(\delta_e = 2.066\); for the uniform law \(\delta_e = 1.73\), and for the Cauchy distribution \(\delta_e = 0\), etc.
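As a numerical illustration, the coefficient (1) can be evaluated for the Gaussian and uniform laws and compared with the values quoted above. The following is a minimal sketch in Python, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy import integrate

def entropy_coefficient(pdf, a, b):
    """Numerical evaluation of (1): delta_e = exp(H) / (2*sigma) for a density on (a, b)."""
    m1, _ = integrate.quad(lambda x: x * pdf(x), a, b)
    mu2, _ = integrate.quad(lambda x: (x - m1) ** 2 * pdf(x), a, b)
    H, _ = integrate.quad(lambda x: -pdf(x) * np.log(pdf(x)), a, b)
    return np.exp(H) / (2.0 * np.sqrt(mu2))

gauss = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # Gaussian with sigma = 1
print(entropy_coefficient(gauss, -12, 12))       # ~2.066
print(entropy_coefficient(lambda x: 1.0, 0, 1))  # uniform on (0, 1): ~1.732
```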
The entropy does not depend on the shift parameter, so to simplify the computations we set it equal to zero. First, among the one-sided distribution laws of an unbounded RV, let us find the law for which the entropy (2) reaches its maximum under the following constraints on the probability density \(p(x)\):
\begin{align}\label{equ3} p(x) \ge 0, \int_{0 }^{\infty} p(x) dx = 1, ~ \int_{0}^{\infty} x^\nu p(x) dx = \frac{\beta^\nu}{\nu}, \end{align}
(3)
where \(\beta\) is the scale parameter and \(\nu\) is the order of the highest existing initial direct moment. Here and below, in accordance with (3), a positive power moment is called a direct moment and a negative power moment a reverse moment. To find the extremum we use the method of undetermined Lagrange multipliers [9]. We maximize
\begin{align}\label{equ4} \int_{0}^{\infty} \bigg[ -p(x) ln (p(x)) + \lambda_1 p(x) + \lambda_2 x^\nu p(x) \bigg] dx \end{align}
(4)
with the Lagrange multipliers \(\lambda_1\) and \(\lambda_2\) introduced to account for the constraints (3). Setting the variation of the integrand in (4) with respect to \(p(x)\) equal to zero, we obtain an equation for \(p(x)\):
\begin{align}\label{equ5} -ln (p(x)) - 1 + \lambda_1 + \lambda_2 x^\nu = 0 \end{align}
(5)
Thus the density \(p(x)\) which satisfies (3) and maximizes \(H\) is found from equation (5):
\begin{align}\label{equ6} p(x) = exp (\lambda_1 -1 + \lambda_2 x^\nu). \end{align}
(6)
By substituting (6) in (3) and integrating, we have
\begin{align}\label{equ7} exp(\lambda_1 - 1) \frac{\Gamma (1/\nu)}{\nu(-\lambda_2)^{1/\nu}} = 1;\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, exp(\lambda_1 - 1) \frac{\Gamma (1/\nu)}{\nu^2(-\lambda_2)^{1+ 1/\nu}} = \frac{\beta^\nu}{\nu}. \end{align}
(7)
From (7) we find that \(\lambda_2 = \dfrac{-1}{\beta^\nu}\) and \(exp(\lambda_1 - 1) = \dfrac{\nu}{\beta \Gamma (\tfrac{1}{\nu})}\). Consequently,
\begin{align}\label{equ8} p(x) = \frac{\nu}{\beta \Gamma (1/\nu)} exp \biggl( \frac{-x^\nu}{\beta^\nu} \biggr) \end{align}
(8)
where \(\Gamma(z)\) is the gamma function. From (8) it follows that if only the first initial direct moment exists \((\nu = 1)\), then the exponential law has the maximum entropy; if two moments exist \((\nu = 2)\), the one-sided Gaussian law; and if all direct moments exist \((\nu \rightarrow \infty)\), the one-sided uniform law. Indeed, the limiting case of (8) as \(\nu \rightarrow \infty\) is the one-sided uniform law \(p(x) = \beta^{-1},~ 0 < x < \beta\). So, if all direct moments exist, then the uniform law has the maximum entropy among the one-sided distribution laws of an RV. Analogously, for two-sided symmetric distribution laws of an RV it can be shown that if the first \(\nu\) absolute central direct moments exist, then the probability density with maximum entropy is:
\begin{align}\label{equ9} p(x) = \frac{0.5 \nu}{\beta \Gamma (1/\nu)} exp\biggl( \frac{-|x|^\nu}{\beta^\nu} \biggr), ~ -\infty < x < \infty. \end{align}
(9)

From (9) it follows that if only the first absolute central moment exists \((\nu = 1)\), then the Laplace distribution has the greatest entropy; if two moments exist \((\nu = 2)\), the Gaussian law; and if all direct moments exist \((\nu \rightarrow \infty)\), the uniform law. Indeed, the limiting case of (9) is the uniform law \(p(x) = 0.5 \beta^{-1},~ -\beta < x < \beta\). So, if all direct moments exist, then the uniform law has the greatest entropy among the two-sided symmetric distribution laws of an RV. The particular cases of two-sided laws with maximum entropy considered here coincide with the laws already known to have maximum entropy (Laplace and Gaussian), which confirms the correctness of the results obtained.

From the analysis of expressions (8) and (9) it follows that, when the method of moments is used to estimate the parameters of distribution laws with long "tails", direct moments of lower order, including fractional orders, should be used in order to increase the amount of information about the estimated parameters. For distribution laws with short tails, direct moments of higher order should be used.
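The following sketch (Python with SciPy; the parameter values are arbitrary illustrations) checks that (8) integrates to one and satisfies the direct-moment constraint in (3):

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma

beta, nu = 2.0, 1.5   # illustrative scale and maximum moment order

def p8(x):
    """Maximum-entropy one-sided density (8)."""
    return nu / (beta * gamma(1.0 / nu)) * np.exp(-(x / beta) ** nu)

norm, _ = integrate.quad(p8, 0, np.inf)
mom, _ = integrate.quad(lambda x: x ** nu * p8(x), 0, np.inf)
print(norm)                    # ~1.0, i.e. (8) is a proper density
print(mom, beta ** nu / nu)    # direct-moment constraint (3): both ~1.886
```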

Let us now find, among the one-sided distribution laws of an unbounded RV, the law for which the entropy \(H\) reaches its maximum under the following constraints on the probability density \(p(x)\):
\begin{align}\label{equ10} p(0) = &0, \,\,\,\,\,~ p(x) \ge 0,\,\,\,\,\,\, \int_{0}^{\infty} p(x) dx = 1,\,\,\,\,\,\, ~ \int_{0}^{\infty} x^{-\nu} p(x) dx = \frac{\beta^\nu}{\nu}, \end{align}
(10)
where \(\nu\) is the order of the highest existing initial reverse moment. Here the entropy is defined by the expression:
\begin{align} \label{equ11} H = - \int_{0}^{\infty} y^{-2} p(1/y) ln (y^{-2} p(1/y)) dy = - \int_{0}^{\infty} p(x) ln(x^2 p(x))dx, \end{align}
(11)
where \(y^{-2} p(\frac{1}{y})\) is the probability density of the RV \(\eta\) reciprocal to the RV \(\xi\) that has probability density \(p(x)\). Applying the method of undetermined Lagrange multipliers, we obtain the following expression for the distribution law with maximum entropy:
\begin{align}\label{equ12} p(x) = \frac{\nu}{\beta \Gamma (1/\nu) x^2} exp\biggl( \frac{-x^{-\nu}}{\beta^\nu} \biggr), ~ 0 < x < \infty. \end{align}
(12)
The limiting case of (12) as \(\nu \rightarrow \infty\) (all reverse moments exist) is the distribution law of an RV bounded from below, \(p(x) = \beta^{-1} x^{-2}, ~ 1/\beta < x < \infty\).
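A numerical sketch (Python with SciPy; the parameter values are illustrative) verifying that (12) integrates to one and satisfies the reverse-moment constraint in (10):

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma

beta, nu = 1.5, 2.0   # illustrative values

def p12(x):
    """Maximum-entropy density (12) under the reverse-moment constraint (10)."""
    return nu / (beta * gamma(1.0 / nu) * x ** 2) * np.exp(-x ** (-nu) / beta ** nu)

norm, _ = integrate.quad(p12, 1e-9, np.inf)
rmom, _ = integrate.quad(lambda x: x ** (-nu) * p12(x), 1e-9, np.inf)
print(norm)                    # ~1.0
print(rmom, beta ** nu / nu)   # reverse-moment constraint (10): both ~1.125
```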
Let us now define the two-sided distribution laws of an RV for which the entropy \(H\) reaches its maximum under the following constraints on the probability density \(p(x)\):
\begin{align}\label{equ13} p(x) \ge 0, \,\,\,\,\, \int_{-\infty}^{\infty} p(x) dx = 1,\,\,\,\,\,\,\int_{-\infty}^{\infty} exp(\nu x) p(x) dx = \beta^\nu/\nu, \end{align}
(13)
where \(\nu\) is the order of the highest existing initial direct exponential moment. Here the entropy \(H\) is defined by the expression:
\begin{align}\label{equ14} H = - \int_{-\infty}^{\infty} p(x) ln (exp (-x)p(x)) dx. \end{align}
(14)
Using the method of undetermined Lagrange multipliers, we obtain the following expression for the distribution law with maximum entropy:
\begin{align}\label{equ15} p(x) = \frac{\nu exp(x)}{\beta \Gamma (1/\nu)} exp \biggl( \frac{- exp (\nu x)}{ \beta^\nu} \biggr), ~ -\infty < x < \infty. \end{align}
(15)
The limiting case of (15) as \(\nu \rightarrow \infty\) (all direct exponential moments exist) is the distribution law of an RV bounded from above, \(p(x) = \frac{exp(x)}{\beta}, ~ -\infty < x < ln(\beta)\). Now let us find, among the two-sided distribution laws of an unbounded RV, the law for which the entropy \(H\) reaches its maximum under the following constraints on the probability density \(p(x)\):
\begin{align}\label{equ16} p(x) \ge 0, ~\,\,\,\,\,\, \int_{-\infty}^{\infty} p(x) dx = 1 \,\,\,\,\,\,\,\, \int_{-\infty}^{\infty} exp(- \nu x) p(x) dx = \frac{\beta^\nu}{\nu}, \end{align}
(16)
where \(\nu\) is the order of the highest existing initial reverse exponential moment. Here the entropy \(H\) is defined by the expression:
\begin{align}\label{equ17} H = - \int_{-\infty}^{\infty} p(x) ln (exp (x)p(x)) dx. \end{align}
(17)
Using the method of undetermined Lagrange multipliers, we obtain the following expression for the distribution law with maximum entropy:
\begin{align}\label{equ18} p(x) = \frac{\nu~ exp(-x)}{\beta \Gamma (1/\nu)} exp \biggl( \frac{- exp (-\nu x)}{ \beta^\nu} \biggr), ~ -\infty < x < \infty. \end{align}
(18)
The limiting case of (18) as \(\nu \rightarrow \infty\) (all reverse exponential moments exist) is the distribution law of an RV bounded from below, \(p(x) = \frac{exp(-x)}{\beta}, ~ -ln(\beta) < x < \infty\).
From the analysis of expressions (15) and (18) it follows that an exponential transformation of the RV turns the shape parameter \(\nu\) into a scale parameter and the parameter \(\beta\) into a shift parameter.
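A sketch (Python with SciPy; the parameter values are illustrative, and finite integration limits are chosen where the density is numerically negligible) checking that (15) integrates to one and satisfies the exponential-moment constraint (13):

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma

beta, nu = 1.5, 2.0   # illustrative values

def p15(x):
    """Maximum-entropy density (15) under the direct exponential-moment constraint (13)."""
    return nu * np.exp(x) / (beta * gamma(1.0 / nu)) * np.exp(-np.exp(nu * x) / beta ** nu)

norm, _ = integrate.quad(p15, -50, 10)
emom, _ = integrate.quad(lambda x: np.exp(nu * x) * p15(x), -50, 10)
print(norm)                    # ~1.0
print(emom, beta ** nu / nu)   # exponential-moment constraint (13): both ~1.125
```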
Finally, let us find, among the one-sided distribution laws of an unbounded RV, the law for which the entropy \(H\) reaches its maximum under the following constraints on the probability density \(p(x)\):
\begin{align}\label{equ19} p (0) =& 0,\,\,\,\, ~ p(x) \ge 0, \,\,\,\,\, \int_{0}^{\infty} p(x) dx = 1, \,\,\,\,\,\,\int_{0}^{\infty} |ln(x)|^\nu p(x)dx =\frac{\beta^\nu}{\nu}, \end{align}
(19)
where \(\nu\) is the order of the highest existing initial direct logarithmic moment. Here the entropy \(H\) is defined by the expression:
\begin{align}\label{equ20} H = - \int_{0}^{\infty} p(x) ln (xp(x)) dx. \end{align}
(20)
Using the method of undetermined Lagrange multipliers, we obtain the following expression for the distribution law with maximum entropy:
\begin{align}\label{equ21} p(x) = \frac{\nu }{2 \beta \Gamma (1/\nu)x} exp \biggl( \frac{- |ln (x)|^\nu}{ \beta^\nu} \biggr), ~ 0 < x < \infty. \end{align}
(21)
From (21) it follows that if only two absolute logarithmic moments exist \((\nu = 2)\), then the logarithmic normal (log-normal) law has the greatest entropy. If \(\nu \rightarrow \infty\) (all absolute logarithmic moments exist), then (21) transforms into the Shannon law for an RV bounded from above and from below, \(p(x) = 0.5/(\beta x),~ exp(-\beta) < x < exp(\beta)\). It should be noted that under a logarithmic transformation of the RV the scale parameter turns into a shape parameter and the shift parameter turns into a scale parameter.
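For the case \(\nu = 2\), expression (21) can be compared with the log-normal density; the sketch below (Python with SciPy) uses SciPy's lognorm with \(\sigma = \beta/\sqrt{2}\), an assumed but standard parameterization for this check:

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import lognorm

beta = 0.8   # illustrative scale

def p21(x, nu):
    """Maximum-entropy density (21) under the logarithmic-moment constraint (19)."""
    return nu / (2 * beta * gamma(1.0 / nu) * x) * np.exp(-np.abs(np.log(x)) ** nu / beta ** nu)

x = np.linspace(0.1, 5.0, 5)
print(p21(x, 2.0))                          # nu = 2 case of (21)
print(lognorm(s=beta / np.sqrt(2)).pdf(x))  # log-normal with sigma = beta/sqrt(2): same values
```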
In the general case, if the RV \(\eta\) is related to the RV \(\xi\) by \(y = f(x)\) and the PDD \(p(y)\) of the continuous RV \(\eta\) is known, then the PDD \(p(x)\) can be found by the method of functional transformation using the expression:
\begin{align}\label{equ22} p(x) = p(y) . \left|\frac{dy}{dx}\right|. \end{align}
(22)
Considering (22), the entropy \begin{align*} H = – \int_{\Omega} p(y) ln (p(y)) dy \end{align*} takes the form
\begin{align}\label{equ23} H = – \int_{\Omega} p(x) ln (q(x). p(x)) dx \end{align}
(23)
where \(q(x) = \left|\dfrac{dy}{dx}\right|^{-1}\), and \(\Omega\) denotes the domain of existence of the RV \(\eta\) or \(\xi\), respectively.
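A sketch (Python with SciPy) checking (23) for the transformation \(y = exp(x)\) with a standard Gaussian \(\xi\), so that \(\eta\) is log-normal; the integration limits are finite and chosen where the integrands are numerically negligible:

```python
import numpy as np
from scipy import integrate

# Check of (23) for y = f(x) = exp(x) with xi ~ N(0, 1), so eta = exp(xi) is log-normal
# and q(x) = |dy/dx|^{-1} = exp(-x).
p = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # density p(x) of xi
py = lambda y: p(np.log(y)) / y                           # density p(y) of eta

# entropy of eta computed from its own density (integration split to keep the integrand finite)
H_direct = sum(integrate.quad(lambda y: -py(y) * np.log(py(y)), a, b)[0]
               for (a, b) in [(1e-12, 1.0), (1.0, 1e6)])
# the same entropy computed from (23) in terms of p(x) and q(x) = exp(-x)
H_via_23, _ = integrate.quad(lambda x: -p(x) * np.log(np.exp(-x) * p(x)), -30, 30)
print(H_direct, H_via_23)   # both ~ 0.5*ln(2*pi*e) ~= 1.419
```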

3. Distributions arising in the analysis of the sequence of independent tests with three outputs

Next, consider the development of a probabilistic model of a sequence of independent trials with three outcomes, which becomes particularly important when forming estimates of the information security of information processing systems [10].
Each trial is assumed to result either in event \(A\) or in the opposite event \(C\). The probability \(p\) of event \(A\) is the same in every trial (this is ensured by the same set of conditions for each trial) and is independent of the outcomes of all other trials (the trials are independent). This scheme of trials was first considered by J. Bernoulli and bears his name [11, 12, 13, 14]. The probability \(P_A(k)\) that event \(A\) occurs exactly \(k\) times in \(N\) trials (\(k = 0, 1, \dots , N\)) is given by Bernoulli's formula [13, 14, 15]:
\begin{align}\label{equ24} P_A(k ) = \frac{N!}{(N -k)! k!} p^k (1 - p)^{N- k}, \end{align}
(24)
which represents the binomial distribution. For \(N = 1\) it reduces to the Bernoulli distribution
\begin{align}\label{equ25} P_A(k ) = p^k (1- p)^{1- k}. \end{align}
(25)
The limiting case of the binomial distribution, when \(p \rightarrow 0\), \(N \rightarrow \infty\) and the product \(Np\) tends to some positive constant value \(\lambda\) (i.e., \(Np \rightarrow \lambda\)), is the Poisson distribution [13, 14, 15]:
\begin{align}\label{equ26} P(k) = \frac{\lambda^k}{k!} exp (-\lambda), ~ 0 \le k < \infty. \end{align}
(26)
If the sequence of trials in the Bernoulli scheme is continued until \(m\) "failures" appear, then the number of successes \(k\) obeys the negative binomial distribution
\begin{align}\label{equ27} P(k) = \frac{\Gamma (m + k)}{\Gamma (m) k!} p^k (1 - p)^m, ~ 0 \le k < \infty, \end{align}
(27)
where \(\Gamma(m)\) is the gamma function. The main purpose of this part of the work is to construct a probabilistic model of a sequence of independent trials with three outcomes and, with its help, to obtain formulas analogous to (24), (26) and (27) for the probabilities of occurrence of the corresponding events. Let \(N\) independent trials be performed. Each trial can end in one of three outcomes: event \(A\) occurs with probability \(p_1\), event \(B\) occurs with probability \(p_2\), or event \(C\) occurs with probability \((1 - p_1 - p_2)\). To the random outcome of each trial we assign a discrete random value taking three values: \(-1\) if event \(A\) occurred; \(0\) if event \(C\) occurred; and \(1\) if event \(B\) occurred. A positive or negative outcome of a trial is regarded as a "success" and a zero outcome as a "failure". The probabilities of events \(A\), \(C\) and \(B\) in each trial are then given by
\begin{align}\label{equ28} P(k) = \begin{cases} p_1, & k = -1; \\ 1- p_1 - p_2, & k = 0; \\ p_2, & k = 1; \end{cases} \end{align}
(28)
where \( 0 < p_1 < 1,~ 0 < p_2 < 1,~ p_1 + p_2 < 1\). This probability distribution, by analogy with the Bernoulli distribution (25), can be called the two-sided Bernoulli distribution. Let us find the characteristic function of distribution (28), using the relation [15]
\begin{align}\label{equ29} \theta(j \vartheta) = \sum_{k = -1}^{1} exp(j \vartheta k) P(k). \end{align}
(29)
Using (28), we obtain
\begin{align}\label{equ30} \theta (j \vartheta) = p_1 exp(-j \vartheta) + (1- p_1 - p_2) + p_2 exp(j \vartheta). \end{align}
(30)
Since the trials are independent, the characteristic function \(\theta_N (j \vartheta)\) of the distribution law \(P(k)\) over \(N\) trials equals
\begin{align}\label{equ31} \theta_N (j \vartheta) = \theta(j \vartheta)^N = [p_1 exp(-j \vartheta) + (1- p_1 - p_2) + p_2 exp(j \vartheta)]^N. \end{align}
(31)
Hence the probability distribution \(P(k)\) over \(N\) trials can be found from the formula
\begin{align}\label{equ32} P(k) = \frac{1}{2 \pi} \int_{- \pi}^{\pi} \theta (j \vartheta)^N exp (- j \vartheta k ) d \vartheta , ~ k = -N, -(N -1) , \dots , N. \end{align}
(32)
Let us find an explicit expression for the probability distribution \(P(k)\) over \(N\) trials by substituting (31) into (32) and integrating:
\begin{align}\label{equ33} P(k) = (1 - p_1 -p_2) ^N\times \biggl( \sqrt{\frac{p_2}{p_1}} \biggr)^k \sum_{i = |k|}^{N} \frac{N!}{(N - i)!} \times B (i, k ) \biggl( \frac{\sqrt{p_1 p_2}}{1 - p_1 - p_2} \biggr)^i, \end{align}
(33)
where \(B(i, k ) = \dfrac{0.5 [1+ (-1)^{i + |k|}]}{ \Gamma (0.5(i - k) + 1 ) \Gamma (0.5 (i + k) + 1) }\). Expression (33) can be simplified in five particular cases (a numerical check of (33) against direct enumeration is sketched after this list):
  1. If \( p_1 = p_2 = p < 0.5\), then
    \begin{align}\label{equ34} P(k) = (1 - 2p) ^N \times \sum_{i = |k|}^{N} \frac{N!}{(N - i)!} \times \biggl(\frac{p}{1 - 2p} \biggr)^i \times \frac{0.5[1+ (-1)^{i + |k|}]}{\Gamma [0.5 (i + k) + 1] \Gamma [0.5 (i -k) + 1]} \end{align}
    (34)
  2. If \(p_1 = (1 - p)^2, ~ p_2 = p^2\), then
    \begin{align}\label{equ35} P(k) = \frac{(2N)!}{(N - k)! (N+ k)!} \times p^{N + k} (1 - p)^{N - k}, ~ k = -N, ~ -(N - 1), \dots , N. \end{align}
    (35)
    Probability distribution (35), just like distribution (24), is a binomial distribution, but with a nonzero shift parameter.
  3. Let us consider the limiting case of distribution (33) when the probability of event \(C\) tends to zero, i.e., \((p_1 + p_2) \rightarrow 1\). In this case every trial ends in one of two outcomes: either event \(A\) occurs with probability \((1 - p)\), or event \(B\) occurs with probability \(p\). To these outcomes we assign a discrete random value taking two values: \(-1\) if event \(A\) occurred and \(1\) if event \(B\) occurred. As a result of the limiting transition, the probability distribution (33) transforms into the distribution
    \begin{align}\label{equ36} P(k) = \frac{0.5 N! [1 + (-1)^{N + |k|}]}{\Gamma [0.5 (N + k ) + 1] \Gamma [0.5 (N - k ) + 1 ]} \times \biggl( \frac{p}{1 - p} \biggr)^{0.5k} (p (1 - p))^{0.5N}. \end{align}
    (36)
  4. Let us consider the second limiting case of distribution (33), when the probability of event \(A\) tends to zero, i.e., \(p_1 \rightarrow 0\). In this case every trial ends in one of two outcomes: either event \(C\) occurs with probability \((1 - p)\), or event \(B\) occurs with probability \(p\). To these outcomes we assign a discrete random value taking two values: \(0\) if event \(C\) occurred and \(1\) if event \(B\) occurred. As a result of the limiting transition, the probability distribution (33) transforms into the binomial distribution (24); for this reason the obtained probability distribution (33) can be called the generalized Bernoulli formula, or the two-sided binomial distribution.
  5. Let us consider the third limiting case of distribution (33), when \(p_1 \rightarrow 0, ~ p_2 \rightarrow 0, ~ N \rightarrow \infty\), and the products \(Np_1, ~ Np_2\) tend to some positive constant values \(\lambda_1\), \(\lambda_2\) (i.e., \(Np_1 \rightarrow \lambda_1, ~ Np_2 \rightarrow \lambda_2\)). As a result of the limiting transition, the probability distribution (33) transforms into the probability distribution either
    \begin{align}\label{equ37} P(k) = exp ( -\lambda_1 - \lambda_2) \biggl( \sqrt{\frac{\lambda_2}{\lambda_1}} \biggr)^k \times \sum_{i = |k|}^{\infty} \frac{0.5[1+ (-1)^{i + |k|}] (\sqrt{\lambda_1 \lambda_2})^i}{\Gamma [0.5 (i + k) + 1] \Gamma [0.5 (i -k) + 1]} \end{align}
    (37)
    or
    \begin{align}\label{equ38} P(k) = exp ( -\lambda_1 - \lambda_2) \times \biggl( \sqrt{\frac{\lambda_2}{\lambda_1}}\biggr)^k I_{|k|} (2 \sqrt{\lambda_1 \lambda_2}), ~ -\infty < k < \infty, \end{align}
    (38)
    where \(I_\nu(z)\) is the modified Bessel function.
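As announced after (33), the following sketch (Python, standard library only; the parameter values are illustrative) compares formula (33) with direct enumeration of the trinomial outcomes:

```python
import math

def P_direct(k, N, p1, p2):
    """P(k) by summing trinomial probabilities of all outcomes with b - a = k,
    where a and b are the numbers of occurrences of A and B in N trials."""
    total = 0.0
    for a in range(N + 1):
        b = a + k
        if 0 <= b <= N - a:
            total += (math.factorial(N) / (math.factorial(a) * math.factorial(b) * math.factorial(N - a - b))
                      * p1 ** a * p2 ** b * (1 - p1 - p2) ** (N - a - b))
    return total

def P_33(k, N, p1, p2):
    """P(k) by the explicit formula (33)."""
    s = 0.0
    for i in range(abs(k), N + 1):
        if (i + abs(k)) % 2:          # B(i, k) vanishes unless i and |k| have the same parity
            continue
        B = 1.0 / (math.gamma(0.5 * (i - k) + 1) * math.gamma(0.5 * (i + k) + 1))
        s += math.factorial(N) / math.factorial(N - i) * B * (math.sqrt(p1 * p2) / (1 - p1 - p2)) ** i
    return (1 - p1 - p2) ** N * math.sqrt(p2 / p1) ** k * s

N, p1, p2 = 10, 0.2, 0.3
print([round(P_33(k, N, p1, p2), 6) for k in range(-3, 4)])
print([round(P_direct(k, N, p1, p2), 6) for k in range(-3, 4)])   # identical rows
print(sum(P_33(k, N, p1, p2) for k in range(-N, N + 1)))          # ~1.0
```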
If the parameter \(\lambda_1 \rightarrow 0\) and the parameter \(\lambda_2 \rightarrow \lambda\), then distribution (37), or (38), transforms into the Poisson distribution (26). For this reason the probability distribution (37), or (38), can be called the two-sided Poisson distribution. Its characteristic function is
\begin{align}\label{equ39} \theta(j \vartheta) = exp [- (\lambda_1 + \lambda_2) + \lambda_1 exp ( -j \vartheta) + \lambda_2 exp (j \vartheta) ]. \end{align}
(39)
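The characteristic function (39) is that of the difference of two independent Poisson counts, which suggests a simple simulation check of (38). The sketch below (Python with SciPy; the intensities are illustrative) compares the two:

```python
import numpy as np
from scipy.special import iv

lam1, lam2 = 1.2, 2.5   # illustrative intensities
rng = np.random.default_rng(0)

def P38(k):
    """Two-sided Poisson probability (38)."""
    return np.exp(-lam1 - lam2) * np.sqrt(lam2 / lam1) ** k * iv(abs(k), 2 * np.sqrt(lam1 * lam2))

# simulate the difference of two independent Poisson counts, whose characteristic
# function coincides with (39), and compare relative frequencies with (38)
diff = rng.poisson(lam2, 200_000) - rng.poisson(lam1, 200_000)
for k in (-2, 0, 3):
    print(k, P38(k), np.mean(diff == k))
```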
The initial moment of the first order and the central moments of the second, third and fourth orders for distribution (33) can be found from the expressions \begin{eqnarray} m_ 1 &=& N (p_2 -p_1); \\ \notag \end{eqnarray}
\begin{eqnarray} M_2 &=& N [ p_2 + p_1 - (p_2 -p_1)^2];\label{equ40}\\ \end{eqnarray}
(40)
\begin{eqnarray} M_3 &=& (p_2 - p_1) \times [N - N (p_2 - p_1)^2 - 3M_2];\label{equ41}\\ \end{eqnarray}
(41)
\begin{eqnarray} M_4 &=& M_2 [1 + 6 (p_2 -p_1)^2] + 3\biggl(1 - \frac{1}{N}\biggr)M^2_2 + 3N (p_2 - p_1)^2 [ (p_2 -p_1)^2 - 1].\label{equ42} \end{eqnarray}
(42)
From these moments, the asymmetry coefficient \(K_a\) and the excess coefficient \(K_e\) are obtained as
\begin{align}\label{equ43} K_a = \frac{M_3}{M_2^{1.5}}; \,\,\,\,\,\,\, K_e = \frac{M_4}{M_2^2} - 3. \end{align}
(43)
Expressions (40), (41) and (42) for the moments simplify significantly in the particular cases of distribution (33). Thus, for distribution (34) we have:
\begin{align}\label{equ44} m_1 = 0, \,\,\,\,\,\,\, M_2 = 2N p,\,\,\,\,\,\,\, M_3 = 0,\,\,\,\,\,\,\, M_4 = M_2 + 3 ( 1- \frac{1}{N}) M^2_2. \end{align}
(44)
In this case
\begin{align}\label{equ45} K_a = 0; ~ K_e = \frac{0.5 - 3p}{pN}. \end{align}
(45)
For distribution (35), we have: \begin{eqnarray} m_1 &= & N (2p -1); \\ \notag M_2 &= &2N p( 1- p); \\ \notag \end{eqnarray}
\begin{eqnarray} M_3 &=& 2N p( 1- p) (1 - 2p);\label{equ46} \end{eqnarray}
(46)
\begin{eqnarray} M_4 &=& 2N p (1 -p ) \times [1 + 6p(1- p )(N - 1)].\label{equ47} \end{eqnarray}
(47)
In this case
\begin{align}\label{equ48} K_a = \frac{1 -2p}{\sqrt{2Np (1 - p)}}, \,\,\,\,\,\,\,\,\,\, K_e = \frac{1 - 6p(1 - p)}{2N p (1 -p)}. \end{align}
(48)
For distribution (36), we have
\begin{eqnarray} m_1 &= &N (2p -1), \notag\\ M_2 &= & 4N p (1 - p), \notag\\ M_3 &= & 8N p (1 - p ) ( 1- 2p),\label{equ49} \end{eqnarray}
(49)
\begin{eqnarray} M_4 &= &3M^2_2 + 4M_2 [ 1 - 6p (1 - p)].\label{equ50} \end{eqnarray}
(50)
In this case
\begin{align}\label{equ51} K_a = \frac{1 -2p}{\sqrt{Np (1 - p)}}, \,\,\,\,\,\,\,\,\,\, K_e = \frac{1 - 6p(1 - p)}{N p (1 -p)}. \end{align}
(51)
For expression (37) or (38), we have
\begin{align}\label{equ52} m_1 = \lambda_2 - \lambda_1,\,\,\,\,\,\,\,\,\,\,\,\, M_2 = \lambda_1 + \lambda_2,\,\,\,\,\,\,\,\,\,\,\,\, M_3 = \lambda_2 - \lambda_1,\,\,\,\,\,\,\,\,\,\,\,\, M_4 = \lambda_1 + \lambda_2 + 3M^2_2, \end{align}
(52)
where
\begin{align}\label{equ53} K_a = \frac{\lambda_2 - \lambda_1}{(\lambda_1 + \lambda_2)^{1.5}},\,\,\,\,\,\,\,\,\,\,\,\, K_e = \frac{1}{\lambda_1 + \lambda_2}. \end{align}
(53)
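A numerical check of the moment expressions (40)-(42) against moments computed directly from the trinomial probabilities can be sketched as follows (Python; the parameter values are illustrative):

```python
import math

N, p1, p2 = 12, 0.15, 0.35   # illustrative values

def P(k):
    """P(k) for N independent trials with three outcomes (direct trinomial sum)."""
    total = 0.0
    for a in range(N + 1):
        b = a + k
        if 0 <= b <= N - a:
            total += (math.factorial(N) / (math.factorial(a) * math.factorial(b) * math.factorial(N - a - b))
                      * p1 ** a * p2 ** b * (1 - p1 - p2) ** (N - a - b))
    return total

ks = range(-N, N + 1)
m1 = sum(k * P(k) for k in ks)
M = {r: sum((k - m1) ** r * P(k) for k in ks) for r in (2, 3, 4)}

print(m1, N * (p2 - p1))                                            # first-order moment
print(M[2], N * (p2 + p1 - (p2 - p1) ** 2))                         # (40)
print(M[3], (p2 - p1) * (N - N * (p2 - p1) ** 2 - 3 * M[2]))        # (41)
print(M[4], M[2] * (1 + 6 * (p2 - p1) ** 2) + 3 * (1 - 1 / N) * M[2] ** 2
            + 3 * N * (p2 - p1) ** 2 * ((p2 - p1) ** 2 - 1))        # (42)
```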
The probability \(P_B(k)\) that event \(B\) occurs exactly \(k\) times in \(N\) trials can be found from formula (33) or from its particular cases (34), (35), (36), (37) or (38), setting \(P_B(k) = P(k), ~ k = 1, 2, \dots , N\).
The probability \(P_A(k)\) that event \(A\) occurs exactly \(k\) times in \(N\) trials can likewise be found from formula (33) or its particular cases (34), (35), (36), (37) or (38), setting \(P_A(k) = P(k), ~ k = -1, -2, \dots , -N\).
The probability \(P_C\) of event \(C\) over \(N\) trials can be found from formula (33) or its particular cases (35), (36), (37) or (38), setting \(P_C = P(0)\). The probability \(P_C\) corresponds to the probability that events \(A\) and \(B\) do not occur in \(N\) trials.
Let us consider an example. Two symmetric coins are tossed ten times. Each toss has three possible outcomes: two heads with probability 0.25; two tails with probability 0.25; and a head and a tail with probability 0.5. It is required to find: 1) the probability \(P_{hh}\) that two heads appear exactly five times; 2) the probability \(P_{tt}\) that two tails appear exactly three times; 3) the probability \(P_{ht}\) that two heads appear exactly five times and two tails exactly three times. Under the conditions of the example we have \begin{align*} p_1 = p_2 = p = 0.25,\,\,\,\,\,\,\,\,\,\,\,\, N = 10; ~ P_{hh} = P_A (-5),\,\,\,\,\,\,\,\,\,\,\,\, P_{tt} = P_B (3),\,\,\,\,\,\,\,\,\,\,\,\, P_{ht} = P_A (-5) P_B (3). \end{align*} Since \(p_1 = p_2\), we use expression (34) as the computational formula. With its help we find that
\begin{align}\label{equ54} P_{hh} \approx 0.015,\,\,\,\,\,\,\,\,\,\,\,\, P_{tt} \approx 0.074,\,\,\,\,\,\,\,\,\,\,\,\, P_{ht} \approx 1.093 \times 10^{-3}. \end{align}
(54)
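These values can be checked by direct enumeration of the trinomial outcomes (a short Python sketch; the helper P is hypothetical and introduced only for this check):

```python
import math

N, p = 10, 0.25

def P(k):
    """Probability that (#two-tails) - (#two-heads) equals k in N throws of two coins."""
    total = 0.0
    for a in range(N + 1):          # a = number of 'two heads' outcomes
        b = a + k                   # b = number of 'two tails' outcomes
        if 0 <= b <= N - a:
            total += (math.factorial(N) / (math.factorial(a) * math.factorial(b) * math.factorial(N - a - b))
                      * p ** (a + b) * (1 - 2 * p) ** (N - a - b))
    return total

print(P(-5))          # P_hh ~ 0.0148
print(P(3))           # P_tt ~ 0.0739
print(P(-5) * P(3))   # P_ht ~ 1.09e-3, in agreement with (54)
```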
If, by analogy with (27), the sequence of trials with three outcomes is continued until \(m\) "failures" (zero outcomes) occur, then the resulting sum \(k\) of the outcomes (the difference between the numbers of occurrences of events \(B\) and \(A\)) obeys the distribution
\begin{align}\label{equ55} P(k) = ( 1- p_1 - p_2)^m \biggl( \sqrt{\frac{p_2}{p_1}} \biggr)^k \times \biggl( \sqrt{p_1 p_2} \biggr)^{|k|} \frac{\Gamma (m + |k|)}{\Gamma (m)\Gamma (1 + |k|)} F(k), ~ -\infty < k < \infty, \end{align}
(55)
where \(F(k) = {}_2F_1 \bigl(0.5 ( m + |k|),~ 0.5 ( m+ |k| + 1); ~ 1+ |k|; ~ 4 p_1 p_2 \bigr)\) is the Gauss hypergeometric function. The characteristic function of distribution (55) has the form
\begin{align}\label{equ56} \theta ( j \vartheta) = [ (1 - p_1 - p_2) \times ( 1- p_1 exp( -j \vartheta) - p_2 exp( j \vartheta)) ^ {-1} ]^m. \end{align}
(56)
The initial moment of the first order and the central moments of the second, third and fourth orders for distribution (55) are defined by the expressions \begin{eqnarray} m_1 &=& \frac{m( p_2 - p_1)}{1 - p_1 - p_2},\notag \end{eqnarray}
\begin{eqnarray} M_2 &= &\frac{m ( p_2 + p_1 - 4 p_1 p_2)}{(1 - p_1 - p_2)^2},\label{equ57}\\ \end{eqnarray}
(57)
\begin{eqnarray} M_3 &=& \frac{m (p_2 -p_1)}{(1 -p_1 - p_2)^3} \times ( 1 + p_2 + p_1 -8 p_1 p_2),\label{equ58} \end{eqnarray}
(58)
\begin{eqnarray} M_4 &=& m \biggl[ \frac{6 (p_2 - p_1)^4}{(1 - p_1 -p_2)^4} + \biggl( \frac{4 (p_2 - p_1)^2}{(1 - p_1 -p_2)^3} + \frac{(p_1 + p_2)}{(1 - p_1 - p_2)^2} \biggr) \times (2 p_1 + 2 p_2 + 1) \biggr] + 3M_2^2.\label{equ59} \end{eqnarray}
(59)
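As a check of (55), the sketch below (Python with SciPy; the parameter values and the simulate() helper are illustrative assumptions) compares the formula with a direct simulation of trials continued until m zero outcomes occur:

```python
import math
import numpy as np
from scipy.special import hyp2f1, gammaln

m, p1, p2 = 3, 0.2, 0.3   # illustrative parameters
rng = np.random.default_rng(1)

def P55(k):
    """Two-sided negative binomial probability (55)."""
    a = abs(k)
    F = hyp2f1(0.5 * (m + a), 0.5 * (m + a + 1), 1 + a, 4 * p1 * p2)
    lg = gammaln(m + a) - gammaln(m) - gammaln(1 + a)
    return ((1 - p1 - p2) ** m * math.sqrt(p2 / p1) ** k
            * math.sqrt(p1 * p2) ** a * math.exp(lg) * F)

def simulate():
    """Run trials until m 'failures' (zero outcomes) occur; return (#B - #A)."""
    k = fails = 0
    while fails < m:
        u = rng.random()
        if u < p1:
            k -= 1
        elif u < p1 + p2:
            k += 1
        else:
            fails += 1
    return k

sample = np.array([simulate() for _ in range(100_000)])
print(sum(P55(k) for k in range(-60, 61)))       # ~1.0
for k in (-2, 0, 3):
    print(k, P55(k), np.mean(sample == k))       # formula (55) vs simulation
```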
Let us consider the limiting case of distribution (55), when the probability \(p_1 \rightarrow 0\) and the probability \(p_2 = p\). As a result of the limiting transition, the probability distribution (55) transforms into the negative binomial distribution (27). For this reason the obtained probability distribution (55) can be called the two-sided negative binomial distribution.
When choosing between the two-sided binomial, Poisson and negative binomial distributions, the following properties of these distributions can be used: binomial, \(K_e M_2 < 1\); Poisson, \(K_e M_2 = 1\); negative binomial, \(K_e M_2 > 1\).
Thus, a probabilistic model has been developed for a sequence of independent trials with three outcomes; expressions have been obtained for its basic numerical characteristics, as well as for calculating the probability that the corresponding events occur exactly \(k\) times. It has been shown that the limiting cases of the obtained two-sided distributions are the binomial, negative binomial and Poisson distributions.

4. Conclusion

The following results are obtained in this paper:
  • Generalized expressions have been obtained for one-sided and two-sided continuous distribution laws with maximum entropy, depending on the number of existing power, exponential or logarithmic moments. With their help, one can choose the a priori distribution more soundly under conditions of a priori uncertainty in the analysis of the risks of information systems. From the analysis of expression (23) and its particular cases (2), (11), (14), (17) and (20), with the appropriate values of \(q(x)\), it follows that in the general case the entropy also depends on the type of moments used to determine the numerical characteristics of the distribution law.
  • A probabilistic model has been developed for a sequence of independent trials with three outcomes, which acquires special significance in forming information security assessments of information systems. Expressions for its basic numerical characteristics have been obtained. It is shown that the limiting cases of the obtained two-sided distributions are the binomial, negative binomial and Poisson distributions.

Acknowledgments

The authors would like to express their gratitude to University of Lagos for providing the enabling environment to conduct this research work.

References:

  1. Burkov, V. N., Novikov, D. A., & Shchepkin, A. V. (2015). Control mechanisms for ecological-economic systems. Springer International Publishing.
  2. Kornilov, S. N., & Kornilova, M. M. (2017). Information processing technique for analyzing the operation of freight stations of Russian Railways. Modern Problems of the Transport Complex of Russia, 4(1), 49-52.
  3. Broderick, J. S. (2006). ISMS, security standards and security regulations. Information Security Technical Report, 11(1), 26-31.
  4. Otten, K., & Debons, A. (1970). Towards a metascience of information: Informatology. Journal of the American Society for Information Science, 21(1), 89-94.
  5. Kulba, V., Bakhtadze, N., Zaikin, O., Shelkov, A., & Chernov, I. (2017). Scenario analysis of management processes in the prevention and the elimination of consequences of man-made disasters. Procedia Computer Science, 112, 2066-2075.
  6. Schulz, V. L., Kul'ba, V. V., Shelkov, A. B., & Chernov, I. V. (2013). Methods of scenario analysis of threats to the effective functioning of systems of organizational management. Trends and Management, 1(1), 6-30.
  7. Gromov, N., & Kazakov, V. (2012). Review of AdS/CFT integrability, chapter III.7: Hirota dynamics for quantum integrability. Letters in Mathematical Physics, 99(1-3), 321-347.
  8. Moustafa, N., & Slay, J. (2016). The evaluation of Network Anomaly Detection Systems: Statistical analysis of the UNSW-NB15 data set and the comparison with the KDD99 data set. Information Security Journal: A Global Perspective, 25(1-3), 18-31.
  9. Jones, M. R., & Karsten, H. (2008). Giddens's structuration theory and information systems research. MIS Quarterly, 32(1), 127-157.
  10. Gromov, Y. Y., Karpov, I. G., Minin, Y. V., & Ivanova, O. G. (2016). Generalized probabilistic description of homogeneous flows of events for solving informational security problems. Journal of Theoretical & Applied Information Technology, 87(2), 250-254.
  11. Chung, K. L., & Zhong, K. (2001). A course in probability theory. Academic Press.
  12. Levin, B. R. (1989). Theoretical bases of statistical radio engineering. Moscow: Radio and Telecommunication.
  13. Prokhorov, Y. V. (1999). Probability and Mathematical Statistics (Encyclopedia). Moscow: Bolshaya Rossiyskaya Encyclopaedia Publisher.
  14. Kai, T., & Tomita, K. (1980). Statistical mechanics of deterministic chaos: The case of one-dimensional discrete process. Progress of Theoretical Physics, 64(5), 1532-1550.
  15. Gardiner, C. W. (1985). Handbook of stochastic methods (Vol. 3, pp. 2-20). Berlin: Springer.