An estimate of the rate of convergence of infinite matrices and their application to infinite series

Author(s): Suresh Kumar Sahani1, A.K. Thakur2, Avinash Kumar3, K. Sharma4
1Department of Science and Technology, Rajarshi Janak University, Janakpurdham, Nepal
2Department of Mathematics, G. G. V., Bilaspur, India
3Department of Mathematics, Dr. C. V. Raman University, India
4Department of Mathematics, NIT, Uttarakhand, Srinagar (Garhwal), India
Copyright © Suresh Kumar Sahani, A.K. Thakur, Avinash Kumar, K. Sharma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This study presents theorems on products of matrices that transform sequences or series into other sequences or series while either preserving limits or guaranteeing convergence. Previous literature has explored the properties of matrices effecting transformations between sequences, series, and their combinations; detailed accounts are available in references [1,2,3].

Keywords: Infinite series; matrices; convergence; sequence-to-sequence

1. Introduction and Preliminaries

In the early 1800s, the limitations of traditional convergence concepts became evident as numerous series failed to conform to ordinary convergence criteria (see [4,5]). The clarity in defining convergence of infinite series emerged with Cauchy’s landmark publication “Cours d’Analyse Algébrique” in 1821, complemented by Abel’s discovery regarding binomial series in 1826 (see [6]). However, alongside these advancements, several non-convergent series were identified, yielding nearly accurate results, particularly in dynamical astronomy. Cesàro’s 1890 study on exponential series development marked the advent of explicit theory in this domain (see [7]).

The late 19th century witnessed the emergence of rigorous summability notions, spurred by efforts to analyze series summations previously deemed divergent. This led to the establishment of summability analysis as a distinct mathematical discipline. Once it was recognized that ordinary convergence could be generalized, attention naturally turned to whether absolute convergence admits a similar generalization, and research has answered this question in the affirmative.

More specifically, the significance of absolute summability parallels that of convergence knowledge in shaping various summability methodologies ([8-17]). Similarly, one can conceptualize uniform summability as an extension of uniform convergence (see [16-22]).

The following shorthands, derived from the German Folge (sequence) and Reihe (series), are used throughout this work: \[\begin{aligned} \text{FF} &\quad \text{for sequence to sequence} \\ \text{RF} &\quad \text{for series to sequence} \\ \text{RR} &\quad \text{for series to series} \end{aligned}\]

Let \(P = \left( p_{ab} \right)\), \(a, b = 1, 2, \ldots\), be a given matrix, and consider the transformation:

\[u_{a} = \sum\limits_{b=1}^{\infty} p_{ab} v_{b}\tag{1}\]

Matrix \(P\) facilitates FF, RF, or RR transformations, converting a sequence \(v = \left\{ v_{b} \right\}\) into the sequence \(u = \left\{ u_{a} \right\}\), or the series \(\sum\limits v_{b}\) into the series \(\sum\limits u_{a}\), provided each series (1) converges. FF transformations can be adapted, with the necessary adjustments, to RF and RR transformations. The concept of the limit of a sequence, or the sum of a series, can be generalized by means of summability transformations, which assign a value even to certain divergent sequences. Such transformations fall into two classes:

  1. Sequence-to-sequence transformations.

  2. Sequence-to-function transformations.

Sequence-to-sequence transformations employ infinite matrices. Given an infinite matrix \(C = \left( c_{\ell g} \right)\) and a sequence \(\left\{ s_{\ell} \right\}\), where \(\ell = 0, 1, 2, \ldots\), a new sequence \(\left\{ t_{\ell} \right\}\) is defined as:

\[t_{\ell} = \sum\limits_{g=0}^{\infty} c_{\ell g} s_{g}\]

We assume the series converges for all \(\ell\). If \(\left\{ t_{\ell} \right\}\) converges to \(t\), then \(t\) is termed the \(C\)-limit of \(\left\{ s_{\ell} \right\}\). When \(C\) carries every convergent sequence into a sequence converging to the same limit, \(C\) is called a regular summability transformation. The Silverman-Toeplitz theorem provides necessary and sufficient conditions for regularity: a matrix \(C = \left( c_{\ell g} \right)\) is regular if and only if:

  1. \(\sum\limits_{g=0}^{\infty} \left| c_{\ell g} \right| < N\) for some \(N\), and \(\ell = 0, 1, 2, \ldots\)

  2. \(\lim_{\ell \to \infty} c_{\ell g} = 0\) for each \(g = 0, 1, 2, \ldots\)

  3. \(\lim_{\ell \to \infty} \sum\limits_{g=0}^{\infty} c_{\ell g} = 1\)
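
To make these conditions concrete, here is a minimal numerical sketch (not part of the original paper; it assumes a finite \(n \times n\) truncation stands in for the infinite matrix). It checks the three conditions for the Cesàro \((C,1)\) matrix \(c_{\ell g} = 1/(\ell+1)\) for \(g \le \ell\), zero otherwise, and then applies the matrix to a convergent and to a divergent sequence:

```python
# Minimal sketch (assumption: an n x n truncation stands in for the
# infinite matrix).  The Cesàro (C,1) matrix c_{lg} = 1/(l+1) for
# g <= l, 0 otherwise, satisfies all three conditions above.
import numpy as np

n = 400
C = np.tril(np.ones((n, n))) / np.arange(1, n + 1)[:, None]

print(np.abs(C).sum(axis=1).max())   # condition 1: rows bounded, here by N = 1
print(C[-1, 0])                      # condition 2: each column -> 0 (here 1/n)
print(C.sum(axis=1)[-1])             # condition 3: row sums -> 1

# Regularity in action: s_g = 1 + 1/(g+1) converges to 1, and its
# transform t_l = sum_g c_{lg} s_g converges to the same limit.
s = 1 + 1 / np.arange(1, n + 1)
print((C @ s)[-1])                   # ~ 1.016 for n = 400, approaching 1

# The same matrix assigns the generalized limit 0 to the divergent
# sequence (-1)^g:
print((C @ (-1.0) ** np.arange(n))[-1])   # -> 0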

2. Main Theorems

We present theorems on the product of matrices that characterize transformations between sequences and series, preserving limits and convergence. Building on the work of Dienes, Cooke, Hill, and Vermes ([3,23]), we extend Vermes’s earlier research ([2]) by exploring the pairwise products of matrices resulting from sequence-to-sequence, series-to-series, and series-to-sequence transformations. Our findings offer new insights into the properties of these transformations and their applications.

Remark 1. The following matrix classes arise from two widely used techniques for determining, by use of an infinite matrix, the generalized limit of a sequence and the generalized sum of a series, respectively:

A matrix \(P = \left( p_{rm} \right)\) is called a K-matrix if it satisfies:

  1. \(\sum\limits_{m=1}^\infty |p_{rm}| \le E(P)\) for each \(r\),

  2. \(\lim_{r \to \infty} p_{rm} = p_m\) for each \(m\),

  3. \(\sum\limits_{m=1}^\infty p_{rm} \to p\) as \(r \to \infty\).

It follows that: \[\lim_{m \to \infty} p_{rm} = 0 \text{ for each } r, \label{e1}\tag{2}\] \[\sum\limits_{m=1}^\infty |p_m| \le E(P). \label{e2}\tag{3}\] Eq. (2) is immediate from condition 1, since the terms of an absolutely convergent series tend to zero. For Eq. (3), condition 1 gives \(\sum\limits_{m=1}^s |p_{rm}| \le E(P)\) for every \(s\); letting \(r \to \infty\) and applying condition 2 yields \(\sum\limits_{m=1}^s |p_m| \le E(P)\), and since \(s\) is arbitrary, Eq. (3) follows.
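
As a concrete illustration (a numerical sketch under the same finite-truncation assumption as above, not from the paper), the Cesàro matrix of the previous section is a K-matrix with \(p_m = 0\) and \(p = 1\), and both consequences (2) and (3) can be read off directly:

```python
# Sketch: the Cesàro matrix as a K-matrix (p_m = 0, p = 1).
import numpy as np

n = 500
P = np.tril(np.ones((n, n))) / np.arange(1, n + 1)[:, None]

print(np.abs(P).sum(axis=1).max())   # condition 1: E(P) = 1
print(P[-1, 0])                      # condition 2: p_m = lim_r p_{rm} = 0
print(P.sum(axis=1)[-1])             # condition 3: row sums -> p = 1
print(P[10, 11:].max())              # Eq. (2): row 10 vanishes beyond m = 10
# Eq. (3) is trivial here: sum_m |p_m| = 0 <= E(P).
```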

Definition 1. An infinite matrix \(F \equiv (f_{\ell,g})\) is called a \(\gamma\)-matrix if it satisfies the following conditions (see [3]): \[\sum\limits_{g=1}^\infty |f_{\ell, g} - f_{\ell, g+1}| \le K, \quad \forall \ell \ge 1,\] \[f_{\ell, g} \to 1 \text{ as } \ell \to \infty \text{ for all } g.\]
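
For instance (a sketch, with a finite truncation again standing in for the infinite matrix), the triangular matrix \(f_{\ell g} = 1\) for \(g \le \ell\) and \(0\) otherwise, which reappears in equation (9) below, satisfies both conditions with \(K = 1\):

```python
# Sketch: checking the two gamma-matrix conditions of Definition 1
# for the triangular matrix f_{lg} = 1 (g <= l), 0 (g > l).
import numpy as np

n = 100
F = np.tril(np.ones((n, n)))

# Condition 1: row variation sum_g |f_{l,g} - f_{l,g+1}| is bounded (K = 1).
print(np.abs(np.diff(F, axis=1)).sum(axis=1).max())   # -> 1.0

# Condition 2: f_{lg} -> 1 as l -> infinity for every fixed g.
print(F[-1, :5])                                      # -> [1. 1. 1. 1. 1.]
```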

Theorem 1. The elements of a \(\gamma\)-matrix are bounded.

Proof. From Definition 1, \[\begin{aligned} \left|f_{\ell ,g} \right|&=\left|f_{\ell ,g} -f_{\ell ,1} +f_{\ell ,1} \right|\\ &\le \left|f_{\ell ,g} -f_{\ell ,1} \right|+\left|f_{\ell ,1} \right|\\ &\le \sum\limits_{i=1}^{g-1}\left|f_{\ell ,i} -f_{\ell ,i+1} \right|+\left|f_{\ell ,1} \right|\\ &\le K+\left|f_{\ell ,1} \right|\le D, \end{aligned}\] where the last step uses the fact that \(f_{\ell,1} \to 1\) as \(\ell \to \infty\), so that \(\left\{f_{\ell,1}\right\}\) is bounded. ◻

Theorem 2. If \(F^{\left(j\right)}\), \(j = 0, 1, \ldots, q\), are \(\gamma\)-matrices and \(a=\sum\limits _{j=0}^{q}\varepsilon _{j} \ne 0\), then the matrix \(B\equiv \frac{1}{a} \sum\limits _{j=0}^{q}\varepsilon _{j} F^{\left(j\right)}\) is a \(\gamma\)-matrix.

Proof. Write \(F^{(j)} = (f_{\ell,g}^{(j)})\) and \(B = (b_{\ell,g})\), so that \(b_{\ell,g} = \frac{1}{a}\sum\limits_{j=0}^{q} \varepsilon_j f_{\ell,g}^{(j)}\). Since each \(F^{(j)}\) is a \(\gamma\)-matrix, Definition 1 supplies constants \(K_j\) with \[\sum\limits_{g=1}^{\infty} \left| f_{\ell, g}^{(j)} - f_{\ell, g+1}^{(j)} \right| \leq K_j \quad \forall \ell \ge 1.\]

For the differences of consecutive terms of \(B\) we therefore obtain \[\sum\limits_{g=1}^{\infty} \left| b_{\ell, g} - b_{\ell, g+1} \right| \leq \frac{1}{|a|} \sum\limits_{j=0}^{q} |\varepsilon_j| \sum\limits_{g=1}^{\infty} \left| f_{\ell, g}^{(j)} - f_{\ell, g+1}^{(j)} \right| \leq \frac{1}{|a|} \sum\limits_{j=0}^{q} |\varepsilon_j| K_j,\] a bound independent of \(\ell\), so the first condition of Definition 1 holds for \(B\).

For the second condition, \(f_{\ell,g}^{(j)} \to 1\) as \(\ell \to \infty\) for each \(j\) and \(g\), and the sum over \(j\) is finite, so \[\lim_{\ell \to \infty} b_{\ell,g} = \frac{1}{a} \sum\limits_{j=0}^{q} \varepsilon_j = 1.\]

Therefore \(B\) satisfies both conditions of Definition 1 and is a \(\gamma\)-matrix. ◻
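
A small numerical sketch of Theorem 2 follows; the coefficients \(\varepsilon_j\) and the two sample \(\gamma\)-matrices are illustrative choices, not taken from the paper:

```python
# Sketch of Theorem 2: a weighted mean B = (1/a) sum_j eps_j F^(j)
# of gamma-matrices is again a gamma-matrix.
import numpy as np

n = 100
F0 = np.tril(np.ones((n, n)))                         # gamma: 1 for g <= l
F1 = (1 - 0.5 ** np.arange(1, n + 1))[:, None] * np.ones((1, n))
                                                      # gamma: constant rows -> 1
eps = np.array([2.0, -0.5])
a = eps.sum()                                         # a = 1.5 != 0
B = (eps[0] * F0 + eps[1] * F1) / a

print(np.abs(np.diff(B, axis=1)).sum(axis=1).max())   # bounded row variation (4/3)
print(B[-1, :5])                                      # rows tend to 1
```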

Definition 2. The matrix \(B\) of Theorem 2 is called the \(\lambda\)-mean of the matrices \(F^{\left(j\right)}\).

Theorem 3. Consider a sequence of \(\gamma\)-matrices \(\{F^{(j)}\}\), where each \(F^{(j)} = (f_{\ell,g}^{(j)})\). The \(\lambda\)-mean of these matrices remains a \(\gamma\)-matrix under the following conditions:

  1. For all indices \(j\), \(\ell\), and \(g\), the absolute value of each matrix element is bounded by a constant \(D\), i.e., \[\left|f_{\ell,g}^{(j)}\right| \leq D,\] and the sum of the absolute differences between consecutive elements in each row is uniformly bounded by a constant \(K\), i.e., \[\sum\limits_{g=1}^{\infty} \left|f_{\ell,g}^{(j)} - f_{\ell,g+1}^{(j)}\right| \leq K \quad \text{for all } j \text{ and } \ell.\]

  2. The coefficients \(\{\varepsilon_j\}\) satisfy \[\sum\limits_{j=0}^{\infty} \left|\varepsilon_j\right| = E,\] where \(E\) is finite, and \[\sum\limits_{j=0}^{\infty} \varepsilon_j = a \neq 0.\]

Under these conditions, the \(\lambda\)-mean of the sequence of \(\gamma\)-matrices is itself a \(\gamma\)-matrix.

Proof. From Definition 2, \(b_{\ell,g} = \frac{1}{a}\sum\limits_{j=0}^{\infty} \varepsilon_j f_{\ell,g}^{(j)}\), and the elements involved satisfy \[\left|a\right| \left|b_{\ell,g}\right| \leq \sum\limits_{j=0}^{\infty} \left|\varepsilon_j\right| \left|f_{\ell,g}^{(j)}\right| \leq D E,\] so the defining series converges absolutely and the elements of \(B\) are bounded.

For the sum of the absolute differences of consecutive \(b_{\ell, g}\) terms, \[\left|a\right| \sum\limits_{g=1}^{\infty} \left|b_{\ell,g} - b_{\ell,g+1}\right| \leq \sum\limits_{j=0}^{\infty} \left|\varepsilon_j\right| \sum\limits_{g=1}^{\infty} \left|f_{\ell,g}^{(j)} - f_{\ell,g+1}^{(j)}\right| \leq E K,\] where the final inequality uses condition 1, which bounds the inner sum by \(K\) uniformly in \(j\) and \(\ell\).

Finally, since \(\left|\varepsilon_j f_{\ell,g}^{(j)}\right| \leq D \left|\varepsilon_j\right|\) and \(\sum \left|\varepsilon_j\right|\) converges, the series \(\sum\limits_{j=0}^{\infty} \varepsilon_j f_{\ell,g}^{(j)}\) converges uniformly in \(\ell\), so the limit may be taken term by term: \[\lim_{\ell \to \infty} b_{\ell,g} = \frac{1}{a} \sum\limits_{j=0}^{\infty} \varepsilon_j \lim_{\ell \to \infty} f_{\ell,g}^{(j)} = \frac{1}{a} \sum\limits_{j=0}^{\infty} \varepsilon_j = 1.\] Hence \(B\) satisfies both conditions of Definition 1, which concludes the proof. ◻

Definition 3. Let \(P=\left(p_{\ell ,g} \right)\) and \(R=\left(r_{\ell ,g} \right)\) be two matrices. Then the matrix \(S=\left(s_{\ell ,g} \right)=\left(p_{\ell ,g} r_{\ell ,g} \right)\) is called the term-by-term product of \(P\) and \(R\).

Theorem 4. The elementwise product of two \(\gamma\)-matrices retains the \(\gamma\)-matrix properties.

Proof. Consider two \(\gamma\)-matrices, \(\mathbf{P}\) and \(\mathbf{R}\), whose elements \(p_{\ell,g}\) and \(r_{\ell,g}\) satisfy the \(\gamma\)-matrix conditions. Define \(S_{\ell,g} = p_{\ell,g} r_{\ell,g}\) as the elementwise product matrix. We need to show that \(\mathbf{S}\) is also a \(\gamma\)-matrix.

First, examine the difference between adjacent elements in the product matrix: \[S_{\ell, g} - S_{\ell, g+1} = p_{\ell, g}(r_{\ell, g} - r_{\ell, g+1}) + r_{\ell, g+1}(p_{\ell, g} - p_{\ell, g+1}).\] Using the triangle inequality, we can bound the sum of absolute differences: \[\sum\limits_{g=1}^{\infty} \left|S_{\ell, g} - S_{\ell, g+1}\right| \leq \sum\limits_{g=1}^{\infty} \left|p_{\ell, g}\right| \left|r_{\ell, g} - r_{\ell, g+1}\right| + \sum\limits_{g=1}^{\infty} \left|r_{\ell, g+1}\right| \left|p_{\ell, g} - p_{\ell, g+1}\right|.\] By the properties of \(\gamma\)-matrices, we know: \[\sum\limits_{g=1}^{\infty} \left|r_{\ell, g} - r_{\ell, g+1}\right| \leq D_2 \quad \text{and} \quad \sum\limits_{g=1}^{\infty} \left|p_{\ell, g} - p_{\ell, g+1}\right| \leq D_1,\] where \(D_1\) and \(D_2\) are constants, and, by Theorem 1, \(\left|p_{\ell, g}\right|\) and \(\left|r_{\ell, g+1}\right|\) are bounded by constants \(K_1\) and \(K_2\) respectively.

Thus, the inequality becomes: \[\sum\limits_{g=1}^{\infty} \left|S_{\ell, g} - S_{\ell, g+1}\right| \leq K_1 D_2 + K_2 D_1.\] This shows that the series of absolute differences for \(\mathbf{S}\) is uniformly bounded, satisfying one of the key conditions of a \(\gamma\)-matrix.

Furthermore, since \(p_{\ell, g}\) and \(r_{\ell, g}\) both converge to 1 as \(\ell \to \infty\), their product \(S_{\ell, g} = p_{\ell, g} r_{\ell, g}\) also converges to 1 as \(\ell \to \infty\).

Therefore, \(\mathbf{S}\) satisfies the conditions to be a \(\gamma\)-matrix. ◻
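
The following sketch checks Theorem 4 numerically; the two sample \(\gamma\)-matrices are illustrative, and a finite truncation again stands in for the infinite matrices:

```python
# Sketch of Theorem 4: the term-by-term product of two gamma-matrices
# again satisfies both gamma-conditions.
import numpy as np

n = 100
P = np.tril(np.ones((n, n)))                          # gamma-matrix
R = (1 - 0.5 ** np.arange(1, n + 1))[:, None] * np.ones((1, n))
                                                      # gamma-matrix
S = P * R                                             # elementwise product (Definition 3)

print(np.abs(np.diff(S, axis=1)).sum(axis=1).max())   # <= K1*D2 + K2*D1
print(S[-1, :5])                                      # entries tend to 1
```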

Theorem 5. Let \(f_{\ell g} = \sum\limits_{i=1}^\ell m_{ig}\) for all \(\ell, g \geq 1\). Then the matrix \(F = (f_{\ell g})\) is a \(\gamma_A\)-matrix if and only if the matrix \(M = (m_{\ell g})\) is an \(\alpha_A\)-matrix.

Proof. Assume \(F\) is a \(\gamma_A\)-matrix. According to Sahani and Jha [20], we have: \[\sum\limits_{j=1}^\infty m_{jg} = \lim_{\ell \to \infty} \sum\limits_{j=1}^\ell m_{jg} = \lim_{\ell \to \infty} f_{\ell g} = 1.\tag{4}\] Also, for \(\ell > 1\) the absolute value of each \(m_{\ell g}\) can be expressed as \[|m_{\ell g}| = |f_{\ell g} - f_{\ell-1,g}| \leq |f_{\ell g}| + |f_{\ell-1,g}|,\] which is bounded, since the elements of \(F\) are bounded. Summing over all \(\ell\), we have: \[\sum\limits_{\ell=1}^\infty |m_{\ell g}| = |f_{1 g}| + \sum\limits_{\ell=2}^\infty |f_{\ell g} - f_{\ell-1,g}| < D(F),\] satisfying the conditions necessary for \(M\) to be an \(\alpha_A\)-matrix.

Conversely, suppose \(M\) is an \(\alpha_A\)-matrix. By definition, we know: \[m_{\ell g} = f_{\ell g} - f_{\ell-1,g} \quad (\ell > 1, g \geq 1),\] and, for \(\ell = 1\): \[m_{1g} = f_{1g} \quad (g \geq 1).\] To prove \(F\) is a \(\gamma_A\)-matrix, consider: \[\sum\limits_{\ell=1}^\infty |m_{\ell g}| < D(M) \implies \sum\limits_{\ell=2}^\infty |f_{\ell g} - f_{\ell-1,g}| < D(M),\] and also, by employing [20]: \[|f_{\ell g}| = |m_{1g} + m_{2g} + \ldots + m_{\ell g}| \leq \sum\limits_{i=1}^\ell |m_{ig}| < D(M).\] Thus, we conclude: \[\lim_{\ell \to \infty} f_{\ell g} = \sum\limits_{j=1}^\infty m_{jg} = 1,\] demonstrating that \(F\) is indeed a \(\gamma_A\)-matrix. ◻
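
Here is a numerical sketch of the correspondence in Theorem 5; the sample \(\alpha_A\)-style matrix \(M\) is an illustrative choice, not from the paper:

```python
# Sketch of Theorem 5: F = column-wise partial sums of M.  M has
# absolutely bounded columns summing to 1; F then has columns of
# bounded variation tending to 1.
import numpy as np

n = 100
M = np.zeros((n, n))
M[0, :] = 2.0                          # column sums: 2 - 1 = 1
M[1, :] = -1.0
F = np.cumsum(M, axis=0)               # f_{lg} = m_{1g} + ... + m_{lg}

print(np.abs(M).sum(axis=0).max())     # columns of |M| bounded (= 3 here)
print(F[-1, :5])                       # f_{lg} -> 1 as l grows
print(np.abs(np.diff(F, axis=0)).sum(axis=0).max())   # column variation of F (= 1 here)
```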

Theorem 6. The product \(S = FM\) of a \(\gamma_A\)-matrix \(F\) and an \(\alpha_A\)-matrix \(M\) exists and is a \(\gamma_A\)-matrix.

Proof. Consider \(F = (f_{\ell i})\) as a \(\gamma_A\)-matrix, and let \(\sum\limits v_i\) be a convergent series. Then the \(F\)-transform of \(\sum\limits v_i\), given by \(\sum\limits_{i=1}^\infty f_{\ell i} v_i\), exists for all \(\ell\) and forms a sequence of bounded variation (according to [18] and the preceding notes).

Select \(v_i = m_{ig}\), where \(m_{ig}\) are the elements of the \(\alpha_A\)-matrix \(M\). By the convergence properties and [18], it follows that \[\lim_{\ell \to \infty} \sum\limits_{i=1}^\infty f_{\ell i} m_{ig} = \sum\limits_{i=1}^\infty m_{ig} = 1\] for all \(g\). Also, since \(M\) is an \(\alpha_A\)-matrix, we have \[\sum\limits_{i=1}^\infty |m_{ig}| < D(M).\] If we define \(S = FM\), then for the entries of \(S\), we find \[|s_{\ell g}| = \left| \sum\limits_{i=1}^\infty f_{\ell i} m_{ig} \right| \leq \sum\limits_{i=1}^\infty |f_{\ell i}||m_{ig}| < K_\ell(F) D(M),\] which implies \[|s_{\ell g}| < K_\ell(S).\tag{5}\] Hence, the product matrix \(S = (s_{\ell g})\) exists for all \(\ell\) and \(g\), and \[\lim_{\ell \to \infty} s_{\ell g} = 1.\tag{6}\] Moreover, interchanging the order of summation, \[\sum\limits_{\ell=2}^\infty |s_{\ell g} - s_{\ell-1,g}| = \sum\limits_{\ell=2}^\infty \left|\sum\limits_{i=1}^\infty (f_{\ell i} - f_{\ell-1,i}) m_{ig}\right| \leq \sum\limits_{i=1}^\infty |m_{ig}| \sum\limits_{\ell=2}^\infty |f_{\ell i} - f_{\ell-1,i}| < D(M) D(F).\tag{7}\] Therefore, the conditions of a \(\gamma_A\)-matrix are satisfied by \(S = FM\), concluding that \(S\) is a \(\gamma_A\)-matrix. ◻

Theorem 7. The product \(MF\) of an \(\alpha_A\)-matrix \(M\) and a \(\gamma_A\)-matrix \(F\) may not exist.

Proof. Let us define the matrices \(M\) and \(F\) as follows: \[m_{\ell g} = \begin{cases} 1 & \text{for } \ell = 1, \; g \geq 1, \\ 0 & \text{for } \ell > 1, \end{cases}\] and \[f_{\ell g} = 1 \quad \forall \, \ell, g \geq 1.\tag{8}\] The matrices \(M\) and \(F\) so defined are, respectively, an \(\alpha_A\)-matrix and a \(\gamma_A\)-matrix.

Thus, the product \((FM)_{\ell g} = \sum\limits_{i=1}^{\infty} f_{\ell i} m_{ig} = f_{\ell 1} = 1\) exists and is precisely the \(\gamma_A\)-matrix \(F\) of equation (8). However, for the product \(MF\) we find, in the first row, \[(MF)_{1 g} = \sum\limits_{i=1}^{\infty} m_{1 i} f_{ig} = 1 + 1 + \cdots,\] which diverges; hence the product \(MF\) does not exist. ◻
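
The failure of \(MF\) can also be seen numerically; in this sketch, truncations of the first entry of \(MF\) grow without bound instead of converging:

```python
# Sketch of Theorem 7's counterexample: the first row of MF is the
# divergent series 1 + 1 + ..., so its truncations grow with n.
import numpy as np

for n in (10, 100, 1000):
    M = np.zeros((n, n))
    M[0, :] = 1.0                      # the alpha_A-matrix of the proof
    F = np.ones((n, n))                # the gamma_A-matrix of the proof
    print(n, (M @ F)[0, 0])            # -> 10.0, 100.0, 1000.0: no limit
```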

Theorem 8. The product matrix \(S = FM\) exists and is a \(\gamma_A\)-matrix for every \(\gamma_A\)-matrix \(F\) if and only if \(M\) is an \(\alpha_A\)-matrix.

Proof. Sufficiency is Theorem 6. For necessity, we consider the \(\gamma_A\)-matrix \(F\) defined by \[f_{\ell g} = \begin{cases} 1 & \text{for } g \le \ell, \\ 0 & \text{for } g > \ell. \end{cases}\tag{9}\] Then the product matrix \(S = FM\) has entries \[s_{\ell g} =\sum\limits _{i=1}^{\infty }f_{\ell i} m_{ig} =\sum\limits _{i=1}^{\ell }m_{ig}.\tag{10}\] Hence, by Theorem 5, the matrix \(S=\left(s_{\ell g} \right)\) in equation (10) is a \(\gamma_A\)-matrix only if \(M\) is an \(\alpha_A\)-matrix. ◻

Theorem 9. The product of two \(\alpha_A\)-matrices is an \(\alpha_A\)-matrix.

Proof. Let \(P\) and \(Q\) be two \(\alpha_A\)-matrices. Define a new matrix \(F = (f_{\ell g})\) as follows: \[f_{\ell g} = p_{1g} + p_{2g} + \ldots + p_{\ell g} \quad (\ell, g \geq 1).\tag{11}\] By Theorem 5, \((f_{\ell g})\) is a \(\gamma_A\)-matrix, and by Theorem 6, the product \((S)_{\ell g} = (FQ)_{\ell g}\) is a \(\gamma_A\)-matrix.

Now, define: \[e_{\ell g} = s_{\ell g} - s_{\ell - 1, g} \quad (\ell > 1, g \geq 1),\tag{12}\] with \(e_{1 g} = s_{1 g}\). Since \(S\) is a \(\gamma_A\)-matrix, Theorem 5 shows that \(E = (e_{\ell g})\) is an \(\alpha_A\)-matrix, and \[e_{\ell g} = \sum\limits_{i=1}^{\infty} f_{\ell i} q_{ig} - \sum\limits_{i=1}^{\infty} f_{\ell - 1, i} q_{ig} = \sum\limits_{i=1}^{\infty} (f_{\ell i} - f_{\ell - 1, i}) \cdot q_{ig} = \sum\limits_{i=1}^{\infty} p_{\ell i} \cdot q_{ig}.\] Hence, \((E)_{\ell g} = (PQ)_{\ell g}\) by the definition (11) of the matrix \(F\), so the product \(PQ\) is an \(\alpha_A\)-matrix. ◻
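
A numerical sketch of Theorem 9 follows; the two sample \(\alpha_A\)-style matrices are illustrative choices, not from the paper:

```python
# Sketch of Theorem 9: the ordinary product E = PQ of two matrices
# with absolutely bounded columns summing to 1 again has absolutely
# bounded columns summing to 1.
import numpy as np

n = 200
P = np.zeros((n, n))
P[0, :] = 0.5
P[1, :] = 0.5                          # columns of P sum to 1
Q = np.zeros((n, n))
Q[0, :] = 2.0
Q[1, :] = -1.0                         # columns of Q sum to 1
E = P @ Q

print(np.abs(E).sum(axis=0).max())     # columns of |E| bounded
print(E.sum(axis=0)[:5])               # columns of E again sum to 1
```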

Theorem 10. The product of two \(\gamma_A\)-matrices is not necessarily a \(\gamma_A\)-matrix.

Proof. Consider the \(\gamma_A\)-matrix \(F = (f_{\ell g})\) defined in Eq. (9), and another \(\gamma_A\)-matrix \(S = (s_{\ell g})\) where: \[s_{\ell g} = 1 \quad \forall \, \ell, g \geq 1.\tag{13}\] Define the product matrix \(T = FS\) such that its elements are given by: \[t_{\ell g} = \sum\limits_{i=1}^\infty f_{\ell i} s_{ig}.\] Since \(s_{ig} = 1\) for all \(i\) and \(g\), and by (9) \(f_{\ell i} = 1\) precisely for \(i \le \ell\), the elements of \(T\) simplify to: \[t_{\ell g} = \sum\limits_{i=1}^\ell 1 = \ell.\] This leads to: \[\lim_{\ell \to \infty} t_{\ell g} = \infty,\] which does not conform to the definition of a \(\gamma_A\)-matrix, since \(t_{\ell g}\) must tend to a finite limit as \(\ell\) approaches infinity. Hence, we conclude that \(T\) is not a \(\gamma_A\)-matrix. ◻

3. Conclusion

The Silverman-Toeplitz theorem establishes necessary and sufficient conditions for a matrix transformation to be regular, ensuring that every convergent sequence is carried into a sequence converging to the same limit. Analogous results were established in the theorems of Carmichael, Perron, and Bosanquet.

In this note, we have shown that the matrix structures describing sequence-to-series operations which preserve convergence form a Banach algebra under a specific norm.

The known methods for summing divergent series are particular cases of sequence transformations using T-matrices or series transformations using \(\gamma\)-matrices. The utilization of \(\gamma\)-matrices offers several advantages:

  1. \(\gamma\)-matrices are defined by two conditions, whereas T-matrices are defined by three conditions.

  2. \(\gamma\)-matrices operate directly on the terms of the series, while T-matrices require the formation of partial sums.

  3. \(\gamma\)-matrices, as demonstrated by Dienes, possess greater generality, as each T-matrix corresponds to an equivalent \(\gamma\)-matrix, while there exist \(\gamma\)-matrices without an equivalent T-matrix.

It is not difficult to see why infinite matrices were among the earliest tools considered in the study of operators on function spaces. The initial spaces examined consisted of sets of infinite sequences of numbers, naturally viewed as generalizations of n-tuples. Since finite matrices correspond to natural linear operators on finite-dimensional spaces, it is reasonable to conceive of infinite matrices as analogous extensions, serving as natural linear operators on sequence spaces.

Conflict of Interests

The authors declare no conflict of interest.

Data Availability

All data required for this research is included within this paper.

Funding Information

No funding is available for this research.

References

  1. Ramanujan, M. S. (1956). Existence and classification of products of summability matrices. Proceedings of the Indian Academy of Science Section A, 44, 171-184.

  2. Vermes, P. (1949). Series to series transformation and analytic continuation by matrix methods. American Journal of Mathematics, 71, 541-562.

  3. Vermes, P. (1947). On γ-matrices and their application to the binomial series. Proceedings of the Edinburgh Mathematical Society, 8, 1-13.

  4. Aasma, A. (1994). Matrix transformations of summability field of normal regular matrix methods. Tallinna Tehnikaülikool, Toimetised, Matem. Füüs, 2, 3-10.

  5. Boos, J. (2000). Classical and modern methods in summability. Oxford University Press.

  6. Sahani, S. K., & Mishra, L. N. (2020). A certain study on Nörlund summability of series. Journal of Linear and Topological Algebra, 9(4), 311-321.

  7. Bataineh, A. H. A. (1999). Matrix transformations in sequence spaces (Thesis). Department of Mathematics, Aligarh Muslim University, Aligarh.

  8. Sahani, S. K., et al. (2022). On Nörlund summability of double Fourier series. Open Journal of Mathematical Sciences, 6(1), 99-107.

  9. Sahani, S. K., & Mishra, V. N. (2023). Degree of approximation of signals by Nörlund summability of double Fourier series. Mathematical Sciences and Applications E-Notes, 11(2), 80-88.

  10. Aasma, A. (2011). Factorable matrix transforms of summability domains of Cesàro matrices. International Journal of Contemporary Mathematical Sciences, 6(44), 2201-2206.

  11. Aasma, A., Dutta, H., & Natarajan, P. N. (2017). Matrix transformation of summability and absolute summability domains; Peyerimhoff’s method. In An introductory course in summability theory (pp. 113-130). doi: 10.1002/9781119397718.ch6

  12. Sarigöl, M. A. (2011). Matrix transformations on fields of absolute weighted mean summability. Studia Scientiarum Mathematicarum Hungarica, 48(3), 331-341.

  13. Jarrah, A. (2003). Ordinary absolute and strong summability and matrix transformations. Filomat, 17(17), 59-78; Malkowsky, E. (2017). On strong summability and convergence. Filomat, 31(11), 3095-3123.

  14. Li, J. (2000). Matrix transformations from absolutely convergent series to convergent sequences as general weighted mean summability methods. International Journal of Mathematics and Mathematical Sciences, 24(8), 533-538.

  15. Sarigöl, M. A. (2016). Spaces of series summable by Absolute Cesàro and Matrix operation. Communications in Mathematics and Application, 7(1), 11-22.

  16. Sahani, S. K., et al. (2022). On certain series to series transformation and analytic continuations by matrix methods. Nepal Journal of Mathematical Sciences, 3(1), 75-80.

  17. Sahani, S. K., Mishra, V. N., & Pahari, N. P. (2020). On the degree of approximation of a function by Nörlund means of its Fourier Laguerre series. Nepal Journal of Mathematical Sciences, 1, 65-70.

  18. Sahani, S. K., Mishra, V. N., & Pahari, N. P. (2021). Some problems in approximations of function (signals) in matrix summability of Legendre series. Nepal Journal of Mathematical Sciences, 2(1), 43-50.

  19. Ikard, T. E. (1970). Summability method sequence spaces and applications (Thesis). Faculty of Graduate College of the Oklahoma State University, May 1970.

  20. Sahani, S. K., & Jha, D. (2021). A certain studies on degree of approximation of functions by matrix transformation. The Mathematics Education, LV(2), 21-33.

  21. Sahani, S. K., Thakur, A., & Sahu, S. K. (2022). On a new application of summation of Jacobi series by B-method. Journal of Algebraic Statistics, 13(3), 3261-3268.

  22. Sahani, S. K., & Mishra, L. N. (2021). Degree of approximation of signals by Nörlund summability of derived Fourier series. The Nepali Math. Sc. Report, 38(2), 13-19.

  23. Vermes, P. (1946). Product of a T-matrix and a γ-matrix. Journal of the London Mathematical Society, 21, 129-134.