
\(s\)-convex set and \(s\)-convex functions on Heisenberg group

Chuanyang Li1, Peibiao Zhao1
1School of Mathematics and Statistics, Nanjing University of Science and Technology, Nanjing, China
Copyright © Chuanyang Li, Peibiao Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we give the definitions of \(s\)-convex sets and \(s\)-convex functions on the Heisenberg group, and establish some inequalities of Jensen's type for this class of mappings.

Keywords: s-convex set, s-convex function, Heisenberg group, Jensen’s discrete inequality

1. Introduction and main results

In convex analysis, convex sets and convex functions are core fundamental concepts, widely applied in optimization theory, partial differential equations, geometric analysis, and other fields. The study of convexity and generalized convexity is one of the important subjects in mathematical programming, and numerous generalizations of convex functions have proved useful for formulating suitable optimization problems (see [1-3]). Let \(\mathbb{X}\) be a real linear space (e.g., \(\mathbb{R}^n\)). A set \(C\subset \mathbb{X}\) is called a convex set if for any \(x_1,x_2\in C\) and any \(\lambda\in[0,1]\), the following holds: \[\begin{aligned} \lambda x_1+(1-\lambda)x_2\in C. \end{aligned}\]

The expression \(\lambda x_1+(1-\lambda)x_2\) is called the convex combination of \(x_1\) and \(x_2\); geometrically, it represents the line segment connecting the two points. The intuitive meaning of a convex set is therefore that the line segment connecting any two points of the set is entirely contained in the set. In addition, let \(C\subset\mathbb{X}\) be a convex set. A function \(f:C\to\mathbb{R}\) is called a convex function if for any \(x_1,x_2\in C\) and any \(\lambda\in[0,1]\), \[\begin{aligned} f\left(\lambda x_1+(1-\lambda)x_2\right)\le\lambda f(x_1)+(1-\lambda)f(x_2). \end{aligned}\]

If the inequality holds strictly for any \(x_1\neq x_2\) and \(\lambda\in(0,1)\), then \(f\) is called a strictly convex function; if \(-f\) is a convex function, then \(f\) is called a concave function.

The class of \(s\)-convex functions defined on the real numbers was introduced by Orlicz in [4] and used in the theory of Orlicz spaces. \(s\)-Orlicz convex sets and \(s\)-Orlicz convex mappings in linear spaces were introduced by Dragomir and Fitzpatrick in [5].

Definition 1. [5] Let \(\mathbb{X}\) be a linear space and \(s\in(0,\infty).\) The set \(K\subseteq \mathbb{X}\) will be called \(s\)-Orlicz convex in \(\mathbb{X}\) if the following condition is true: \(x,y\in K\) and \(\alpha,\beta>0\) with \(\alpha^s+\beta^s=1\) imply \(\alpha x+\beta y\in K\).

Definition 2. [5] The mapping \(f:K\rightarrow\mathbb{R}\) will be called \(s\)-Orlicz convex on \(K\) if for all \(x,y\in K\) and \(\alpha,\beta\geq0\) with \(\alpha^s+\beta^s=1\) one has the inequality \[\begin{aligned} f(\alpha x+\beta y)\leq \alpha^sf(x)+\beta^sf(y). \end{aligned}\]
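Definition 2 is easy to probe numerically. The following sketch is our own illustration, not from [5]: it samples admissible pairs \((\alpha,\beta)\) with \(\alpha^s+\beta^s=1\) and tests the inequality for \(f(x)=x^2\). For \(s\in(0,1]\) one expects the inequality to hold (since then \(\alpha+\beta\leq1\) and \(\alpha\leq\alpha^s\)), while for \(s>1\) random sampling quickly finds violations.

```python
import random

def s_orlicz_check(f, s, trials=10_000):
    """Numerically probe Definition 2: f(a*x + b*y) <= a^s f(x) + b^s f(y)
    whenever a, b >= 0 and a^s + b^s = 1."""
    for _ in range(trials):
        a = random.uniform(0.0, 1.0)
        b = (1.0 - a**s) ** (1.0 / s)   # enforce a^s + b^s = 1
        x, y = random.uniform(-10, 10), random.uniform(-10, 10)
        if f(a * x + b * y) > a**s * f(x) + b**s * f(y) + 1e-9:
            return False                # found a violation
    return True

# f(x) = x^2 appears s-Orlicz convex for s <= 1, but not for s = 2:
print(s_orlicz_check(lambda x: x * x, 0.5))   # -> True
print(s_orlicz_check(lambda x: x * x, 2.0))   # -> False (violations found)
```

Such a random search can of course only refute, never prove, the inequality; it is merely a sanity check on the definition.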

Similarly, in this paper we introduce the class of \(s\)-convex functions defined on \(s\)-convex sets in the Heisenberg group \(\mathbb{H}^n\), and obtain some discrete inequalities of Jensen's type. In fact, we discuss \(s\)-convex subsets, \(s\)-convex functions and related results on homogeneous sub-semigroups of the Heisenberg group.

Definition 3. Let \(s\in(0,\infty)\). A set \(\Omega\subseteq\mathbb{H}^n\) is called an \(s\)-convex set in \(\mathbb{H}^n\) if the following condition holds: \(x,y\in\Omega\) and \(\alpha,\beta \geq 0\) with \(\alpha^s+\beta^s=1\) imply \(\delta_\alpha (x) \cdot \delta_\beta (y)\in \Omega\).

Example 1. The sub-semigroup \(\Omega=\{(z,t):z\in\mathbb{R}^n,\ t=\frac{1}{4}|z|^2\}\subset \mathbb{H}^n\), where \(|z|^2=x_1^2+x_2^2+\cdots +x_n^2\) is the squared modulus of a real vector.

Definition 4. Let \(s\in(0,\infty)\) and let \(\Omega\subseteq\mathbb{H}^n\) be an \(s\)-convex set. The mapping \(f:\Omega\rightarrow \mathbb{R}\) is called \(s\)-convex on \(\Omega\) if for all \(x,y\in \Omega\) and \(\alpha, \beta \geq 0\) with \(\alpha^s+\beta^s=1\), one has the inequality \[\begin{aligned} f(\delta_\alpha (x) \cdot \delta_\beta (y))\leq \alpha^s f(x)+\beta^sf(y). \end{aligned}\]

Here, \(\cdot\) is the group law and \(\delta_\alpha\) denotes the homogeneous structure on the Heisenberg group provided by the parabolic dilations; see §2 for details.
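For a concrete (hypothetical) instance of Definition 4, consider \(f(z,t)=|z|^2\) on \(\mathbb{H}^1\), which ignores the \(t\)-component; for \(s\in(0,1]\) this mapping appears to satisfy the defining inequality. A minimal numeric sketch, our own example rather than one from the paper, using the group law and dilations of §2 with \(s=1/2\):

```python
import random

def mul(p, q):
    """H^1 group law: (z,t)*(w,s) = (z+w, t+s+(1/2)Im<z, conj(w)>)."""
    z, t = p
    w, s_ = q
    return (z + w, t + s_ + 0.5 * (z * w.conjugate()).imag)

def dil(lam, p):
    """Parabolic dilation delta_lam(z,t) = (lam*z, lam^2*t)."""
    return (lam * p[0], lam ** 2 * p[1])

def f(p):
    return abs(p[0]) ** 2   # candidate s-convex function; ignores t

s = 0.5
ok = True
for _ in range(10_000):
    a = random.uniform(0, 1)
    b = (1 - a ** s) ** (1 / s)   # a^s + b^s = 1
    x = (complex(random.gauss(0, 3), random.gauss(0, 3)), random.gauss(0, 3))
    y = (complex(random.gauss(0, 3), random.gauss(0, 3)), random.gauss(0, 3))
    ok = ok and f(mul(dil(a, x), dil(b, y))) <= a ** s * f(x) + b ** s * f(y) + 1e-9
print(ok)   # -> True
```

The check passes because \(|\alpha z+\beta w|\leq\alpha|z|+\beta|w|\) and, for \(s\leq1\), \(\alpha\leq\alpha^s\) whenever \(\alpha^s+\beta^s=1\).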

Remark 1. In Definition 3, if \(s=1\), we say that \(\Omega\) is convex in \(\mathbb{H}^n\); in Definition 4, if \(s=1\), \(f\) is called a convex function on \(\Omega\) in \(\mathbb{H}^n\).

From Definition 3, it is not difficult to obtain the following properties for special cases.

Proposition 1. Any convex subset \(\Omega\subseteq\mathbb{H}^n\) has the following properties:

(1) \(x,y\in \Omega\) imply \(x\cdot y\in \Omega\).

(2) \(x\in\Omega, \alpha\geq 0\) imply \(\delta_\alpha(x)\in\Omega\).

Theorem 1. Let \(s\in(0,\infty)\). For a given subset \(\Omega\subseteq\mathbb{H}^n\), the following statements are equivalent:

(1)  \(\Omega\) is \(s\)-convex.

(2)  For every \(x_1, \cdots, x_n \in \Omega\) and \(\alpha_1, \cdots, \alpha_n \geq 0\) with \(\alpha_1^s+\cdots+\alpha_n^s=1\), we have that \(\delta_{\alpha_1}(x_1)\cdot~\ldots~\cdot\delta_{\alpha_n}(x_n)\in\Omega.\)

Theorem 2. Let \(s\in(0,\infty)\) and let \(\Omega\subseteq \mathbb{H}^n\) be a nonempty subset. Denote \[co_s(\Omega)=\bigg\{\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i):\alpha_i\geq 0,\ \sum\limits_{i=1}^n\alpha_i^s=1,\ x_i\in\Omega,\ n\geq2\bigg\}.\]

Then \(co_s(\Omega)\) is \(s\)-convex and will be called the \(s\)-convex hull of \(\Omega\). Here \(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)=\delta_{\alpha_1}(x_1)\cdot \delta_{\alpha_2}(x_2)\cdot\ldots\cdot\delta_{\alpha_n}(x_n)\).

Proposition 2. Let \(f:\Omega\rightarrow \mathbb{R}\) be an \(s\)-convex mapping on the \(s\)-convex subset \(\Omega\) and let \(\xi\in\mathbb{R}\) be such that \(f^\varepsilon(\xi)=\{x\in\Omega:f(x)\leq\xi\}\) is nonempty. Then \(f^\varepsilon(\xi)\) is an \(s\)-convex subset of \(\Omega\).

Theorem 3. Let \(\Omega\subseteq\mathbb{H}^n\) be an \(s\)-convex set and \(f:\Omega\rightarrow\mathbb{R}\) a mapping defined on \(\Omega\). The following statements are equivalent:

(1)  \(f\) is an \(s\)-convex function on \(\Omega\).

(2)  For every \(x_1,\cdots,x_n\in\Omega\) and every \(\alpha_i\geq0\) such that \(\sum\limits_{i=1}^n\alpha_i^s=1\), one has the inequality

\[\begin{aligned} f\left(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)\right)\leq\sum\limits_{i=1}^n\alpha_i^sf(x_i), \end{aligned}\] for all \(n\geq2\).

Theorem 4. Let \(f:\Omega\subseteq\mathbb{H}^n\rightarrow\mathbb{R}\) be an \(s\)-convex map on the \(s\)-convex set \(\Omega\) and \(\alpha_i\geq0\) with \(\sum\limits_{i=1}^n\alpha_i^s=1\). Let \(x_{ij}\in\Omega, 1\leq i,j\leq n\). Then we have the inequalities \[\begin{aligned} f\left(\prod_{i,j=1}^n\cdot\delta_{\alpha_i\alpha_j}(x_{ij})\right)\leq \min\{A,B\}\leq \max\{A,B\}\leq\sum\limits_{i,j=1}^n\alpha_i^s\alpha_j^sf(x_{ij}), \end{aligned}\] where \(A=\sum\limits_{i=1}^n\alpha_i^sf\left(\prod_{j=1}^n\cdot\delta_{\alpha_j}(x_{ij})\right)\) and \(B=\sum\limits_{j=1}^n\alpha_j^sf\left(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_{ij})\right)\).

2. Preliminary

In this section, we give a brief review of some relevant notions and terminology.

2.1. Heisenberg group \(\mathbb{H}^n\)

Here, we list some facts about the Heisenberg group \(\mathbb{H}^n\) that are required for this article (see, e.g., [6]).

The Heisenberg group \(\mathbb{H}^n\) is the Lie group \((\mathbb{R}^{2n+1},\cdot)\), where we identify \(\mathbb{R}^{2n+1}\equiv \mathbb{C}^n\times\mathbb{R}\). Writing points as \(x=(z,t)\) with \(z\in \mathbb{C}^n\) and \(t\in\mathbb{R}\), the group law \(\cdot\) takes the form \[\begin{aligned} x\cdot y=(z,t) \cdot (w,s)=(z+w, t+s+\frac{1}{2} \text{ Im }\langle z,\overline{w}\rangle), \end{aligned}\] where \(\langle z,\overline{w}\rangle=\sum\limits_{i=1}^nz_i\overline{w_i}\). Here \(x=(z_1,\cdots,z_n,t)\) and \(y=(w_1,\cdots,w_n,s)\).

The identity element is \(e=(0,0)\in\mathbb{H}^n\) with \(0\in\mathbb{C}^n\), the inverse of \(x\) is \(x^{-1}=(-z_1,\cdots,-z_n,-t)\), and the center of the group is \(\{(z_1,\cdots,z_n,t)\in\mathbb{H}^n:z_1=\cdots=z_n=0\}\).

Moreover, let \(\Omega\subseteq\mathbb{H}^n\) and consider the following conditions: for any \(y_1=(z^1,t^1)\in\Omega\) and \(y_2=(z^2,t^2)\in\Omega\),

(1) \(y_1\cdot y_2=\left(z^1+z^2,\ t^1+t^2+\frac{1}{2} {\text{ Im }}\langle z^1,\overline{z^2}\rangle\right)\in\Omega\);

(2) \(\delta_\alpha (x)\in \Omega\) for any \(x\in\Omega\) and \(\alpha\geq0\).

We call \(\Omega\) a sub-semigroup of \(\mathbb{H}^n\) when \(\Omega\) satisfies (1). If \(\Omega\) satisfies both (1) and (2), we call \(\Omega\) a homogeneous sub-semigroup of \(\mathbb{H}^n\).

The left invariant translates of the canonical basis at the identity are given by the vector fields \[\begin{aligned} X_i=\frac{\partial}{\partial x_i}-\frac{1}{2}x_{i+n}\frac{\partial}{\partial t},\quad X_{i+n}=\frac{\partial}{\partial x_{i+n}}+\frac{1}{2}x_{i}\frac{\partial}{\partial t},\quad X_{2n+1}=\frac{\partial}{\partial t}, \end{aligned}\] where \(i=1,\cdots,n\). The first \(2n\) vector fields span the horizontal distribution in \(\mathbb{H}^n\). Left translation by \(x\in\mathbb{H}^n\) is the mapping \(L_x:L_x(\tilde{x})=x\cdot\tilde{x}\).

For any \(\lambda>0\), the mapping \(\delta_\lambda:\delta_\lambda(z_1,\cdots,z_n,t)=(\lambda z_1,\cdots,\lambda z_n,\lambda^2t)\) is called a dilation. From the group law and the dilation rule, we easily obtain \[\begin{aligned} \delta_\lambda(x\cdot y)=\delta_\lambda(x)\cdot\delta_\lambda(y). \end{aligned}\] In addition, we adopt the convention \(\delta_\lambda\cdot\delta_\mu=\delta_{\lambda\mu}\).
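The group law and dilations above can be verified numerically; a small sketch on \(\mathbb{H}^1\) checking that \(\delta_\lambda\) is an automorphism, together with associativity and the inverse formula:

```python
import random

def mul(p, q):
    """H^1 group law with the (1/2) Im<z, conj(w)> convention of this section."""
    z, t = p
    w, s = q
    return (z + w, t + s + 0.5 * (z * w.conjugate()).imag)

def dil(lam, p):
    """Parabolic dilation: delta_lam(z, t) = (lam*z, lam^2*t)."""
    return (lam * p[0], lam ** 2 * p[1])

def rand_pt():
    return (complex(random.gauss(0, 1), random.gauss(0, 1)), random.gauss(0, 1))

def close(p, q, eps=1e-9):
    return abs(p[0] - q[0]) < eps and abs(p[1] - q[1]) < eps

for _ in range(1000):
    x, y, z = rand_pt(), rand_pt(), rand_pt()
    lam = random.uniform(0.1, 5)
    assert close(dil(lam, mul(x, y)), mul(dil(lam, x), dil(lam, y)))  # automorphism
    assert close(mul(mul(x, y), z), mul(x, mul(y, z)))                # associativity
    assert close(mul(x, (-x[0], -x[1])), (0j, 0.0))                   # inverse formula
print("all identities verified")
```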

3. Proofs of the theorems and propositions

Proof of Proposition 1. Let \(x=(z,t),y=(w,s)\in\Omega\). We have \[\delta_\alpha(x)\cdot\delta_\beta(y)=\left(\alpha z+\beta w,\;\alpha^2 t+\beta^2 s+\tfrac{1}{2}\alpha\beta\operatorname{Im}\langle z,\overline{w} \rangle\right),\] and \[x\cdot y=\left(z+w,\;t+s+\tfrac{1}{2}\operatorname{Im}\langle z,\overline{w}\rangle\right).\]

For \(\delta_\alpha(x)\cdot\delta_\beta(y)=x\cdot y\) to hold for all \(x,y\), we would need \(\alpha z+\beta w=z+w\) for all \(z,w\in\mathbb{C}^n\), and \(\alpha^2 t+\beta^2s+\tfrac{1}{2}\alpha\beta\operatorname{Im}\langle z,\overline{w}\rangle=t+s+\tfrac{1}{2}\operatorname{Im}\langle z,\overline{w}\rangle\) for all \(t,s\in\mathbb{R}\).

Since the complex component equation must hold for all \(z,w\in\mathbb{C}^n\), we require \(\alpha=\beta=1\). Substituting \(\alpha=\beta=1\) into the condition \(\alpha^s+\beta^s=1\) gives \(1^s+1^s=2=1\), a contradiction for any \(s\in(0,\infty)\). Thus there is no pair \((\alpha,\beta)\) satisfying both \(\alpha^s+\beta^s=1\) and \(\delta_\alpha(x)\cdot\delta_\beta(y)=x\cdot y\) for all \(x,y\in\Omega\), and hence \(x\cdot y \in\Omega\) does not hold in general for an \(s\)-convex set \(\Omega\subseteq\mathbb{H}^n\).

However, taking \(\alpha=1\) and \(\beta=0\) (or \(\alpha=0\) and \(\beta=1\)) in Definition 3 directly leads to conclusion (1); the trivial case \(x=y=0\) also works.

In addition, for any \(x\in\Omega\), taking \(\alpha=1,\beta=0\), we directly obtain conclusion (2) from Definition 3. ◻

Proof of Theorem 1. (1)\(\rightarrow\)(2) We prove this by induction over \(n\in\mathbb{N}\), \(n\geq 2\). For \(n=2\), the statement follows from Definition 3. Suppose that the statement holds for all \(2\leq k\leq n-1\). Let \(x_1, \cdots, x_n \in \Omega\) and \(\alpha_1, \cdots, \alpha_n \geq 0\) with \(\alpha_1^s+\cdots+\alpha_n^s=1\). If \(\alpha_1=\cdots=\alpha_{n-1}=0\), then \(\alpha_n=1\) and thus \(\delta_{\alpha_1}(x_1)\cdot\ldots\cdot\delta_{\alpha_n}(x_n)=\delta_{\alpha_n}(x_n)\in\Omega.\)

Assume that \(\alpha_1^s+\cdots+\alpha_{n-1}^s>0\), and consider \[\begin{aligned} \beta_1=\frac{\alpha_1}{(\alpha_1^s+\cdots+\alpha_{n-1}^s)^{\frac{1}{s}}}, \ \ \ \cdots ,\ \ \ \beta_{n-1}=\frac{\alpha_{n-1}}{(\alpha_1^s+\cdots+\alpha_{n-1}^s)^{\frac{1}{s}}}. \end{aligned}\]

Then \(\beta_1^s+\cdots+\beta_{n-1}^s=1\).

In addition, put \[x:=\prod_{i=1}^{n-1}\cdot\delta_{\beta_i}(x_i)\in\Omega,\ \ \ \ \beta:=\left(\sum\limits_{i=1}^{n-1}\alpha_i^s\right)^{\frac{1}{s}},\] where \(x\in\Omega\) by the inductive hypothesis.

Then, using that \(\delta_\beta\) is an automorphism and \(\alpha_i=\beta\beta_i\) for \(1\leq i\leq n-1\), \[\prod_{i=1}^{n-1}\cdot\delta_{\alpha_i}(x_i)=\delta_\beta\left(\prod_{i=1}^{n-1}\cdot\delta_{\beta_i}(x_i)\right)=\delta_\beta(x),\] and therefore \[\prod_{i=1}^{n}\cdot\delta_{\alpha_i}(x_i)=\delta_\beta(x)\cdot\delta_{\alpha_n}(x_n).\]

Now apply the \(s\)-convexity of \(\Omega\) (Definition 3) to the pair \((x, x_n)\) with coefficients \((\beta, \alpha_n)\) satisfying \(\beta^s+\alpha_n^s=1\); it follows that \(\delta_\beta (x)\cdot\delta_{\alpha_n}(x_n)\in\Omega\).

(2)\(\rightarrow\)(1) This is obvious by Definition 3. Thus Theorem 1 is proved. ◻

Proof of Theorem 2. Let \(x,y\in co_s(\Omega)\). Then \(x, y\) can be written as

\[\begin{aligned} x=\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)\quad \text{with} \quad \alpha_i \geq 0,\quad x_i \in \Omega,\quad \text{and} \quad \sum\limits_{i=1}^n \alpha_i^s=1, n\geq 2, \end{aligned}\] and \[\begin{aligned} y=\prod_{j=1}^m\cdot\delta_{\beta_j}(y_j)\quad \text{with}\quad \beta_j \geq 0,\quad y_j \in \Omega,\quad \text{and}\quad\sum\limits_{j=1}^m \beta_j^s=1, m\geq 2. \end{aligned}\]

Consider \(\alpha, \beta\geq 0\) with \(\alpha^s+\beta^s=1\). Then

\[\begin{aligned} \delta_\alpha(x)\cdot\delta_\beta(y)=\delta_\alpha\left(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)\right)\cdot \delta_\beta\left(\prod_{j=1}^m\cdot\delta_{\beta_j}(y_j)\right)=\prod_{k=1}^{m+n}\cdot\delta_{\gamma_k}(z_k), \end{aligned}\] where \[\begin{aligned} \gamma_1=\alpha\alpha_1,\ \ \cdots,\ \ \gamma_n=\alpha\alpha_n,\ \ \gamma_{n+1}=\beta\beta_1,\ \ \cdots,\ \ \gamma_{n+m}=\beta\beta_m, \end{aligned}\] and \[\begin{aligned} z_1=x_1,\ \ \cdots,\ \ z_n=x_n,\ \ z_{n+1}=y_1,\ \ \cdots,\ \ z_{n+m}=y_m. \end{aligned}\]

We have \[\begin{aligned} \sum\limits_{k=1}^{n+m}\gamma_k^s=\alpha^s(\alpha_1^s+\cdots+\alpha_n^s)+\beta^s(\beta_1^s+\cdots+\beta_m^s)= \alpha^s+\beta^s=1. \end{aligned}\]

This shows that \(\delta_\alpha(x)\cdot\delta_\beta(y)\in co_s(\Omega)\), and the statement is proved. ◻
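The coefficient bookkeeping in the proof above is easy to test numerically. A small sketch of our own (with arbitrary sample weights), assuming \(s=1/2\):

```python
s = 0.5

def normalize(raw):
    """Scale positive weights so that sum of alpha_i^s equals 1."""
    c = sum(r**s for r in raw) ** (1 / s)
    return [r / c for r in raw]

alphas = normalize([0.3, 1.2, 0.7])       # coefficients for x
betas = normalize([2.0, 0.5])             # coefficients for y
a, b = 0.25, (1 - 0.25**s) ** (1 / s)     # a^s + b^s = 1

# gamma_k = a*alpha_i followed by b*beta_j, as in the proof
gammas = [a * ai for ai in alphas] + [b * bj for bj in betas]
total = sum(g**s for g in gammas)
print(round(total, 12))                   # -> 1.0
```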

Proof of Proposition 2. Let \(x,y\in f^\varepsilon(\xi)\) and \(\alpha, \beta \geq 0\) with \(\alpha^s+\beta^s=1\); then \(f(x)\leq\xi\) and \(f(y)\leq\xi\), which imply that \(\alpha^sf(x)\leq\alpha^s\xi\) and \(\beta^sf(y)\leq\beta^s\xi\). Thus \(f(\delta_\alpha(x)\cdot\delta_\beta(y))\leq\alpha^sf(x)+\beta^sf(y)\leq(\alpha^s+\beta^s)\xi=\xi\), which shows that \(f^\varepsilon(\xi)\) is an \(s\)-convex subset of \(\Omega\). ◻

Proof of Theorem 3. (1)\(\rightarrow\)(2) The fact that \(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)\in\Omega\) for all \(n\geq2\) follows from Theorem 1. We proceed by induction over \(n\geq2\).

If \(n=2\), the inequality is obvious by Definition 4.

Suppose that the above inequality is valid for all \(2\leq k\leq n-1\). Let \(x_1,\cdots,x_n\in\Omega\) and \(\alpha_1,\cdots,\alpha_n\geq0\) with \(\alpha_1^s+\cdots+\alpha_n^s=1\). If \(\alpha_1=\cdots=\alpha_{n-1}=0\) then \(\alpha_n=1\) and the inequality is obvious.

Assume that \(\alpha_1^s+\cdots+\alpha_{n-1}^s>0\) and put \[\begin{aligned} \beta_1=\frac{\alpha_1}{(\alpha_1^s+\cdots+\alpha_{n-1}^s)^{\frac{1}{s}}}, \ \ \ \cdots ,\ \ \ \beta_{n-1}=\frac{\alpha_{n-1}}{(\alpha_1^s+\cdots+\alpha_{n-1}^s)^{\frac{1}{s}}}. \end{aligned}\]

Then \(\beta_1^s+\cdots+\beta_{n-1}^s=1\), and obviously \(\delta_{\beta_1}(x_1)\cdot\ldots\cdot\delta_{\beta_{n-1}}(x_{n-1})\in\Omega\). Using the inductive hypothesis, we can also state \[\begin{aligned} f\left(\delta_{\beta_1}(x_1)\cdot\ldots\cdot\delta_{\beta_{n-1}}(x_{n-1})\right)\leq \beta_1^sf(x_1)+\cdots+\beta_{n-1}^sf(x_{n-1}). \end{aligned}\]

Now put \[x:=\prod_{i=1}^{n-1}\cdot\delta_{\beta_i}(x_i)\in\Omega,\ \ \ \ \beta:=\left(\sum\limits_{i=1}^{n-1}\alpha_i^s\right)^{\frac{1}{s}},\] so that \(\delta_{\alpha_1}(x_1)\cdot\delta_{\alpha_2}(x_2)\cdot\ldots\cdot\delta_{\alpha_{n-1}}(x_{n-1})=\delta_\beta(x)\).

Then, using that \(\delta_\beta\) is an automorphism, we observe that \[\begin{aligned} \label{eq31} f\left(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)\right)=f\left(\delta_\beta(x)\cdot\delta_{\alpha_n}(x_n)\right)\leq \beta^sf(x)+\alpha_n^sf(x_n). \end{aligned} \tag{1}\] Note that this last inequality follows from Definition 4 applied to \(\beta\) and \(\alpha_n\), since \(\beta^s+\alpha_n^s=\alpha_1^s+\cdots+\alpha_{n-1}^s+\alpha_n^s=1\).

On the other hand, \[\begin{aligned} f(x)=f\left(\delta_{\beta_1}(x_1)\cdot\ldots\cdot \delta_{\beta_{n-1}}(x_{n-1})\right)\leq&\beta_1^sf(x_1)+\cdots+\beta_{n-1}^sf(x_{n-1})\\ =&\frac{\alpha_1^sf(x_1)+\cdots+\alpha_{n-1}^sf(x_{n-1})} {\alpha_1^s+\cdots+\alpha_{n-1}^s}. \end{aligned}\]

Using inequality (1) and \(\beta^s=\alpha_1^s+\cdots+\alpha_{n-1}^s\), we get \[\begin{aligned} f\left(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)\right)\leq&(\alpha_1^s+\cdots+\alpha_{n-1}^s) \frac{\alpha_1^sf(x_1)+\cdots+\alpha_{n-1}^sf(x_{n-1})}{\alpha_1^s+\cdots+\alpha_{n-1}^s}+\alpha _n^sf(x_n)\\ =&\sum\limits_{i=1}^n\alpha_i^sf(x_i). \end{aligned}\]

(2)\(\rightarrow\)(1) This is obvious by Definition 4. Therefore, the proof of Theorem 3 is completed. ◻
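As a sanity check of the Jensen-type inequality in Theorem 3, the following numeric sketch (our own, using the hypothetical \(s\)-convex function \(f(z,t)=|z|^2\) on \(\mathbb{H}^1\) and \(s=1/2\)) samples coefficients with \(\sum_i\alpha_i^s=1\) and tests statement (2):

```python
import random

def mul(p, q):
    """H^1 group law: (z,t)*(w,s) = (z+w, t+s+(1/2)Im<z, conj(w)>)."""
    z, t = p
    w, s_ = q
    return (z + w, t + s_ + 0.5 * (z * w.conjugate()).imag)

def dil(lam, p):
    return (lam * p[0], lam ** 2 * p[1])

f = lambda p: abs(p[0]) ** 2   # candidate s-convex function; ignores t
s, n = 0.5, 6

ok = True
for _ in range(2000):
    raw = [random.uniform(0.1, 2) for _ in range(n)]
    c = sum(r**s for r in raw) ** (1 / s)
    alphas = [r / c for r in raw]               # sum of alpha_i^s equals 1
    pts = [(complex(random.gauss(0, 2), random.gauss(0, 2)), random.gauss(0, 2))
           for _ in range(n)]
    prod = dil(alphas[0], pts[0])               # prod of delta_{alpha_i}(x_i)
    for a, p in zip(alphas[1:], pts[1:]):
        prod = mul(prod, dil(a, p))
    ok = ok and f(prod) <= sum(a**s * f(p) for a, p in zip(alphas, pts)) + 1e-9
print(ok)   # -> True
```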

Corollary 1. Let \(f:\Omega\subseteq\mathbb{H}^n\rightarrow\mathbb{R}\) be an \(s\)-convex map and \(P_i\geq0\) with \(P_n^{(s)}=\sum\limits_{i=1}^nP_i^s>0\). Then for all \(x_i\in\Omega\), one has the inequality \[\begin{aligned} f\left(\delta_{[P_n^{(s)}]^{-\frac{1}{s}}}\left(\prod_{i=1}^n\cdot\delta_{P_i}(x_i)\right)\right)\leq\frac{1}{P_n^{(s)}} \sum\limits_{i=1}^nP_i^sf(x_i). \end{aligned}\]

The proof is immediate from Theorem 3: choosing \(\alpha_i=\frac{P_i}{(P_n^{(s)})^{\frac{1}{s}}}\) gives \(\sum\limits_{i=1}^n\alpha_i^s=1\), and hence \[f\left(\delta_{[P_n^{(s)}]^{-\frac{1}{s}}}\left(\prod_{i=1}^n\cdot\delta_{P_i}(x_i)\right)\right)=f\left(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)\right)\leq\sum\limits_{i=1}^n\alpha_i^sf(x_i)=\frac{1}{P_n^{(s)}}\sum\limits_{i=1}^nP_i^sf(x_i).\]

Corollary 2. With the above assumptions on \(f\) and \(x_i\), one has the inequality \[\begin{aligned} f\left(\delta_{n^{-\frac{1}{s}}}(x_1\cdot\ldots\cdot x_n)\right)\leq\frac{f(x_1)+\cdots+f(x_n)}{n}. \end{aligned}\]

Similarly, the proof follows by taking \(\alpha_i=n^{-\frac{1}{s}}\) in Theorem 3, which yields \[f\left(\delta_{n^{-\frac{1}{s}}}(x_1)\cdot\ldots\cdot\delta_{n^{-\frac{1}{s}}}(x_n)\right)\leq\frac{1}{n}\sum\limits_{i=1}^nf(x_i).\]

Corollary 3. Let \(f,\Omega,x_i\) be as above and \(q_i\geq 0\) with \(Q_n=\sum\limits_{i=1}^nq_i>0\). Then one has the inequality \[\begin{aligned} f\left(\delta_{Q_n^{-\frac{1}{s}}}\left(\prod_{i=1}^n\cdot\delta_{q_i^{\frac{1}{s}}}(x_i)\right)\right) \leq\frac{1}{Q_n}\sum\limits_{i=1}^nq_if(x_i). \end{aligned}\]

The proof is obvious from Theorem 3, choosing \(\alpha_i=\left(\frac{q_i}{Q_n}\right)^{\frac{1}{s}}\), so that \(\sum\limits_{i=1}^n\alpha_i^s=1.\) Then \[\begin{aligned} f\left(\delta_{Q_n^{-\frac{1}{s}}}\left(\prod_{i=1}^n\cdot\delta_{q_i^{\frac{1}{s}}}(x_i)\right)\right) =f\left(\prod_{i=1}^n\cdot\delta_{\alpha_i}(x_i)\right)\leq\sum\limits_{i=1}^n\alpha_i^sf(x_i)=\frac{1}{Q_n}\sum\limits_{i=1}^nq_if(x_i). \end{aligned}\]

Proof of Theorem 4. Fix \(i\in\{1,\cdots,n\}\). By Theorem 3, we can state \[\begin{aligned} f\left(\prod_{j=1}^n\cdot\delta_{\alpha_j}(x_{ij})\right)\leq\sum\limits_{j=1}^n\alpha_j^sf(x_{ij}). \end{aligned}\]

Now, multiplying by \(\alpha_i^s\geq0\), we have \[\begin{aligned} \alpha_i^sf\left(\prod_{j=1}^n\cdot\delta_{\alpha_j}(x_{ij})\right)\leq\sum\limits_{j=1}^n\alpha_i^s\alpha_j^sf(x_{ij}), \end{aligned}\] and summing over \(i\), together with another application of Theorem 3 (to the points \(\prod_{j=1}^n\cdot\delta_{\alpha_j}(x_{ij})\) with coefficients \(\alpha_i\)), gives \[\begin{aligned} f\left(\prod_{i,j=1}^n\cdot(\delta_{\alpha_i}\cdot\delta_{\alpha_j}(x_{ij}))\right)\leq \sum\limits_{i=1}^n\alpha_i^sf\left(\prod_{j=1}^n \cdot \delta_{\alpha_j}(x_{ij})\right)\leq \sum\limits_{i,j=1}^n\alpha_i^s\alpha_j^sf(x_{ij}). \end{aligned}\]

Thus \(f\left(\prod_{i,j=1}^n\cdot(\delta_{\alpha_i}\cdot\delta_{\alpha_j}(x_{ij}))\right)\leq A\leq \sum\limits_{i,j=1}^n\alpha_i^s\alpha_j^sf(x_{ij})\). The second part is proved similarly. ◻

Corollary 4. With the above assumptions and supposing that \(x_{ij}\) is symmetric, that is, \(x_{ij}=x_{ji}\) for all \(i,j\in\{1,\cdots,n\}\), one has the inequality \[f\left(\prod_{i,j=1}^n\cdot(\delta_{\alpha_i}\cdot\delta_{\alpha_j}(x_{ij}))\right)\leq \sum\limits_{i=1}^n\alpha_i^sf\left(\prod_{j=1}^n\cdot\delta_{\alpha_j}(x_{ij})\right)\leq \sum\limits_{i,j=1}^n\alpha_i^s\alpha_j^sf(x_{ij}).\]

References

  1. Chen, X. (2002). Some properties of semi-E-convex functions. Journal of Mathematical Analysis and Applications, 275(1), 251-262.

  2. Youness, E. A. (1999). E-convex sets, E-convex functions, and E-convex programming. Journal of Optimization Theory and Applications, 102(2), 439-450.

  3. Yang, X. M. (2001). On E-convex sets, E-convex functions, and E-convex programming. Journal of Optimization Theory and Applications, 109(3), 699.

  4. Orlicz, W. (1961). A note on modular spaces I. Bulletin of the Polish Academy of Sciences, Series of Mathematics, Astronomy, and Physics, 9, 157-162.

  5. Dragomir, S. S., & Fitzpatrick, S. (1997). s-Orlicz convex functions in linear spaces and Jensen’s discrete inequality. Journal of Mathematical Analysis and Applications, 210(2), 419-439.

  6. Capogna, L., Pauls, S. D., & Danielli, D. (2007). An Introduction to the Heisenberg Group and the Sub-Riemannian Isoperimetric Problem. Basel: Birkhäuser Basel.