A game for the parabolic two membranes problem

Alfredo Miranda¹
¹Departamento de Matemática, FCEyN, Universidad de Buenos Aires, Pabellon I, Ciudad Universitaria (1428), Buenos Aires, Argentina
Copyright © Alfredo Miranda. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper we find viscosity solutions to a system of two parabolic obstacle-type equations that involve two normalized \(p-\)Laplacian operators. We analyze a two-player zero-sum game played on two boards (with different rules on each board), in which on each board one of the two players has the choice of playing on that board or switching to the other board and then playing. We prove that the game has a value and show that the value functions converge uniformly (as a parameter that controls the size of the steps made in the game goes to zero) to a viscosity solution of a system in which one component acts as an obstacle for the other component and vice versa. In this way, we find solutions to the parabolic two-membranes problem.

Keywords: viscosity solutions, normalized p-Laplacian, two-membranes problem

1. Introduction

There is a deep connection between partial differential equations and probability. For linear operators, the Laplacian can be connected with Brownian motion or with the limit of random walks as the step size goes to zero (see, for example, [15]). Concerning nonlinear operators, there is a game introduced in [6] called Tug-of-War that is connected with the infinity Laplacian. Later, in [7] and [8], the authors introduce a modification of the game (called Tug-of-War with noise) that is related to the normalized \(p-\)Laplacian. The previously mentioned results were extended to cover very different equations such as Pucci operators, the Monge-Ampère equation, the obstacle problem, etc. For further details, we refer to the recent books [9] and [10]. Concerning parabolic problems with nonlinear operators, we refer to [11-13], in which the authors studied the parabolic infinity Laplacian. In [14], an alternative problem called the parabolic biased infinity Laplacian equation is discussed. In relation to game theory, we refer to [15] where the authors described a Tug-of-War game with spatial and time dependence. In [16], and also in the book [9], the authors find a mean value formula for parabolic equations related to Tug-of-War with noise games. There is an increasing interest in games for PDE systems. We quote [17-19] for examples of games played on different boards associated with solutions of coupled PDE systems using both linear and nonlinear operators. In [18], the authors introduce a game played on two boards where they impose a time-dependent condition on one board to obtain a solution to a parabolic/elliptic problem.

In this paper, we will focus on the two membranes problem, a classical subject that has been extensively studied in the literature. The stationary version of this problem models the behavior of two elastic membranes clamped at the boundary of a prescribed domain. They are assumed to be ordered, with one membrane above the other, and they are subject to different external forces. Specifically, the membrane on top is pushed down, while the one below is pushed up. The main assumption here is that the two membranes do not penetrate each other (they are assumed to be ordered in the whole domain). This situation can be modeled by two obstacle problems; the lower membrane acts as an obstacle from below for the free elastic equation that describes the location of the upper membrane, while, conversely, the upper membrane is an obstacle from above for the equation for the lower membrane. When the equations obeyed by the two membranes have a variational structure, this problem can be tackled using calculus of variations (one aims to minimize the sum of the two energies subject to the constraint that the functions that describe the position of the membranes are always ordered inside the domain, one greater than or equal to the other). Once existence of a solution (in an appropriate sense) is obtained, many interesting questions arise, such as uniqueness, regularity of the involved functions, a description of the contact set, the regularity of the contact set, etc.; see [20, 21], the dissertation [22] and references therein. However, when the involved equations are not variational, the analysis relies on monotonicity arguments (using the maximum principle). Recently, using game theory, the elliptic two membranes problem was studied in [23] without assuming any variational structure. Our main interest here is to look at the parabolic version of this problem. The parabolic two membranes problem can be interpreted as the evolution in time of two membranes with prescribed initial positions and boundary conditions. These solutions model the behavior of the membranes (one over the other), starting from an initial position. That is, this problem represents the evolution problem for the stationary two-membranes problem.

To approximate solutions to a parabolic two membranes problem, we introduce a two-player zero-sum game played in two parabolic cylinders; we then prove that this game has a value, and that these values converge to solutions of the parabolic two membranes system as a parameter that controls the size of the steps of the game goes to zero. Let us briefly describe the game. In each cylinder, we play a Tug-of-War game with noise, with varying parameters and running payoffs on each board. That is, with probability \(\alpha\) the players play Tug-of-War (with probability \(\frac{1}{2}\) Player I chooses the next position of the token, and with probability \(\frac{1}{2}\) Player II chooses the next position, both in the \(\varepsilon\)-ball), and with probability \((1-\alpha)\) the token moves at random in the \(\varepsilon\)-ball. There is also a particular rule for changing boards. This is a brief description of the game: At a given point \((x,t)\) in the first board, a fair coin is tossed; if the result is heads, the players play Tug-of-War with noise in space in this board, but changing \(t\) to \(t-\varepsilon^2\). On the other hand, if the result is tails, Player I decides between playing in the first board or jumping to the second board and playing there, always changing \(t\) to \(t-\varepsilon^2\). Conversely, in the second board the rules are reversed: with probability \(\frac{1}{2}\) the token remains in the second board and the players play Tug-of-War with noise (with a different set of parameters) changing \(t\) to \(t-\varepsilon^2\), and with probability \(\frac{1}{2}\) Player II decides to stay in the second board and play or to jump to the first board and play, as before changing \(t\) to \(t-\varepsilon^2\). Regarding the games, the rules are different on each board (Tug-of-War with noise with different parameters and running payoffs). The game continues until the token leaves the domain or the time becomes less than zero, and Player I wins the total payoff, while Player II loses the same amount. This quantity is the sum of the final payoff and the running payoff, which depends on fixed Lipschitz functions. Notice that the final payoff is different depending on the last position of the token (the position outside the domain). Specifically, the payoff varies if the token leaves the domain from the sides or from the bottom of the parabolic cylinder. This implies that the boundary condition must be compatible with the initial condition in both boards. We will prove that the game has a value, given by two functions, \(u^\varepsilon(x,t)\) and \(v^\varepsilon(x,t)\), that encode the expected outcome when the game starts at \((x,t)\) in the first board and in the second board respectively. These value functions verify the following Dynamic Programming Principle (DPP): \[\label{DPP} \left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t)=\frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2) +\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} & (x,t) \in \Omega\times (0,T), \\[8pt] \displaystyle v^{\varepsilon}(x,t)=\frac{1}{2} J_2(v^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\min\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} & (x,t) \in \Omega\times (0,T), \end{array} \right.
\tag{1}\] with the boundary conditions \[\label{DPPBC} \left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t) = f(x,t) & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times [0,T), \\[8pt] \displaystyle v^{\varepsilon}(x,t) = g(x,t) & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times [0,T), \end{array} \right. \tag{2}\] and initial conditions \[\label{DPPIC} \left\lbrace \begin{array}{ll} \displaystyle u^\varepsilon(x,0)=u_0(x) & x\in \Omega,\\[8pt] \displaystyle v^\varepsilon(x,0)=v_0(x) & x\in \Omega. \end{array} \right. \tag{3}\] The operators associated to the two Tug-of-War with noise games that appear in the DPP are defined as follows: \[\label{J1} \begin{array}{ll} \displaystyle J_1(w)(x,t) =\alpha_1\left[\frac{1}{2} \sup_{y \in B_{\varepsilon}(x)}w(y,t) + \frac{1}{2} \inf_{y \in B_{\varepsilon}(x)}w(y,t)\right] +(1-\alpha_1) \fint_{B_{\varepsilon}(x)}w(y,t)\,dy+\varepsilon^2h_1(x,t), \end{array} \tag{4}\] and \[\label{J2} \begin{array}{ll} \displaystyle J_2(w)(x,t)=\alpha_2\left[\frac{1}{2} \sup_{y \in B_{\varepsilon}(x)}w(y,t) + \frac{1}{2} \inf_{y \in B_{\varepsilon}(x)}w(y,t)\right] +(1-\alpha_2) \fint_{B_{\varepsilon}(x)}w(y,t)\,dy+\varepsilon^2h_2(x,t). \end{array} \tag{5}\]

Here the functions \(h_1,h_2:\Omega\times[0,T)\rightarrow\mathbb{R}\) are bounded Lipschitz functions, \(f,g:(\mathbb{R}^{N} \backslash \Omega)\times [0,T)\rightarrow \mathbb{R}\) are bounded Lipschitz functions such that \(f\ge g\), and \(u_0 , v_0 : \Omega\rightarrow \mathbb{R}\) are bounded Lipschitz functions with \(u_0\ge v_0\). Notice that \(J_1\) and \(J_2\) are related to the games played on board one and board two respectively. We also assume a compatibility condition on the data: Let us consider \(w_1:[((\mathbb{R}^N\backslash\Omega)\times [0,T))\cup(\Omega\times\{0\})]\rightarrow\mathbb{R}\), \[\label{funcionw1} w_1(x,t) =\left\lbrace \begin{array}{ll} f(x,t) & \ \ \quad x\notin\Omega, t\geq 0, \\[8pt] \displaystyle u_0(x) & \ \ \quad x\in\Omega, t= 0, \end{array} \right. \tag{6}\] and \(w_2:[((\mathbb{R}^N\backslash\Omega)\times [0,T))\cup(\Omega\times\{0\})]\rightarrow\mathbb{R}\), \[\label{funcionw2} w_2(x,t) =\left\lbrace \begin{array}{ll} g(x,t) & \ \ \quad x\notin\Omega, t\geq 0, \\[8pt] \displaystyle v_0(x) & \ \ \quad x\in\Omega, t= 0. \end{array} \right. \tag{7}\] It is clear that \(w_1(x,t)\ge w_2(x,t)\). We also need to impose the following Lipschitz condition, \[\label{Lw} |w_i(x,t)-w_i(y,s)|\leq L(|x-y|+|t-s|), \tag{8}\] for \(i=1,2\). This condition implies that the boundary functions are compatible with the initial conditions. That is, for every \((x_k)_{k\ge 1}\subset\Omega\) such that \(x_k\rightarrow y\) with \(y\in\partial\Omega\) it holds \[\lim_{k\rightarrow\infty}u_0(x_k)=f(y,0) \quad \mbox{and} \quad \lim_{k\rightarrow\infty}v_0(x_k)=g(y,0).\]
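Although it plays no role in the proofs, the averaging operators (4) and (5) are straightforward to implement numerically. The following Python sketch approximates \(J_i(w)(x,t)\) by uniform sampling of the ball \(B_\varepsilon(x)\); the function names, the sampling size and all other numerical choices are our own illustrative assumptions.

```python
import numpy as np

def J(w, x, t, eps, alpha, h, n_samples=4000, rng=None):
    """Monte Carlo sketch of the averaging operators J_1, J_2 from (4)-(5).

    w     : callable w(y, t) giving the value function at point y, time t
    alpha : Tug-of-War weight (alpha_1 or alpha_2)
    h     : running payoff (h_1 or h_2)
    The sup/inf over B_eps(x) and the ball average are approximated by
    sampling; all numerical choices here are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    N = len(x)
    # uniform samples in the ball B_eps(x): direction on the sphere,
    # radius distributed as eps * U^(1/N)
    dirs = rng.normal(size=(n_samples, N))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = eps * rng.random(n_samples) ** (1.0 / N)
    ys = x + radii[:, None] * dirs
    vals = np.array([w(y, t) for y in ys])
    tug = 0.5 * vals.max() + 0.5 * vals.min()   # Tug-of-War part
    avg = vals.mean()                           # random-walk part
    return alpha * tug + (1 - alpha) * avg + eps**2 * h(x, t)
```

With such a routine, the right-hand side of the first equation in (1) at \((x,t)\) would read \(\frac12 J_1(u^\varepsilon)+\frac12\max\{J_1(u^\varepsilon),J_2(v^\varepsilon)\}\), evaluated at time \(t-\varepsilon^2\).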

Finally, let us describe the assumptions on the domain.

Uniform exterior sphere condition

\(\Omega\) is an open bounded domain with smooth boundary, in the sense that there exist constants \(0<\delta<R\) such that for every \(y\in\partial\Omega\), there exists \(z\in\mathbb{R}^N\) such that \(\Omega\subset B_R(z)\backslash B_\delta(z)\) and \(y\in\partial B_\delta(z)\).

Remark 1. From the DPP and the conditions \(f\ge g\) and \(u_0\ge v_0\) we get \[u^{\varepsilon}\ge v^{\varepsilon},\] in \(\mathbb{R}^N \times [0,T)\). In particular, given \((x,t)\in\Omega\times(0,T)\) we have that \[\begin{aligned} \displaystyle u^\varepsilon(x,t)=&\frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\}\notag\\ \ge &\frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2} J_2(v^{\varepsilon})(x,t-\varepsilon^2)\notag\\ \ge&\frac{1}{2} J_2(v^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\min\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\}=v^\varepsilon(x,t). \end{aligned}\]

The games \(J_1\) and \(J_2\) are associated with an operator called the normalized \(p-\)laplacian operator, defined as follows (see [24]).

Definition 1. Given \(\varphi\) a \({C}^{2,1}\) function such that \(\nabla\varphi(x,t)\neq 0\), we define, for \(p\ge 2\), the normalized \(p-\)laplacian operator as \[\Delta^1_p\varphi(x,t):=\frac{\alpha}{2}\Delta^1_{\infty}\varphi(x,t)+\frac{1-\alpha}{2(N+2)}\Delta\varphi(x,t), \tag{9}\] where \(\alpha\) and \(p\) are related by \[\frac{\alpha}{1-\alpha}=\frac{p-2}{N+2}.\]

Here, the classical Laplacian and the normalized infinity Laplacian \[\Delta^1_{\infty}\varphi:=\Big\langle D^2 \varphi \frac{\nabla \varphi}{|\nabla \varphi|} , \frac{\nabla \varphi}{|\nabla \varphi|} \Big\rangle =|\nabla \varphi|^{-2}\sum_{1\leq i,j\leq N}\varphi_{x_i}\varphi_{x_i x_j}\varphi_{x_j},\] appear.

Let us recall that the classical \(p-\)Laplacian is given by \[\Delta_p u=\mbox{div}(|\nabla u |^{p-2}\nabla u).\]

For \(2\leq p<\infty\), expanding the divergence we can write this operator as a combination of the Laplacian and the normalized infinity Laplacian as follows: \[\label{PLap} \Delta_p u =|\nabla u |^{p-2}\left((p-2)\Delta^1_{\infty}u+\Delta u\right). \tag{10}\]
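The identity (10) can be checked symbolically. The following SymPy sketch (with an arbitrary smooth test function of our own choosing, in dimension \(N=2\)) expands both sides; it is an illustration, not part of the arguments below.

```python
import sympy as sp

x, y = sp.symbols('x y')
p = sp.Symbol('p', positive=True)
u = sp.exp(x) * sp.sin(y) + x**2 * y        # an arbitrary smooth test function
grad = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
g2 = grad.dot(grad)                          # |grad u|^2
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)    # classical Laplacian
H = sp.hessian(u, (x, y))
inf_lap = (grad.T * H * grad)[0] / g2        # normalized infinity Laplacian

flux = g2**((p - 2) / 2) * grad              # |grad u|^{p-2} grad u
lhs = sp.diff(flux[0], x) + sp.diff(flux[1], y)      # div of the flux
rhs = g2**((p - 2) / 2) * ((p - 2) * inf_lap + lap)  # right side of (10)
print(sp.simplify(lhs - rhs))                # expected output: 0
```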

In [24] the authors proved that \(u:\Omega\rightarrow\mathbb{R}\) verifies the asymptotic mean value formula \[\label{DPPPpLap} u(x)=\alpha\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}u(y)+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}u(y)\right]+(1-\alpha) \fint_{B_{\varepsilon}(x)}u(y)\,dy + o(\varepsilon^2), \tag{11}\] as \(\varepsilon\to 0\), if and only if \(u\) is a solution to \[\Delta_p u=0, \tag{12}\] in the viscosity sense. Here \(\alpha\) and \(p\) are related by \[\label{alpha-p} \frac{\alpha}{1-\alpha}=\frac{p-2}{N+2}. \tag{13}\]

Regarding this definition, and the mean value formulas \(J_1\) and \(J_2\) defined before, suppose now that \(u:\Omega\times(0,T)\rightarrow\mathbb{R}\) satisfies \[\label{DPPPLap} \begin{array}{ll} \displaystyle u(x,t)=\alpha\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}u(y,t-\varepsilon^2)+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}u(y,t-\varepsilon^2)\right] +(1-\alpha) \fint_{B_{\varepsilon}(x)}u(y,t-\varepsilon^2)\,dy + o(\varepsilon^2), \end{array} \tag{14}\] for \(\varepsilon>0\) small. If we assume that \(u\) is smooth, using a simple Taylor expansion we have \[\label{asymp-1} \fint_{B_{\varepsilon}(x)}u(y,t-\varepsilon^2)\,dy-u(x,t-\varepsilon^2)=\frac{\varepsilon^2}{2(N+2)}\Delta u(x,t-\varepsilon^2)+o(\varepsilon^2), \tag{15}\] and if \(\nabla u (x,t-\varepsilon^2)\neq 0\), using again a simple Taylor expansion we obtain \[\begin{aligned} \label{asymp-2} \displaystyle &\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}u(y,t-\varepsilon^2)+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}u(y,t-\varepsilon^2)\right]-u(x,t-\varepsilon^2)\notag\\ \displaystyle &\sim \frac{1}{2} u \Big(x+\varepsilon\frac{\nabla u(x,t-\varepsilon^2)}{|\nabla u(x,t-\varepsilon^2)|},t-\varepsilon^2\Big)+\frac{1}{2} u\Big(x-\varepsilon\frac{\nabla u(x,t-\varepsilon^2)}{|\nabla u(x,t-\varepsilon^2)|},t-\varepsilon^2\Big)-u(x,t-\varepsilon^2)\notag\\ \displaystyle &=\frac{\varepsilon^2}{2}\Delta^1_{\infty}u(x,t-\varepsilon^2)+o(\varepsilon^2). \end{aligned} \tag{16}\]

Then, if we come back to (14), subtract \(u(x,t-\varepsilon^2)\) from both sides, divide by \(\varepsilon^2\), and let \(\varepsilon\to 0\), we get \[\frac{\partial u}{\partial t}(x,t)=\frac{\alpha}{2}\Delta^1_{\infty}u(x,t)+\frac{(1-\alpha)}{2(N+2)}\Delta u(x,t). \tag{17}\]

That is \[\frac{\partial u}{\partial t}(x,t)=\Delta^1_pu(x,t). \tag{18}\]

This formal computation explains the relation between the formulas in the DPP and the corresponding parabolic equations. We will use viscosity theory to perform this computation in a more rigorous way.
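As a sanity check of the expansion (15), note that for a quadratic function the spatial part of the formula is exact. The sketch below (dimension \(N=2\), with a quadratic chosen by us) computes the ball average in polar coordinates and compares it with \(\frac{\varepsilon^2}{2(N+2)}\Delta u\); all concrete choices are illustrative assumptions.

```python
import sympy as sp

x, y = sp.symbols('x y')
r, th, eps = sp.symbols('r theta epsilon', positive=True)
u = 3*x**2 - x*y + 2*y**2 + 5*x - 1          # quadratic test function, N = 2
x0, y0 = sp.Rational(1, 3), sp.Rational(-1, 2)

# average of u over the ball B_eps((x0, y0)), computed in polar coordinates
uc = u.subs({x: x0 + r*sp.cos(th), y: y0 + r*sp.sin(th)})
avg = sp.integrate(uc * r, (r, 0, eps), (th, 0, 2*sp.pi)) / (sp.pi * eps**2)

lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)    # here a constant
lhs = avg - u.subs({x: x0, y: y0})
rhs = eps**2 * lap / (2 * (2 + 2))           # eps^2 Delta u / (2(N+2)), N = 2
print(sp.simplify(lhs - rhs))                # expected output: 0
```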

The main result of this paper is that the value of the game (the solution to the DPP) converges uniformly as \(\varepsilon\rightarrow 0\) to a pair of continuous functions \((u,v)\) that is a viscosity solution to the following parabolic system with two different normalized \(p-\)laplacian operators.

Theorem 1. There exists a subsequence of solutions to the DPP (1), denoted as \((u^{\varepsilon_j},v^{\varepsilon_j})\) that converges as \(\varepsilon_j\to 0\) to a pair of continuous functions \((u,v)\). This limit pair is a viscosity solution to the following system \[\label{EDO1} \left\lbrace \begin{array}{ll} \displaystyle u (x,t) \geq v(x,t) & \ (x,t)\in\overline{\Omega}\times [0,T), \\[8pt] \displaystyle \frac{\partial u}{\partial t}(x,t)-\Delta_{p}^{1}u(x,t)\geq h_1(x,t) & (x,t)\in\Omega\times (0,T), \\[8pt] \displaystyle \frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)\leq h_2(x,t) & (x,t)\in\Omega\times (0,T),\\[8pt] \displaystyle \frac{\partial u}{\partial t}(x,t)-\Delta_p^1 u(x,t)=h_1(x,t) & (x,t)\in(\Omega\times (0,T))\cap\{u>v\},\\[8pt] \displaystyle \frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)=h_2(x,t) & (x,t)\in(\Omega\times (0,T))\cap\{u>v\}, \end{array} \right. \tag{19}\] with the following extra condition, \[\label{ExtCond} \displaystyle \left(\frac{\partial u}{\partial t}(x,t)-\Delta_{p}^{1}u(x,t)\right)+\left(\frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)\right)= h_1(x,t)+h_2(x,t) \quad (x,t)\in \Omega\times (0,T), \tag{20}\] boundary conditions \[\label{BC} \left\lbrace \begin{array}{ll} \displaystyle u(x,t) = f(x,t) & (x,t) \in \partial \Omega\times [0,T), \\[8pt] \displaystyle v(x,t) = g(x,t) & (x,t) \in \partial \Omega\times [0,T), \end{array} \right. \tag{21}\] and initial conditions \[\label{IC} \left\lbrace \begin{array}{ll} \displaystyle u(x,0)=u_0(x) & x\in\Omega,\\[8pt] \displaystyle v(x,0)=v_0(x) & x\in\Omega. \end{array} \right. \tag{22}\] Here \(p\) and \(q\) are given by \[\frac{\alpha_1}{1-\alpha_1}=\frac{p-2}{N+2} \quad \mbox{and} \quad \frac{\alpha_2}{1-\alpha_2}=\frac{q-2}{N+2}. \tag{23}\]

Remark 2. We can rewrite the system (19) as follows \[\label{EDO2} \left\lbrace \begin{array}{ll} \displaystyle \min\Big\{ \frac{\partial u}{\partial t}(x,t)-\Delta_p^1 u(x,t)-h_1(x,t),(u-v)(x,t)\Big\}=0 \quad & (x,t)\in\Omega\times (0,T), \\[8pt] \displaystyle \max\Big\{ \frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)-h_2(x,t),(v-u)(x,t)\Big\}=0 \quad & (x,t)\in\Omega\times (0,T), \end{array} \right. \tag{24}\] with the extra condition \[\displaystyle \left(\frac{\partial u}{\partial t}(x,t)-\Delta_{p}^{1}u(x,t)\right)+\left(\frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)\right)= h_1(x,t)+h_2(x,t) \quad (x,t)\in \Omega\times (0,T), \tag{25}\] the boundary conditions (21) and the initial conditions (22).

The main novelty of the paper lies in the introduction of a game formulation for a coupled system consisting of two parabolic equations, corresponding to a two membranes problem. In addition, we propose an iterative construction of the value function of the game, which ensures that this function is measurable. The paper is thorough in its proofs, aiming to include as many technical details as possible.

Organization of the paper

In §2 we introduce some preliminary results, including the definition of a viscosity solution to our parabolic system. In §3 we present the rules of the two-player zero-sum game whose value is the solution to the DPP. In §4 we prove that the game has a value, and that this value is the unique solution to the DPP. We start with a subsolution to the DPP, and using an iteration scheme we obtain a nondecreasing sequence of subsolutions. This sequence is uniformly bounded and hence it converges. The limit of this sequence is the solution to the DPP. Then we use some specific strategies to obtain sub- and supermartingales, and using the Optional Stopping Theorem we get that the solution to the DPP is the value of the game. The proof of Theorem 1 is divided into §5 and §6. In the first one we prove uniform convergence along a subsequence using an Arzelà-Ascoli type lemma, and in the second we show that the uniform limit is a viscosity solution to the PDE system (19) with the extra condition (20), using classical viscosity techniques. Finally, in §7 we collect some final remarks on possible extensions of our results.

2. Preliminaries

In this section we introduce the precise definition of what we understand as a viscosity solution for the system (19). Next, we include the precise statement of the Optional Stopping Theorem that we will need when dealing with the probabilistic part of our arguments.

2.1. Viscosity solutions

We refer to [25] for general results on viscosity solutions.

For the parabolic equations that appear in (19) we introduce the following definition of being a viscosity solution. Fix a function \[P:\Omega\times(0,T)\times\mathbb{R}\times\mathbb{R}^N\times\mathbb{S}^N\to\mathbb{R},\] where \(\mathbb{S}^N\) denotes the set of symmetric \(N\times N\) matrices. We consider the PDE \[\label{eqvissol} P\Big(x,t,\frac{\partial u}{\partial t}(x,t), \nabla u (x,t), D^2u (x,t) \Big)=0, \qquad x \in \Omega, \ t\in(0,T). \tag{26}\]

In our system we use the operator related to the normalized \(p\)-laplacian, \[\label{eqvissol.888} P\left(x,t,s,\eta, X \right) = s - \left[\displaystyle \frac{\alpha}{2} \Big\langle X \frac{\eta}{|\eta|} ,\frac{\eta}{|\eta|} \Big\rangle +\frac{1-\alpha}{2(N+2)}\mathrm{trace}(X)\right]-h(x,t), \tag{27}\] with \(\alpha\) related to \(p\) as follows \[\frac{\alpha}{1-\alpha}=\frac{p-2}{N+2}.\]

The idea behind viscosity solutions is to use the maximum principle in order to “pass derivatives to smooth test functions”. This idea allows us to consider operators in non-divergence form. We will assume that \(P\) satisfies two monotonicity properties, \[X\leq Y \text{ in } \mathbb{S}^N \implies P(x,t,s,\eta,X)\geq P(x,t,s,\eta,Y),\] for all \((x,t,s,\eta)\in \Omega\times(0,T)\times\mathbb{R}\times\mathbb{R}^N\); and \[s_1\leq s_2 \text{ in } \mathbb{R} \implies P(x,t,s_1,\eta,X)\leq P(x,t,s_2,\eta,X),\] for all \((x,t,\eta,X)\in \Omega\times(0,T)\times\mathbb{R}^N \times \mathbb{S}^N\). Here we have equations that involve the \(\infty\)-laplacian that are not well defined when the gradient vanishes. In order to handle this issue, we need to consider the lower semicontinuous envelope, \(P_*\), and upper semicontinuous envelope, \(P^*\), of \(P\), that are given by \[\begin{array}{ll} P^*(x,t,s,\eta,X)& \displaystyle =\limsup_{(y,l,n,\rho,Y)\to (x,t,s,\eta,X)}P(y,l,n,\rho,Y),\\ P_*(x,t,s,\eta,X)& \displaystyle =\liminf_{(y,l,n,\rho,Y)\to (x,t,s,\eta,X)}P(y,l,n,\rho,Y). \end{array}\]

These functions coincide with \(P\) at every point of continuity of \(P\) and are upper and lower semicontinuous, respectively. It is clear that the function \(P(x,t,s,\eta,X)\) defined in (27) is continuous for \(\eta\neq 0\). With these concepts at hand we are ready to state the definition of a viscosity solution to (26).

Definition 2.

(a) An upper semi-continuous function \(u\) is a viscosity subsolution of (26) if for every \(\phi \in C^{(2,1)}(\Omega\times(0,T))\) such that \(\phi\) touches \(u\) at \((x,t) \in \Omega\times(0,T)\) strictly from above (that is, \(u-\phi\) has a strict maximum at \((x,t)\) with \(u(x,t) = \phi(x,t)\)), we have \[P_*\left(x,t,\frac{\partial\phi}{\partial t}(x,t), \nabla\phi(x,t),D^2\phi(x,t) \right)\leq 0.\]

If \(u\) is a subsolution we write \[P\left(x,t,\frac{\partial u}{\partial t}(x,t), \nabla u(x,t),D^2 u(x,t) \right)\leq 0.\]

(b) A lower semi-continuous function \(u\) is a viscosity supersolution of (26) if for every \(\phi \in C^{(2,1)}(\Omega\times(0,T))\) such that \(\phi\) touches \(u\) at \((x,t) \in \Omega\times(0,T)\) strictly from below (that is, \(u-\phi\) has a strict minimum at \((x,t)\) with \(u(x,t) = \phi(x,t)\)), we have \[P^* \left(x,t,\frac{\partial \phi}{\partial t}(x,t),\nabla \phi(x,t),D^2\phi(x,t)\right)\geq 0.\]

When \(u\) is a supersolution we write \[P\left(x,t,\frac{\partial u}{\partial t}(x,t), \nabla u(x,t),D^2 u(x,t) \right)\ge 0.\]

(c) Finally, \(u\) is a viscosity solution of (26) if it is both a sub- and supersolution, and we note \[P\left(x,t,\frac{\partial u}{\partial t}(x,t), \nabla u(x,t),D^2 u(x,t) \right)= 0.\]

As we mentioned before, to deal with our system (19), given a pair of continuous functions \((u,v)\) with \(u\ge v\) that verifies the boundary conditions (21) and initial conditions (22), we just consider (27) with parameters \(0<\alpha_1<1\) and \(0<\alpha_2<1\), \[\begin{aligned} \label{P,Q} \displaystyle P_1(x,t,s,\eta,X) = &s -\left[\displaystyle \frac{\alpha_1}{2} \Big\langle X \frac{\eta}{|\eta|} ,\frac{\eta}{|\eta|} \Big\rangle +\frac{1-\alpha_1}{2(N+2)}\mathrm{trace}(X)\right]-h_1(x,t) ,\notag\\ \displaystyle P_2(x,t,s,\eta,X) =&s -\left[\displaystyle \frac{\alpha_2}{2} \Big\langle X \frac{\eta}{|\eta|} ,\frac{\eta}{|\eta|} \Big\rangle +\frac{1-\alpha_2}{2(N+2)}\mathrm{trace}(X)\right]-h_2(x,t), \end{aligned} \tag{28}\] and use Definition 2.
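For concreteness, the operators \(P_1,P_2\) in (28) are straightforward to evaluate numerically away from \(\eta=0\); the following sketch (names and sample inputs are our own) does so.

```python
import numpy as np

def P(s, eta, X, alpha, h_val):
    """Evaluate the operator in (28) at one point, for eta != 0.

    s     : slot for the time derivative
    eta   : slot for the gradient (a nonzero vector)
    X     : symmetric N x N matrix (slot for the Hessian)
    h_val : the value h_i(x,t) at the point under consideration
    """
    N = len(eta)
    nu = eta / np.linalg.norm(eta)          # eta / |eta|
    inf_part = nu @ X @ nu                  # <X eta/|eta|, eta/|eta|>
    return s - (alpha / 2) * inf_part - (1 - alpha) / (2 * (N + 2)) * np.trace(X) - h_val

# sample evaluation with made-up inputs
print(P(s=0.3, eta=np.array([1.0, -2.0]), X=np.eye(2), alpha=0.5, h_val=0.1))
```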

2.2. Probability. The optional stopping theorem

We briefly recall (see [5]) that a sequence of random variables \(\{M_{k}\}_{k\geq 1}\) is called a supermartingale (a submartingale) if \[\mathbb{E}[M_{k+1}\arrowvert M_{0},M_{1},\ldots,M_{k}]\leq M_{k} \ \ (\geq).\]

Then, the Optional Stopping Theorem, that we will call (OSTh) in what follows, says: Assume that \(\tau\) is a stopping time such that one of the following conditions holds:

(a) The stopping time \(\tau\) is bounded a.s.,

(b) It holds that \(\mathbb{E}[\tau]<\infty\) and there exists a constant \(c>0\) such that \[\mathbb{E}[|M_{k+1}-M_{k}|\arrowvert M_{0},\ldots,M_{k}]\leq c,\]

(c) There exists a constant \(C>0\) such that \(|M_{\min \{\tau,k\}}|\leq C\) a.s. for every \(k\).

Then \[\mathbb{E}[M_{\tau}]\leq \mathbb{E} [M_{0}] \ \ (\geq),\] if \(\{M_{k}\}_{k\geq 0}\) is a supermartingale (submartingale). For the proof of this result we refer to [1, 5].
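As a quick illustration of the (OSTh), the following sketch simulates a symmetric random walk (a martingale) stopped upon leaving an interval; since \(|M_{\min\{\tau,k\}}|\) is bounded, condition (c) applies and the empirical mean of \(M_\tau\) should be close to \(M_0\). All numerical choices are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def stopped_walk(m0=0, barrier=10):
    """Symmetric +-1 random walk (a martingale) stopped on leaving
    (-barrier, barrier); |M_{min(tau,k)}| <= barrier, so (c) applies."""
    m = m0
    while abs(m) < barrier:
        m += rng.choice((-1, 1))
    return m

samples = [stopped_walk() for _ in range(20000)]
print(np.mean(samples))   # approximately E[M_tau] = M_0 = 0
```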

3. Description of the game

Let us describe in detail the game that we are going to study. It is a two-player zero-sum game. The game is played on two boards, that we call board 1 and board 2, which are two copies of \(\mathbb{R}^N\times[0,T)\), where there is a fixed smooth bounded domain \(\Omega\subset \mathbb{R}^N\). We fix two final payoff functions \(f,g:(\mathbb{R}^N\backslash \Omega)\times[0,T) \rightarrow \mathbb{R}\). These are two Lipschitz functions with \(f\geq g\). We also have two initial conditions \(u_0,v_0:\Omega\rightarrow\mathbb{R}\), bounded Lipschitz functions such that \(u_0\ge v_0\), and two running payoff functions \(h_1,h_2:\Omega\times[0,T)\rightarrow \mathbb{R}\) (we also assume that they are Lipschitz functions), that will be used in the first and in the second board respectively. Take a positive parameter \(\varepsilon\) that controls the size of the steps on both boards simultaneously. Let us use two games, with different rules, associated with two different parabolic \(p-\)Laplacian operators for the first and the second board respectively. To this end, let us fix two numbers \(0<\alpha_i<1\) for \(i=1,2\). In the first board the rules are the following: with probability \(\alpha_1\) we play with Tug-of-War rules, descending to the \(t-\varepsilon^2\) level; this means that a fair coin is tossed and the player who wins the coin toss chooses the next position inside the ball \(B_{\varepsilon}(x)\), but descending to the \(t-\varepsilon^2\) level. That is, the next position of the game will be a point of the form \((y,t-\varepsilon^2)\) with \(y\in B_\varepsilon(x)\), \(y\) chosen by the player who wins the coin toss. On the other hand, with probability \((1-\alpha_1)\) we play with a random walk rule: the next position is chosen at random in \(B_{\varepsilon}(x)\) with uniform probability, but descending again to the \(t-\varepsilon^2\) level. That is, with probability \((1-\alpha_1)\) the next position of the token will be \((y,t-\varepsilon^2)\) with \(y\in B_\varepsilon(x)\) chosen at random. Playing in the first board we add a running payoff of amount \(\varepsilon^2 h_1(x,t-\varepsilon^2)\) (Player I gets \(\varepsilon^2 h_1(x,t-\varepsilon^2)\) and Player II pays the same amount). We call this game the \(J_1\) game. Analogously, in the second board we use \(\alpha_2\) to encode the probability that we play Tug-of-War and \((1-\alpha_2)\) for the probability of moving at random, in both cases taking the next position in the \(t-\varepsilon^2\) level, this time with a running payoff of amount \(\varepsilon^2 h_2 (x,t-\varepsilon^2)\). We call this game \(J_2\).

To the rules that we described for the two boards, \(J_1\) and \(J_2\), we add the following ways of changing boards: in the first board, with probability \(\frac{1}{2}\) the game remains in the first board and the \(J_1\) game is played, and with probability \(\frac{1}{2}\) Player I decides either to play with \(J_1\) rules (and the game position remains on the first board) or to change boards, in which case the new position of the token is chosen playing the \(J_2\) game rule in the second board. In the second board the rule is just the opposite: in this case, with probability \(\frac{1}{2}\) the token remains in the second board and the \(J_2\) game is played, and with probability \(\frac{1}{2}\) Player II decides either to play with \(J_2\) game rules (and remain on the second board) or to change boards and play in the first board with the \(J_1\) game rules.

The game starts with a token at an initial position \((x_0,t_0)\in\Omega\times(0,T)\) in one of the two boards. After the first play the game continues with the same rules described before. This gives a random sequence of points (positions of the token) and a stopping time \(\tau\) (the first time that the position of the token is outside \(\Omega\times(0,T)\) in any of the two boards). The sequence of positions will be denoted by \[\Big\{(x_0,t_0,j_0),(x_1,t_1,j_1),\dots (x_\tau,t_\tau, j_\tau) \Big\},\] where \((x_k,t_k)\in\Omega\times(0,T)\) for \(0\le k\le \tau-1\) (and \((x_\tau,t_\tau)\notin\Omega\times(0,T)\)) and the third variable, \(j_k\in\{1,2\}\), is just an index that indicates in which board we are playing, \(j_k=1\) if the position of the token is in the first board, and \(j_k=2\) if the token is in the second board. As we mentioned, the game ends when the token leaves \(\Omega\times(0,T)\) at some point \((x_{\tau},t_\tau,j_\tau)\) (let us observe that if \(t_{\tau-1}-\varepsilon^2<0\) we will consider \(t_\tau=0\)). In this case the final payoff (the amount that Player I gets and Player II pays) is given by \(w_1(x_\tau,t_\tau)\) if \(j_\tau =1\), where \(w_1\) is the function defined in (6) (the token leaves the domain in the first board), and by \(w_2(x_\tau,t_\tau)\), where \(w_2\) was defined in (7), if \(j_\tau =2\) (the token leaves in the second board). Hence, taking into account the running payoff and the final payoff, the total payoff of a particular occurrence of the game is given by \[\begin{aligned} \displaystyle \mbox{total payoff} : =& w_1(x_{\tau},t_\tau)\chi_{\{j=1\}}(j_{\tau})+w_2(x_{\tau},t_\tau)\chi_{\{j=2\}}(j_{\tau})\notag\\ &+\varepsilon^2\sum_{k=0}^{\tau -1}\Big(h_1 (x_k,t_{k+1})\chi_{\{j=1\}}(j_{k+1})+h_2 (x_k,t_{k+1})\chi_{\{j=2\}}(j_{k+1})\Big). \end{aligned} \tag{29}\]

Notice that the total payoff is the sum of the final payoff (given by \(w_1(x_{\tau},t_\tau)\) or by \(w_2(x_{\tau},t_\tau)\) according to the board at which the token leaves the domain) and the running payoff that is given by \(\varepsilon^2 h_1(x_k,t_{k+1})\) and \(\varepsilon^2 h_2(x_k,t_{k+1})\) according to the board in which we play at each step.

Now, the players fix two strategies, \(S_{I}\) for Player I and \(S_{II}\) for Player II. That is, each player decides whether to play or to change boards in the corresponding board when necessary, and in each board they select the point to go to provided the coin toss of the Tug-of-War game is favorable. Then, once we fix the strategies \(S_{I}\) and \(S_{II}\), everything depends only on the underlying probability: the fair coin toss that gives the deciding player the possibility of changing boards (with probability 1/2–1/2), the coin toss that decides when to play Tug-of-War and when to move at random (remark that this probability is given by \(\alpha_1\) or \(\alpha_2\) and is different in the two boards) and the coin toss (with probability 1/2–1/2) that decides who chooses the next position of the game if the Tug-of-War game is played. With respect to this underlying probability, with fixed strategies \(S_{I}\) and \(S_{II}\), we can compute the expected total payoff starting at \((x,t,j)\) (recall that \(j=1,2\) indicates the board on which the game position lies), \[\mathbb{E}_{S_{I},S_{II}}^{(x,t,j)}[\mbox{total payoff}].\]
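To make the role of the strategies and of \(\mathbb{E}_{S_{I},S_{II}}^{(x,t,j)}[\mbox{total payoff}]\) concrete, here is a minimal Monte Carlo sketch in dimension \(N=1\) with \(\Omega=(0,1)\). The data and the (deliberately naive) strategies, where Player I always pulls right, Player II always pulls left, and the deciding player never switches boards, are our own illustrative assumptions, not the optimal strategies studied below.

```python
import numpy as np

rng = np.random.default_rng(2)

eps = 0.05
alphas = (0.6, 0.4)                                # alpha_1, alpha_2
hs = (lambda x, t: 1.0, lambda x, t: -1.0)         # running payoffs h_1, h_2
ws = (lambda x, t: 1.0, lambda x, t: 0.0)          # final payoffs w_1 >= w_2

def play_once(x, t, j):
    """One occurrence of the game; returns the total payoff (29).

    With these stay-put strategies the board-change coin is irrelevant,
    so the board index j stays fixed along the whole trajectory."""
    payoff = 0.0
    while 0.0 < x < 1.0 and t > 0.0:
        t = max(t - eps**2, 0.0)                   # descend one time level
        alpha, h = alphas[j - 1], hs[j - 1]
        payoff += eps**2 * h(x, t)                 # running payoff h_i(x_k, t_{k+1})
        if rng.random() < alpha:                   # Tug-of-War step
            x += 0.9 * eps if rng.random() < 0.5 else -0.9 * eps
        else:                                      # random-walk step
            x += eps * rng.uniform(-1.0, 1.0)
    return payoff + ws[j - 1](x, t)                # final payoff w_j

est = np.mean([play_once(0.5, 0.25, 1) for _ in range(5000)])
print(est)   # estimate of the expected total payoff starting at (0.5, 0.25, 1)
```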

The game is said to have a value if \[\label{value} \Omega^{\varepsilon}(x,t,j)=\sup_{S_{I}}\inf_{S_{II}}\mathbb{E}_{S_{I},S_{II}}^{(x,t,j)}[\mbox{total payoff}] = \inf_{S_{II}}\sup_{S_{I}}\mathbb{E}_{S_{I},S_{II}}^{(x,t,j)}[\mbox{total payoff}]. \tag{30}\]

Notice that this value \(\Omega^\varepsilon\) is the best possible expected outcome that Player I and Player II may expect to obtain playing their best. Here we prove that this game has a value. The value of the game, \(\Omega^\varepsilon\), in fact consists of two functions: the first one defined in the first board, \[u^{\varepsilon}(x,t) := \Omega^\varepsilon (x,t,1),\] which is the expected outcome of the game if the initial position is in the first board (and the players play their best), and \[v^{\varepsilon}(x,t) := \Omega^\varepsilon (x,t,2),\] which is the expected outcome of the game when the initial position is in the second board. It turns out that these two functions \(u^\varepsilon\), \(v^\varepsilon\) satisfy a system of equations that is called the Dynamic Programming Principle (DPP) in the literature. In our case, the corresponding DPP for the game is given by \[\left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t)=\frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} \quad & (x,t) \in \Omega\times (0,T), \\[8pt] \displaystyle v^{\varepsilon}(x,t)=\frac{1}{2} J_2(v^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\min\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} \quad & (x,t) \in \Omega\times (0,T), \end{array} \right. \tag{31}\] with the boundary conditions \[\left\lbrace \begin{array}{ll} u^{\varepsilon}(x,t) = f(x,t) \quad & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times [0,T), \\[8pt] v^{\varepsilon}(x,t) = g(x,t) \quad & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times [0,T), \end{array} \right. \tag{32}\] and the initial conditions \[\left\lbrace \begin{array}{ll} u^\varepsilon(x,0)=u_0(x) \quad & x\in \Omega,\\[8pt] v^\varepsilon(x,0)=v_0(x) \quad & x\in \Omega. \end{array} \right. \tag{33}\]

Remark 3. Observe that the DPP reflects the rules of the game described above. That is, with probability \(\frac{1}{2}\) the game remains on the board where it is and the corresponding game is played, and the \(\max\) and \(\min\) that appear in the DPP correspond to the choices of the players to change boards (or not). In the first board Player I (who aims to maximize the expected outcome) is the one who decides, while in the second board Player II (who wants to minimize) decides. The games \(J_1\) and \(J_2\) defined in (4) and (5) show that, on each board, the players play Tug-of-War with noise with two different parameters (\(\alpha_1\) and \(\alpha_2\)) and running payoffs \(h_1\) and \(h_2\) respectively.

In the next section we will prove that there exists a unique solution to the DPP and this solution is the value of the game that we just described.

4. Existence and uniqueness for the DPP

In this section we will prove that the value of the game is the solution to the DPP. But before that, let us prove that there exists a solution to the DPP (1). To this end we introduce an auxiliary function. As \(\Omega\subset \mathbb{R}^N\) is bounded, there exists \(R>0\) such that \(\Omega \subset \subset B_R(0)\). Given two constants \(K>0\) and \(M>0\), let us consider the function \[z_0(x,t)=\left\lbrace \begin{array}{ll} \displaystyle 2K(|x|^2-R^2)-M \quad & \ \ (x,t) \in B_R(0)\times(0,T), \\[7pt] \displaystyle -M \quad & \ \ (x,t) \in \left((\mathbb{R}^N \setminus B_R(0))\times[0,T)\right)\cup\left(B_R(0)\times\{0\}\right). \end{array} \right. \tag{34}\]

This function has the following properties: The function \(z_0\) is \({C}^{2,1}(\Omega\times(0,T))\), is bounded (\(\lVert z_0\rVert_{\infty}\leq 2KR^2+M\)) and, since \(z_0\) is radial, it holds that \[\Delta z_0= \frac{\partial^2 z_0}{\partial r^2} + \left(\frac{N-1}{r}\right) \frac{\partial z_0}{\partial r} = 4K + \left(\frac{N-1}{r}\right)4Kr = 4KN,\] and \[\Delta^1_{\infty} z_0=\frac{\partial^2 z_0}{\partial r^2}=4K,\] inside \(\Omega\). Then, we get \[\Delta^1_p z_0=\frac{\alpha_1}{2}(4K)+\frac{(1-\alpha_1)}{2(N+2)}\left(4KN\right)\geq K,\] and \[\Delta^1_q z_0=\frac{\alpha_2}{2}(4K)+\frac{(1-\alpha_2)}{2(N+2)}\left(4KN\right)\geq K.\]

Finally, this function verifies \[\frac{\partial z_0}{\partial t}=0.\]
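These identities for \(z_0\) are easy to check symbolically. The following SymPy sketch verifies them in dimension \(N=3\) (the dimension is an arbitrary choice of ours), away from the origin where \(\nabla z_0\neq 0\).

```python
import sympy as sp

N = 3                                           # example dimension (assumption)
xs = sp.symbols(f'x1:{N+1}')
K, R, M = sp.symbols('K R M', positive=True)
z0 = 2*K*(sum(xi**2 for xi in xs) - R**2) - M   # the function (34) inside B_R(0)

grad = sp.Matrix([sp.diff(z0, xi) for xi in xs])
H = sp.hessian(z0, xs)
lap = sum(sp.diff(z0, xi, 2) for xi in xs)
inf_lap = sp.simplify((grad.T * H * grad)[0] / grad.dot(grad))

print(sp.simplify(lap - 4*K*N))    # 0 : Delta z0 = 4KN
print(sp.simplify(inf_lap - 4*K))  # 0 : normalized infinity Laplacian = 4K
```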

The following Lemma proves that this function is a subsolution to the DPP.

Lemma 1. Let \[K=\max\{\lVert h_1\rVert_{\infty},\lVert h_2\rVert_{\infty}\}+2,\] and \[M=\max\{\lVert f\rVert_{\infty},\lVert g\rVert_{\infty},\lVert u_0\rVert_{\infty},\lVert v_0\rVert_{\infty}\}.\]

If we consider the function \(z_0\) with these two constants, for \(\varepsilon\) small enough the pair \((z_0,z_0)\) is a subsolution to the DPP (1). That is, \[\label{DPPSub} \left\lbrace \begin{array}{ll} \displaystyle z_0(x,t)\leq \frac{1}{2} J_1(z_0)(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(z_0)(x,t-\varepsilon^2), J_2(z_0)(x,t-\varepsilon^2)\Big\} \quad & (x,t) \in \Omega\times(0,T), \\[8pt] \displaystyle z_0(x,t)\leq \frac{1}{2} J_2(z_0)(x,t-\varepsilon^2)+\frac{1}{2}\min\Big\{ J_1(z_0)(x,t-\varepsilon^2), J_2(z_0)(x,t-\varepsilon^2)\Big\} & (x,t) \in \Omega\times(0,T), \end{array} \right. \tag{35}\] with the boundary conditions \[\label{BCS} \left\lbrace \begin{array}{ll} z_0(x,t) \leq f(x,t) \quad & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times[0,T), \\[8pt] z_0(x,t) \leq g(x,t) \quad & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times[0,T), \end{array} \right. \tag{36}\] and the initial conditions \[\label{ICS} \left\lbrace \begin{array}{ll} z_0(x,0) \leq u_0(x) \quad & x \in \Omega, \\[8pt] z_0(x,0) \leq v_0(x) \quad & x \in \Omega. \end{array} \right. \tag{37}\]

Proof. First, we observe that the inequalities (36) and (37) hold for \((x,t)\in(\mathbb{R}^{N}\backslash\Omega)\times[0,T)\) and \((x,t)\in\Omega\times\{0\}\). Let us recall the definition of \(J_1\) and \(J_2\): \[\label{J11} \begin{array}{ll} \displaystyle J_1(w)(x,t)=\alpha_1\left[\frac{1}{2} \sup_{y \in B_{\varepsilon}(x)}w(y,t) + \frac{1}{2} \inf_{y \in B_{\varepsilon}(x)}w(y,t)\right] +(1-\alpha_1) \fint_{B_{\varepsilon}(x)}w(y,t)\,dy+\varepsilon^2h_1(x,t), \end{array} \tag{38}\] and \[\label{J22} \begin{array}{ll} \displaystyle J_2(w)(x,t)=\alpha_2\left[\frac{1}{2} \sup_{y \in B_{\varepsilon}(x)}w(y,t) + \frac{1}{2} \inf_{y \in B_{\varepsilon}(x)}w(y,t)\right]+(1-\alpha_2) \fint_{B_{\varepsilon}(x)}w(y,t)\,dy+\varepsilon^2h_2(x,t). \end{array} \tag{39}\]

Then, let us prove the following claim:
Claim. For \((x,t)\in\Omega\times(0,T)\) it holds that \[\label{dppz0} z_0(x,t)\leq \min\Big\{J_1(z_0)(x,t-\varepsilon^2),J_2(z_0)(x,t-\varepsilon^2)\Big\}-\varepsilon^2. \tag{40}\] Proof of the Claim. We aim to show that \[\label{rec1} \varepsilon^2 \leq \min\Big\{J_1(z_0)(x,t-\varepsilon^2)-z_0(x,t),J_2(z_0)(x,t-\varepsilon^2)-z_0(x,t)\Big\}. \tag{41}\] Using Taylor’s expansions we obtain \[\begin{aligned} \displaystyle J_1(z_0)(x,t-\varepsilon^2)-z_0(x,t)=&\,\alpha_1\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}(z_0(y,t-\varepsilon^2)-z_0(x,t-\varepsilon^2))+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}(z_0(y,t-\varepsilon^2)-z_0(x,t-\varepsilon^2))\right]\notag\\ &+(1-\alpha_1)\fint_{B_{\varepsilon}(x)}(z_0(y,t-\varepsilon^2)-z_0(x,t-\varepsilon^2))\,dy+z_0(x,t-\varepsilon^2)-z_0(x,t)+\varepsilon^2 h_1(x,t-\varepsilon^2)\notag\\ \displaystyle =&\,\Big(-\frac{\partial z_0}{\partial t}(x,t)+\frac{\alpha_1}{2}\Delta^1_{\infty}z_0(x,t-\varepsilon^2)+\frac{(1-\alpha_1)}{2(N+2)}\Delta z_0(x,t-\varepsilon^2)\Big)\varepsilon^2+\varepsilon^2 h_1(x,t-\varepsilon^2) +o(\varepsilon^2). \end{aligned} \tag{42}\]

Analogously, \[\begin{aligned} \displaystyle J_2(z_0)(x,t-\varepsilon^2)-z_0(x,t) =&\Big(-\frac{\partial z_0}{\partial t}(x,t)+\frac{\alpha_2}{2}\Delta^1_{\infty}z_0(x,t-\varepsilon^2) +\frac{(1-\alpha_2)}{2(N+2)}\Delta z_0(x,t-\varepsilon^2)\Big)\varepsilon^2\notag\\ &+\varepsilon^2 h_2(x,t-\varepsilon^2) +o(\varepsilon^2). \end{aligned} \tag{43}\]

If we come back to (41) and divide by \(\varepsilon^2\), we see that it suffices to prove that \[\label{maxdosop1} \begin{array}{ll} \displaystyle 1\leq\min\Big\{-\frac{\partial z_0}{\partial t}(x,t)+\Delta^1_p z_0(x,t-\varepsilon^2)+h_1(x,t-\varepsilon^2),-\frac{\partial z_0}{\partial t}(x,t)+\Delta^1_q z_0(x,t-\varepsilon^2)+h_2(x,t-\varepsilon^2)\Big\}+\frac{o(\varepsilon^2)}{\varepsilon^2}, \end{array} \tag{44}\] for \(\varepsilon>0\) small enough. Using the properties of \(z_0\) we have \[-\frac{\partial z_0}{\partial t}(x,t)+\Delta^1_p z_0(x,t-\varepsilon^2)+h_1(x,t-\varepsilon^2)\ge K+h_1(x,t-\varepsilon^2)\ge 2>1, \tag{45}\] and \[-\frac{\partial z_0}{\partial t}(x,t)+\Delta^1_q z_0(x,t-\varepsilon^2)+h_2(x,t-\varepsilon^2)\ge K+h_2(x,t-\varepsilon^2)\ge 2>1. \tag{46}\]

Thus, the inequality (44) holds for \(\varepsilon\) small enough.

This claim implies that \((z_0,z_0)\) is a subsolution to the DPP (1). This ends the proof. ◻

Now, starting with \(u^{\varepsilon}_0=v^{\varepsilon}_0=z_0\) we will define inductively for \(n\geq 0\) \[\label{Ind} \left\lbrace \begin{array}{ll} \displaystyle u_{n+1}^{\varepsilon}(x,t)= \frac{1}{2} J_1(u_n^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u_n^{\varepsilon})(x,t-\varepsilon^2), J_2(v_n^{\varepsilon})(x,t-\varepsilon^2)\Big\} , & (x,t) \in \Omega\times(0,T), \\[8pt] \displaystyle v_{n+1}^{\varepsilon}(x,t)= \frac{1}{2} J_2(v_n^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\min\Big\{ J_1(u_n^{\varepsilon})(x,t-\varepsilon^2), J_2(v_n^{\varepsilon})(x,t-\varepsilon^2)\Big\}, & (x,t) \in \Omega\times(0,T), \end{array} \right. \tag{47}\] with boundary conditions \[\label{BCInd} \left\lbrace \begin{array}{ll} u_{n+1}^{\varepsilon}(x,t) = f(x,t), & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times[0,T), \\[8pt] v_{n+1}^{\varepsilon}(x,t)= g(x,t), & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times[0,T), \end{array} \right. \tag{48}\] and initial conditions \[\label{ICInd} \left\lbrace \begin{array}{ll} u_{n+1}^{\varepsilon}(x,0)= u_0(x), & x \in \Omega, \\[8pt] v_{n+1}^{\varepsilon}(x,0)= v_0(x), & x \in \Omega. \end{array} \right. \tag{49}\]
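Before analyzing this iteration, it may help to see it in action. The following Python sketch runs (47)-(49) in dimension \(N=1\) on \(\Omega=(0,1)\) with made-up data satisfying \(f\ge g\) and \(u_0\ge v_0\); for simplicity it starts from a large negative constant instead of the exact \(z_0\). All grid sizes and data are our own illustrative assumptions. Since level \(k\) only depends on level \(k-1\), the sweeps stabilize after at most one sweep per time level.

```python
import numpy as np

# 1-d sketch of the iteration (47)-(49); all data below are assumptions
eps, hx = 0.1, 0.025
X = np.arange(-eps, 1 + eps + hx, hx)        # grid for Omega=(0,1) plus an eps-collar
inside = (X > 0) & (X < 1)
ts = np.arange(0.0, 0.2, eps**2)             # time levels t_k = k * eps^2
a1, a2 = 0.6, 0.4                            # alpha_1, alpha_2
h1 = lambda x, t: 1.0
h2 = lambda x, t: -1.0
f = lambda x, t: 1.0                         # boundary data with f >= g
g = lambda x, t: 0.0

def J(row, alpha, h, t):
    """Discrete version of (4)-(5): sup/inf/average of one time level over B_eps(x)."""
    out = np.empty_like(row)
    for i, xi in enumerate(X):
        ball = np.abs(X - xi) < eps
        out[i] = (alpha * 0.5 * (row[ball].max() + row[ball].min())
                  + (1 - alpha) * row[ball].mean() + eps**2 * h(xi, t))
    return out

u = np.full((len(ts), len(X)), -10.0)        # start from a large negative constant
v = u.copy()
for k, t in enumerate(ts):                   # impose the boundary data (48)
    u[k, ~inside], v[k, ~inside] = f(X[~inside], t), g(X[~inside], t)
u[0, inside], v[0, inside] = 1.0, 0.0        # initial data (49) with u0 >= v0

for n in range(500):                         # the iteration (47)
    un, vn = u.copy(), v.copy()
    for k in range(1, len(ts)):
        J1v = J(un[k - 1], a1, h1, ts[k - 1])
        J2v = J(vn[k - 1], a2, h2, ts[k - 1])
        u[k, inside] = (0.5 * J1v + 0.5 * np.maximum(J1v, J2v))[inside]
        v[k, inside] = (0.5 * J2v + 0.5 * np.minimum(J1v, J2v))[inside]
    if max(np.abs(u - un).max(), np.abs(v - vn).max()) < 1e-12:
        break
print(n, float(u[-1, len(X) // 2]), float(v[-1, len(X) // 2]))  # u >= v, cf. Remark 1
```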

Let us prove the following Lemma.

Lemma 2. The sequence \(\{(u^\varepsilon_n,v^\varepsilon_n)\}_{n\ge 0}\) verifies:

(a) \(u_n^{\varepsilon}\leq u_{n+1}^{\varepsilon}\) and \(v_n^{\varepsilon}\leq v_{n+1}^{\varepsilon}\) for all \(n\geq 0\).

(b) the pair \((u^\varepsilon_n,v^\varepsilon_n)\) is a subsolution to the DPP (1) for all \(n\geq 0\).

Proof. (a) By induction: \[\displaystyle u_{1}^{\varepsilon}(x,t)= \frac{1}{2} J_1(u_0^{\varepsilon})(x,t-\varepsilon^2) \displaystyle +\frac{1}{2}\max\Big\{ J_1(u_0^{\varepsilon})(x,t-\varepsilon^2), J_2(v_0^{\varepsilon})(x,t-\varepsilon^2)\Big\}\ge u^\varepsilon_0(x,t), \tag{50}\] for \((x,t)\in\Omega\times(0,T)\). Here we used that \(u^\varepsilon_0=v^\varepsilon_0=z_0\) and that \((z_0,z_0)\) is a subsolution to the DPP.

Analogously, \[\displaystyle v_{1}^{\varepsilon}(x,t)= \frac{1}{2} J_2(v_0^{\varepsilon})(x,t-\varepsilon^2) \displaystyle +\frac{1}{2}\min\Big\{ J_1(u_0^{\varepsilon})(x,t-\varepsilon^2), J_2(v_0^{\varepsilon})(x,t-\varepsilon^2)\Big\}\ge v^\varepsilon_0(x,t). \tag{51}\]

Outside the domain it is clear that \[\begin{aligned} \begin{cases} u_{1}^{\varepsilon}(x,t) =& f(x,t)\ge u^\varepsilon_0(x,t), \qquad\qquad (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times[0,T),\\ v_{1}^{\varepsilon}(x,t)=& g(x,t)\ge v^\varepsilon_0(x,t) , \qquad\qquad (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times[0,T),\\ u_{1}^{\varepsilon}(x,0)=& u_0(x)\ge u^\varepsilon_0(x,0), \qquad\qquad~~ x \in \Omega, \\ v_{1}^{\varepsilon}(x,0)=& v_0(x)\ge v^\varepsilon_0(x,0), \qquad\qquad~~ x \in \Omega.\end{cases} \end{aligned} \tag{52}\]

Now, let us deal with the inductive step: assume that \(u^\varepsilon_{n-1}\le u^\varepsilon_{n}\) and \(v^\varepsilon_{n-1}\le v^\varepsilon_{n}\). The definition of \(J_1\) and \(J_2\) implies that \[J_1(u^\varepsilon_n)(x,t-\varepsilon^2)\ge J_1(u^\varepsilon_{n-1})(x,t-\varepsilon^2) \quad \mbox{and} \quad J_2(v^\varepsilon_n)(x,t-\varepsilon^2)\ge J_2(v^\varepsilon_{n-1})(x,t-\varepsilon^2).\]

Hence, \[\begin{aligned} \displaystyle u_{n+1}^{\varepsilon}(x,t)=& \frac{1}{2} J_1(u_n^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u_n^{\varepsilon})(x,t-\varepsilon^2), J_2(v_n^{\varepsilon})(x,t-\varepsilon^2)\Big\}\notag\\ \displaystyle \ge& \frac{1}{2} J_1(u_{n-1}^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u_{n-1}^{\varepsilon})(x,t-\varepsilon^2), J_2(v_{n-1}^{\varepsilon})(x,t-\varepsilon^2)\Big\}=u^\varepsilon_n(x,t). \end{aligned} \tag{53}\]

Analogously, we get \(v^\varepsilon_{n+1}\ge v^\varepsilon_n\).

(b) \((z_0,z_0)\) is a subsolution thanks to Lemma 1. Using (a) and the monotonicity of \(J_1\) and \(J_2\) we get \[\begin{aligned} \displaystyle \frac{1}{2} J_1&(u_{n+1}^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u_{n+1}^{\varepsilon})(x,t-\varepsilon^2), J_2(v_{n+1}^{\varepsilon})(x,t-\varepsilon^2)\Big\}\notag\\ \displaystyle &\ge \frac{1}{2} J_1(u_{n}^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u_{n}^{\varepsilon})(x,t-\varepsilon^2), J_2(v_{n}^{\varepsilon})(x,t-\varepsilon^2)\Big\}=u^\varepsilon_{n+1}(x,t). \end{aligned} \tag{54}\]

Analogously for \(v^\varepsilon_n\). This ends the proof. ◻

Let us prove that the sequences are uniformly bounded. To this end we consider \(w_0=-z_0\). This function is bounded and verifies \[\label{DPP2} \begin{array}{ll} \displaystyle w_0 (x,t) \geq \max \Big\{J_1(w_0)(x,t-\varepsilon^2),J_2(w_0)(x,t-\varepsilon^2)\Big\}+\varepsilon^2, & \ (x,t) \in \Omega\times(0,T), \end{array} \tag{55}\] for \(\varepsilon>0\) small enough. Then we have the following Lemma.

Lemma 3. \[u^\varepsilon_n\le w_0 \quad \mbox{and} \quad v^\varepsilon_n\le w_0, \tag{56}\] for all \(n\ge 0\).

Proof. It is clear that the inequality holds outside the domain \(\Omega\times(0,T)\). Inside the domain we argue by contradiction. Suppose that there exists \(n_0\in\mathbb{N}\) such that \[\max\Big\{\sup_{\Omega\times(0,T)}(u^\varepsilon_{n_0}-w_0),\sup_{\Omega\times(0,T)}(v^\varepsilon_{n_0}-w_0)\Big\}=\sup_{\Omega\times(0,T)}(u^\varepsilon_{n_0}-w_0)=\theta>0.\]

Let \((x_k,t_k)\in\Omega\times(0,T)\) such that \[\theta-\frac{1}{k}<(u^\varepsilon_{n_0}-w_0)(x_k,t_k).\]

Using the inequalities we get \[\begin{aligned} \label{dossup} \displaystyle \theta-\frac{1}{k}<&(u^\varepsilon_{n_0}-w_0)(x_k,t_k)\notag\\ \le&\max \Big\{J_1(u^\varepsilon_{n_0})(x_k,t_k-\varepsilon^2),J_2(v^\varepsilon_{n_0})(x_k,t_k-\varepsilon^2)\Big\} \displaystyle-\max \Big\{J_1(w_0)(x_k,t_k-\varepsilon^2),J_2(w_0)(x_k,t_k-\varepsilon^2)\Big\}-\varepsilon^2 \notag\\ \displaystyle \le& \max \Big\{(J_1(u^\varepsilon_{n_0})-J_1(w_0))(x_k,t_k-\varepsilon^2),(J_2(v^\varepsilon_{n_0})-J_2(w_0))(x_k,t_k-\varepsilon^2)\Big\}-\varepsilon^2. \end{aligned} \tag{57}\]

Here we used that \(\max\{a,b\}-\max\{c,d\}\le\max\{a-c,b-d\}\). Let us consider the inequalities \[\inf_{y \in B_{\varepsilon}(x_k)}u^\varepsilon_{n_0}(y,t_k-\varepsilon^2)-\inf_{y \in B_{\varepsilon}(x_k)}w_0(y,t_k-\varepsilon^2)\le \sup_{y \in B_{\varepsilon}(x_k)}(u^\varepsilon_{n_0}-w_0)(y,t_k-\varepsilon^2)\le \theta, \tag{58}\] and \[\sup_{y \in B_{\varepsilon}(x_k)}u^\varepsilon_{n_0}(y,t_k-\varepsilon^2)-\sup_{y \in B_{\varepsilon}(x_k)}w_0(y,t_k-\varepsilon^2)\le \sup_{y \in B_{\varepsilon}(x_k)}(u^\varepsilon_{n_0}-w_0)(y,t_k-\varepsilon^2)\le\theta, \tag{59}\] and finally \[\fint_{B_{\varepsilon}(x_k)}(u^\varepsilon_{n_0}-w_0)(y,t_k-\varepsilon^2)\,dy\le \theta. \tag{60}\]

Hence, using again the definition of \(J_1\) (38) and \(J_2\) (39) we get that \[J_1(u^\varepsilon_{n_0})(x,t-\varepsilon^2)-J_1(w_0)(x,t-\varepsilon^2)\le\theta\quad \mbox{and} \quad J_2(v^\varepsilon_{n_0})(x,t-\varepsilon^2)-J_2(w_0)(x,t-\varepsilon^2)\le\theta. \tag{61}\]

If we come back to (57) we get \[\theta-\frac{1}{k}<(u^\varepsilon_{n_0}-w_0)(x_k,t_k)\le\theta-\varepsilon^2, \tag{62}\] which is a contradiction if \(k\in\mathbb{N}\) is large enough (so that \(\frac{1}{k}<\varepsilon^2\)). This ends the proof. ◻

Finally, we conclude the following result.

Corollary 1. There exists a constant \(\Lambda>0\) such that \[u_n^{\varepsilon}\leq \Lambda \quad \mbox{and} \quad v_n^{\varepsilon}\leq \Lambda, \tag{63}\] for all \(n\geq 0\).

Now we are ready to prove the existence of a solution to the DPP. Let us start by noticing that, since the sequences \(u^\varepsilon_n\) and \(v^\varepsilon_n\) are nondecreasing and bounded, the following limits exist \[\label{DEFuv} u^{\varepsilon}(x,t):=\lim_{n\rightarrow \infty}u_n^{\varepsilon}(x,t) \quad \mbox{and} \quad v^{\varepsilon}(x,t):=\lim_{n\rightarrow \infty}v_n^{\varepsilon}(x,t). \tag{64}\]

Theorem 2. The pair \((u^\varepsilon,v^\varepsilon)\) is a solution to the DPP (1).

To prove this theorem we will first prove a technical lemma for a single equation, which is of independent interest.

Lemma 4. Consider the following DPP \[\label{TOWN} \left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x)= \alpha\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}u^\varepsilon(y)+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}u^\varepsilon(y)\right]+(1-\alpha)\fint_{B_{\varepsilon}(x)}u^\varepsilon(y)\,dy, & x \in \Omega, \\[8pt] u^{\varepsilon}(x)= f(x), & x \in \mathbb{R}^N\backslash\Omega, \end{array} \right.\tag{65}\] with \(f\) a bounded Lipschitz function. Take \(M=\lVert f\rVert_\infty\). Then \[u_0^\varepsilon(x)=\left\lbrace \begin{array}{ll} \displaystyle -M , & x \in \Omega, \\[8pt] f(x), & x \in \mathbb{R}^N\backslash\Omega, \end{array} \right.\] is a subsolution to (65). Let us consider the following iteration for \(n\ge 0\): \[\label{TOWNit} \left\lbrace \begin{array}{ll} \displaystyle u_{n+1}^{\varepsilon}(x) = \alpha\left[\frac{1}{2} \sup_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)+\frac{1}{2} \inf_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)\right] + (1-\alpha) \fint_{B_{\varepsilon}(x)} u_n^\varepsilon(y)\,dy , & x \in \Omega, \\[12pt] u_{n+1}^{\varepsilon}(x) = f(x), & x \in \mathbb{R}^N\backslash\Omega. \end{array} \right. \tag{66}\]

This sequence \((u_n)_{n\ge 0}\) is nondecreasing and uniformly bounded (\(\lVert u_n\rVert_\infty\le M\) for all \(n\ge 0\)). Finally, the function \[u^\varepsilon(x):=\lim_{n\rightarrow \infty}u_n^\varepsilon(x), \tag{67}\] is a solution to the DPP (65).

Proof. Let us start proving that \(u^\varepsilon_0\) is a subsolution. That is, \[u_0^\varepsilon(x)=-M\le \alpha\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}u_0^\varepsilon(y)+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}u_0^\varepsilon(y)\right]+(1-\alpha)\fint_{B_{\varepsilon}(x)}u_0^\varepsilon(y)\,dy,\] for \(x\in\Omega\). Here we used that \[-M\le\sup_{y \in B_{\varepsilon}(x)}u_0^\varepsilon(y) \quad \mbox{and} \quad -M\le\inf_{y \in B_{\varepsilon}(x)}u_0^\varepsilon(y),\] and finally \[-M\le \fint_{B_{\varepsilon}(x)}u_0^\varepsilon(y)\,dy.\]

On the other hand \(u^\varepsilon_0(x)=f(x)\) for \(x\in\mathbb{R}^N\backslash\Omega\). The sequence defined by (66) is nondecreasing and composed of subsolutions to the DPP (65) (see Lemma 2).

Let us prove that the sequence is uniformly bounded. In fact, we have that \[u^\varepsilon_n\le M, \tag{68}\] for all \(n\ge 0\). We use an inductive argument. It is clear that \(u^\varepsilon_0\le M\). Suppose that \(u_n^\varepsilon\le M\). Using that \(\sup_{y \in B_{\varepsilon}(x)}u_{n}^\varepsilon(y)\le M\), \(\inf_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)\le M\) and \(\fint_{B_{\varepsilon}(x)}u_n^\varepsilon(y)\,dy\le M\) we get \(u_{n+1}^\varepsilon\le M\).

Now, let us show that \(u^\varepsilon(x):=\lim_{n\rightarrow \infty}u_n^\varepsilon(x)\) is a solution to (65). It is clear that if \(x\in\mathbb{R}^N\backslash\Omega\) \[u^\varepsilon(x)=\lim_{n\rightarrow \infty}u_n^\varepsilon(x)=f(x).\]

For \(x\in\Omega\), let us consider \[\begin{aligned} \label{dosu} \displaystyle (u_{n+1}^{\varepsilon}-u_n^\varepsilon)(x)=& \alpha\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)-\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}u_{n-1}^\varepsilon(y)-\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}u_{n-1}^\varepsilon(y)\right]\notag\\ & +(1-\alpha) \fint_{B_{\varepsilon}(x)}(u_n^\varepsilon-u_{n-1}^\varepsilon)(y)\, dy. \end{aligned} \tag{69}\]

Let us define \[C_n:=\lVert u_n^\varepsilon-u_{n-1}^\varepsilon\rVert_{L^\infty(\Omega)}.\] Using (69) and the inequalities \[\sup_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)-\sup_{y \in B_{\varepsilon}(x)}u_{n-1}^\varepsilon(y)\le\sup_{y \in B_{\varepsilon}(x)}(u^\varepsilon_n-u^\varepsilon_{n-1})(y), \tag{70}\] and \[\inf_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)-\inf_{y \in B_{\varepsilon}(x)}u_{n-1}^\varepsilon(y)\le\sup_{y \in B_{\varepsilon}(x)}(u^\varepsilon_n-u^\varepsilon_{n-1})(y), \tag{71}\] we get \[(u_{n+1}^{\varepsilon}-u_n^\varepsilon)(x)\le \alpha C_n+(1-\alpha)C_n=C_n.\]

Thus, \(C_{n+1}\le C_n\).

Now, let us consider the following set \[\Gamma_1=\Big\{x\in\Omega : d(x,\partial\Omega)<\frac{\varepsilon}{2}\Big\}. \tag{72}\]

Using the assumed regularity of \(\partial\Omega\) (uniform exterior sphere condition) we have \[\eta_1=\sup_{x\in\Gamma_1}\frac{|B_\varepsilon(x)\cap\Omega|}{|B_\varepsilon(x)|}<1.\]

Given \(x\in\Gamma_1\), we get \[(u^\varepsilon_{n+1}-u^\varepsilon_n)(x)\le \alpha C_n+(1-\alpha)\frac{1}{|B_{\varepsilon}(x)|}\int_{B_{\varepsilon}(x)\cap\Omega}(u^\varepsilon_n-u^\varepsilon_{n-1})(y)\,dy\le \alpha C_n+(1-\alpha)\eta_1 C_n. \tag{73}\]

Here we used that \[\fint_{B_{\varepsilon}(x)}(u^\varepsilon_n-u^\varepsilon_{n-1})(y)\,dy=\frac{1}{|B_{\varepsilon}|}\int_{B_{\varepsilon}(x)\cap\Omega}(u^\varepsilon_n-u^\varepsilon_{n-1})(y)\,dy,\] since \((u^\varepsilon_n-u^\varepsilon_{n-1})(x)=0\) when \(x\in\mathbb{R}^N\backslash\Omega\). Thus, \[\label{T1} (u^\varepsilon_{n+1}-u^\varepsilon_n)(x)\le (\alpha+(1-\alpha)\eta_1) C_n\le\theta_1 C_n, \tag{74}\] with \(\theta_1=\alpha+(1-\alpha)\eta_1<1\) for all \(x\in\Gamma_1\). Let us continue with \[\Gamma_2=\Big\{x\in\Omega : d(x,\Gamma_1)<\frac{\varepsilon}{2}\Big\}. \tag{75}\]

Notice that \(\Gamma_1\subset\Gamma_2\). Let us define \[\eta_2=\sup_{x\in\Gamma_2}\frac{|B_\varepsilon(x)\cap(\Omega\backslash\Gamma_1)|}{|B_\varepsilon(x)|}<1.\]

Here we used again the uniform exterior sphere condition. Given \(x\in\Gamma_2\) we obtain \[\begin{aligned} \displaystyle (u^\varepsilon_{n+2}-u^\varepsilon_{n+1})(x)\le&\alpha C_{n+1}+(1-\alpha)\frac{1}{|B_\varepsilon(x)|}\left[\int_{B_{\varepsilon}(x)\cap(\Omega\backslash\Gamma_1)}(u^\varepsilon_{n+1}-u^\varepsilon_n)(y)dy+\int_{B_{\varepsilon}(x)\cap\Gamma_1}(u^\varepsilon_{n+1}-u^\varepsilon_n)(y)dy\right]\notag\\ \displaystyle \le&\alpha C_n+(1-\alpha)\left[\eta_2 C_n+(1-\eta_2)\theta_1C_n\right]=\left[\alpha+(1-\alpha)[\eta_2+(1-\eta_2)\theta_1]\right]C_n=\theta_2 C_n. \end{aligned} \tag{76}\] where \(\theta_2=\alpha+(1-\alpha)[\eta_2+(1-\eta_2)\theta_1]<1\). Here we used (74). Notice that \(\theta_1<\theta_2<1\). Iterating this procedure we obtain \[\Gamma_k=\Big\{x\in\Omega : d(x,\Gamma_{k-1})<\frac{\varepsilon}{2}\Big\}, \quad \mbox{and}\quad \eta_k=\sup_{x\in\Gamma_k}\frac{|B_\varepsilon(x)\cap(\Omega\backslash\Gamma_{k-1})|}{|B_\varepsilon(x)|}<1 . \tag{77}\]

Then, for \(x\in\Gamma_k\) \[(u^\varepsilon_{n+k}-u^\varepsilon_{n+k-1})(x)\le\theta_k C_n, \tag{78}\] where \(\theta_k=\alpha+(1-\alpha)[\eta_k+(1-\eta_k)\theta_{k-1}]<1\). Notice that, if \(k_0=\lceil\frac{\mathrm{diam}(\Omega)}{\varepsilon/2}\rceil\) we obtain \(\Omega\subset \Gamma_{k_0}\). Thus \[\label{k0} C_{n+k_0}\le \theta_{k_0} C_n. \tag{79}\]

Notice that \(C_{k_0}\le \theta_{k_0} C_0\) and \(C_{k_0+j}\le C_{k_0}\le \theta_{k_0} C_0\) for \(0\le j\le k_0-1\). Moreover, \(C_{lk_0+j}\le \theta_{k_0}^l C_0\) for \(0\le j\le k_0-1\), and hence, grouping the terms in blocks of length \(k_0\), \[\sum_{j=0}^{\infty}C_{lk_0+j}\le \sum_{i=0}^{\infty}k_0\theta_{k_0}^{l+i} C_0.\]

Finally, \[\begin{array}{ll} \displaystyle \lVert u^\varepsilon_{n+m}-u^\varepsilon_n\rVert_{L^\infty(\Omega)}\le \sum_{j=1}^{m}\lVert u^\varepsilon_{n+j}-u^\varepsilon_{n+j-1}\rVert_{L^\infty(\Omega)}\le \sum_{j=1}^{\infty}C_{n+j} \displaystyle \le\sum_{i=1}^{\infty}k_0\theta_{k_0}^{\lfloor\frac{n}{k_0}\rfloor+i}C_0, \end{array} \tag{80}\] and this is small if \(n\in\mathbb{N}\) is large enough. Then, \((u^\varepsilon_n)_{n\ge 0}\) is a Cauchy sequence in \(L^\infty\), and this implies that \(u^\varepsilon_n\rightrightarrows u^\varepsilon\) uniformly in \(\Omega\) as \(n \to \infty\). Thus, we get \[\label{supinf} \sup_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)\rightarrow \sup_{y \in B_{\varepsilon}(x)}u^\varepsilon(y) \quad \mbox{and} \quad \inf_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)\rightarrow \inf_{y \in B_{\varepsilon}(x)}u^\varepsilon(y). \tag{81}\] Finally, we also have \[\label{convint} \fint_{B_{\varepsilon}(x)}u_n^\varepsilon(y)\,dy\rightarrow \fint_{B_{\varepsilon}(x)}u^\varepsilon(y)\,dy. \tag{82}\]

Taking limits in (66), and using (81) and (82), we conclude that \[\begin{aligned} \displaystyle \underbrace{u_{n+1}^{\varepsilon}(x)}_{\downarrow}=& \alpha\left[\frac{1}{2}\underbrace{\sup_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)}_{\downarrow}+\frac{1}{2}\underbrace{\inf_{y \in B_{\varepsilon}(x)}u_n^\varepsilon(y)}_{\downarrow}\right]+(1-\alpha)\underbrace{\fint_{B_{\varepsilon}(x)}u_n^\varepsilon(y)\,dy}_{\downarrow}\notag\\ \displaystyle u^{\varepsilon}(x)=& \alpha\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}u^\varepsilon(y)+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}u^\varepsilon(y)\right]+(1-\alpha) \fint_{B_{\varepsilon}(x)}u^\varepsilon(y)\,dy. \end{aligned} \tag{83}\]

This ends the proof of the lemma. ◻
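Although it plays no role in the proof, the block-contraction \(C_{n+k_0}\le\theta_{k_0}C_n\) is easy to observe numerically. The following is a minimal sketch; all choices here (a one-dimensional domain \(\Omega=(0,1)\), the grid, the sample datum) are illustrative assumptions, not part of the paper's setting.

```python
import numpy as np

# Minimal numerical sketch of the Picard iteration of Lemma 4 (illustrative
# assumptions: 1D domain Omega = (0,1), values frozen to a sample datum
# outside Omega, eps-balls discretized on a uniform grid).

h, eps, alpha = 0.01, 0.1, 0.5
x = np.arange(-eps, 1.0 + eps + h / 2, h)      # grid covering Omega plus an eps-collar
inside = (x > 0.0) & (x < 1.0)                 # the domain Omega
r = int(round(eps / h))                        # radius of B_eps in grid points
datum = np.where(x <= 0.0, 0.0, 1.0)           # sample exterior datum

def step(u):
    """One application of the dynamic programming operator."""
    out = u.copy()                             # exterior values stay fixed
    for i in np.where(inside)[0]:
        ball = u[i - r : i + r + 1]            # discrete B_eps(x_i)
        out[i] = alpha * 0.5 * (ball.max() + ball.min()) + (1 - alpha) * ball.mean()
    return out

u = datum.copy()                               # u_0
for n in range(1, 121):
    u_next = step(u)
    C_n = np.abs(u_next - u)[inside].max()     # the increment C_n of the proof
    if n % 20 == 0:
        print(f"n = {n:3d},  C_n = {C_n:.2e}")
    u = u_next
```

The printed increments \(C_n\) should decay geometrically in blocks, in line with (80).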

Remark 4. In [26] the authors prove a statement like Lemma 4 for a different equation using similar techniques.

Proof of Theorem 2. We know that \((u_n^{\varepsilon},v_n^{\varepsilon})\) is a subsolution to the DPP (1) for all \(n\geq 0\); hence, we have \[u_{n}^{\varepsilon}(x,t)\le \frac{1}{2} J_1(u_n^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u_n^{\varepsilon})(x,t-\varepsilon^2),J_2(v_n^{\varepsilon})(x,t-\varepsilon^2)\Big\}.\]

Taking the limit as \(n\rightarrow\infty\) on the right-hand side we get \[u_{n}^{\varepsilon}(x,t)\le \frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2),J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\}.\]

Taking the limit on the left-hand side we arrive at \[u^{\varepsilon}(x,t)\le \frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2),J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\}.\]

Analogously, we obtain \[v^{\varepsilon}(x,t)\le \frac{1}{2} J_2(v^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\min\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2),J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\}.\]

Thus, \((u^{\varepsilon},v^{\varepsilon})\) is a subsolution to the DPP (1).

Now, we will use ideas from the computation in the proof of Lemma 4. Let us define \[C_n=\max\{\lVert u^\varepsilon_n-u^\varepsilon_{n-1}\rVert_{L^\infty(\Omega\times(0,T))}, \lVert v^\varepsilon_n-v^\varepsilon_{n-1}\rVert_{L^\infty(\Omega\times(0,T))}\}. \tag{84}\]

Let us start with \(u_n^\varepsilon\): \[\begin{aligned} \displaystyle (u^\varepsilon_{n+1}-u^\varepsilon_{n})(x,t)=&\frac{1}{2} J_1(u^\varepsilon_n)(x,t-\varepsilon^2)-\frac{1}{2} J_1(u^\varepsilon_{n-1})(x,t-\varepsilon^2) +\frac{1}{2}\max\{J_1(u^\varepsilon_n)(x,t-\varepsilon^2), J_2(v^\varepsilon_n)(x,t-\varepsilon^2)\} \notag\\ & -\frac{1}{2}\max\{J_1(u^\varepsilon_{n-1})(x,t-\varepsilon^2), J_2(v^\varepsilon_{n-1})(x,t-\varepsilon^2)\}\notag\\ \displaystyle \le&\frac{1}{2} C_n+\frac{1}{2}\max\{ J_1(u^\varepsilon_n)(x,t-\varepsilon^2)-J_1(u^\varepsilon_{n-1})(x,t-\varepsilon^2), J_2(v^\varepsilon_n)(x,t-\varepsilon^2)-J_2(v^\varepsilon_{n-1})(x,t-\varepsilon^2)\}\notag\\ \displaystyle\le& C_n. \end{aligned} \tag{85}\]

Here we used again that \(\max\{a,b\}-\max\{c,d\}\le\max\{a-c, b-d\}\). Now, for \(v^\varepsilon_n\) we have \[\begin{aligned} \displaystyle (v^\varepsilon_{n+1}-v^\varepsilon_{n})(x,t)=&\frac{1}{2} J_2(v^\varepsilon_n)(x,t-\varepsilon^2)-\frac{1}{2} J_2(v^\varepsilon_{n-1})(x,t-\varepsilon^2) +\frac{1}{2}\min\{J_1(u^\varepsilon_n)(x,t-\varepsilon^2), J_2(v^\varepsilon_n)(x,t-\varepsilon^2)\}\notag\\ &- \frac{1}{2}\min\{J_1(u^\varepsilon_{n-1})(x,t-\varepsilon^2), J_2(v^\varepsilon_{n-1})(x,t-\varepsilon^2)\}\notag\\ \displaystyle \le&\frac{1}{2} C_n+\frac{1}{2}\max\{ J_1(u^\varepsilon_n)(x,t-\varepsilon^2)-J_1(u^\varepsilon_{n-1})(x,t-\varepsilon^2), J_2(v^\varepsilon_n)(x,t-\varepsilon^2)-J_2(v^\varepsilon_{n-1})(x,t-\varepsilon^2)\}\notag\\ \displaystyle\le& C_n. \end{aligned} \tag{86}\]

Here we used that \(\min\{a,b\}-\min\{c,d\}\le\max\{a-c, b-d\}\). Thus, we get \[C_{n+1}\le C_n. \tag{87}\]

Let us consider again the set \[\Gamma_1=\Big\{x\in\Omega : d(x,\partial\Omega)<\frac{\varepsilon}{2}\Big\}. \tag{88}\]

As before, using the assumed regularity on the boundary of \(\Omega\) we have that \[\eta_1=\sup_{x\in\Gamma_1}\frac{|B_\varepsilon(x)\cap\Omega|}{|B_\varepsilon(x)|}<1.\]

Given \(x\in\Gamma_1\) we get \[J_1(u^\varepsilon_n)(x,t-\varepsilon^2)- J_1(u^\varepsilon_{n-1})(x,t-\varepsilon^2)\le (\alpha_1+(1-\alpha_1)\eta_1)C_n,\] and \[J_2(v^\varepsilon_n)(x,t-\varepsilon^2)- J_2(v^\varepsilon_{n-1})(x,t-\varepsilon^2)\le (\alpha_2+(1-\alpha_2)\eta_1)C_n.\]

Thus \[(u^\varepsilon_{n+1}-u^\varepsilon_n)(x,t-\varepsilon^2)\le \theta_1 C_n \quad \mbox{and} \quad (v^\varepsilon_{n+1}-v^\varepsilon_n)(x,t-\varepsilon^2)\le \theta_1 C_n,\] with \(\theta_1=\max\{\alpha_1+(1-\alpha_1)\eta_1, \alpha_2+(1-\alpha_2)\eta_1\}<1\). Proceeding as before, we obtain \(k_0=k_0(\Omega)\in\mathbb{N}\) and \(\theta_0<1\) such that \[C_{n+k_0}\le \theta_0 C_n.\]

Arguing as before we get the uniform convergence \(u^\varepsilon_n\rightrightarrows u^\varepsilon\) and \(v^\varepsilon_n\rightrightarrows v^\varepsilon\). Then, we get \[J_1(u^\varepsilon_n)(x,t)\rightarrow J_1(u^\varepsilon)(x,t) \quad \mbox{and} \quad J_2(v^\varepsilon_n)(x,t)\rightarrow J_2(v^\varepsilon)(x,t).\]

Finally, using the definition (47) and taking limit we obtain \[\begin{aligned} \begin{cases} \displaystyle \underbrace{u^\varepsilon_{n+1}(x,t)}_{\downarrow}=&\frac{1}{2} \underbrace{J_1(u^\varepsilon_n)(x,t-\varepsilon^2)}_{\downarrow}+\frac{1}{2}\max\{\underbrace{J_1(u^\varepsilon_n) (x,t-\varepsilon^2)}_{\downarrow}, \underbrace{J_2(v^\varepsilon_n)(x,t-\varepsilon^2)}_{\downarrow}\}\\ \displaystyle u^\varepsilon(x,t)=&\frac{1}{2} J_1(u^\varepsilon)(x,t-\varepsilon^2)+\frac{1}{2}\max\{J_1(u^\varepsilon)(x,t-\varepsilon^2), J_2(v^\varepsilon)(x,t-\varepsilon^2)\},\end{cases} \end{aligned} \tag{89}\] and \[\begin{aligned} \begin{cases} \displaystyle \underbrace{v^\varepsilon_{n+1}(x,t)}_{\downarrow}=&\frac{1}{2} \underbrace{J_2(v^\varepsilon_n)(x,t-\varepsilon^2)}_{\downarrow}+\frac{1}{2}\min\{\underbrace{J_1(u^\varepsilon_n) (x,t-\varepsilon^2)}_{\downarrow}, \underbrace{J_2(v^\varepsilon_n)(x,t-\varepsilon^2)}_{\downarrow}\}\\ \displaystyle v^\varepsilon(x,t)=&\frac{1}{2} J_2(v^\varepsilon)(x,t-\varepsilon^2)+\frac{1}{2}\min\{J_1(u^\varepsilon)(x,t-\varepsilon^2), J_2(v^\varepsilon)(x,t-\varepsilon^2)\}.\end{cases} \end{aligned} \tag{90}\]

This ends the proof of the theorem. ◻

Using Lemma 1 we obtain the following result that we will use in the next section.

Corollary 2. The functions \((u^\varepsilon,v^\varepsilon)\) that we obtained are uniformly bounded.

For our DPP (1) there is an alternative proof of existence of a solution based on the fact that the right-hand side of the equations involves \(u\) and \(v\) evaluated at \(t-\varepsilon^2\). This proof is simpler than the previous one, but it is less flexible (for example, with this simpler proof we cannot handle a parabolic/elliptic system, see the last section).

Alternative proof of existence of a solution to the DPP (1). We look for a pair \((u,v)\) that solves (1). It is clear that we need to impose the boundary conditions \[\label{BCInd.88} \left\lbrace \begin{array}{ll} u^{\varepsilon}(x,t) = f(x,t), & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times[0,T), \\[8pt] v^{\varepsilon}(x,t)= g(x,t), & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times[0,T), \end{array} \right. \tag{91}\] and the initial conditions \[\label{ICInd.99} \left\lbrace \begin{array}{ll} u^{\varepsilon}(x,0)= u_0(x), & x \in \Omega, \\[8pt] v^{\varepsilon}(x,0)= v_0(x), & x \in \Omega. \end{array} \right. \tag{92}\] Hence, we are left with determining \((u,v)\) in \(\Omega \times (0,T)\) in such a way that the equations in (1) are satisfied.

Let us start with \(t\in (0,\varepsilon^2]\). Since \(t-\varepsilon^2 \le 0\), for those times we have that \[\label{Ind.22} \left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t)= \frac{1}{2} J_1(u_0)(x) +\frac{1}{2}\max\Big\{ J_1(u_0)(x), J_2(v_0)(x)\Big\} , & (x,t) \in \Omega\times(0,\varepsilon^2], \\[8pt] \displaystyle v^{\varepsilon}(x,t)= \frac{1}{2} J_2(v_0)(x) +\frac{1}{2}\min\Big\{ J_1(u_0)(x), J_2(v_0)(x)\Big\}, & (x,t) \in \Omega\times(0,\varepsilon^2], \end{array} \right. \tag{93}\] solves the equations in the DPP. Once we have defined \((u,v)\) in \(\Omega \times (0,\varepsilon^2]\) we look for \(t\in (\varepsilon^2, 2\varepsilon^2]\) and we get that the pair of functions given by \[\label{Ind.66} \left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t)= \frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} , & (x,t) \in \Omega\times (\varepsilon^2,2\varepsilon^2], \\ \displaystyle v^{\varepsilon}(x,t)= \frac{1}{2} J_2(v^{\varepsilon})(x,t-\varepsilon^2) +\frac{1}{2}\min\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\}, & (x,t) \in \Omega\times (\varepsilon^2,2\varepsilon^2], \end{array} \right. \tag{94}\] solves the DPP in \(\Omega \times (\varepsilon^2,2\varepsilon^2]\).

Iterating this procedure \([T/\varepsilon^2]\) times we obtain a pair of functions \((u,v)\) that is a solution to (1) in the whole \(\Omega \times (0,T)\). ◻
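Since the recursion (93)-(94) is fully explicit in time, it translates directly into an algorithm. The following is a minimal sketch, assuming for illustration a one-dimensional domain \(\Omega=(0,1)\), zero running payoffs \(h_1=h_2=0\) inside the operators \(J_i\), and constant ordered data; none of these choices come from the paper.

```python
import numpy as np

# Minimal sketch of the explicit time-slab recursion (93)-(94) for the coupled
# DPP (illustrative assumptions: 1D domain Omega = (0,1), running payoffs
# h_1 = h_2 = 0, constant ordered data f >= g).

h, eps, T = 0.01, 0.1, 0.5
a1, a2 = 0.6, 0.4                              # alpha_1 and alpha_2 in J_1, J_2
x = np.arange(-eps, 1.0 + eps + h / 2, h)
inside = (x > 0.0) & (x < 1.0)
r = int(round(eps / h))

def J(w, a):
    """The operator J_i at interior points (with h_i = 0 here)."""
    out = w.copy()
    for i in np.where(inside)[0]:
        ball = w[i - r : i + r + 1]
        out[i] = a * 0.5 * (ball.max() + ball.min()) + (1 - a) * ball.mean()
    return out

f_out, g_out = 1.0, 0.0                        # exterior data, f >= g
u = np.where(inside, 1.0, f_out)               # slab 0: initial datum u_0
v = np.where(inside, 0.0, g_out)               # slab 0: initial datum v_0

for k in range(int(T / eps**2)):               # march slab by slab
    J1, J2 = J(u, a1), J(v, a2)
    u = np.where(inside, 0.5 * J1 + 0.5 * np.maximum(J1, J2), f_out)
    v = np.where(inside, 0.5 * J2 + 0.5 * np.minimum(J1, J2), g_out)
    assert np.all(u >= v - 1e-12)              # the two components stay ordered

print("final min(u), max(v):", u[inside].min(), v[inside].max())
```

Each slab \((k\varepsilon^2,(k+1)\varepsilon^2]\) only reads values from the slab below, which is exactly why this construction terminates after \([T/\varepsilon^2]\) steps.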

Let us prove that the solution to the DPP (1) is the value of the game defined in §3.

Theorem 3. The pair of functions \((u^{\varepsilon},v^{\varepsilon})\) that verifies the DPP (1) gives the value of the game defined in §3. This means that the function \[w^{\varepsilon}(x,t,j)=\inf_{S_{II}}\sup_{S_{I}}\mathbb{E}_{S_{I},S_{II}}^{(x,t,j)}[\mbox{total payoff}]=\sup_{S_{I}}\inf_{S_{II}}\mathbb{E}_{S_{I},S_{II}}^{(x,t,j)}[\mbox{total payoff}], \tag{95}\] verifies that \[w^{\varepsilon}(x,t,1)=u^{\varepsilon}(x,t),\] and \[w^{\varepsilon}(x,t,2)=v^{\varepsilon}(x,t),\] for any pair \((u^\varepsilon,v^\varepsilon)\) that solves the DPP.

Proof. We only include a sketch of the proof. We refer to [23] (Theorem 18) where the authors proved a similar result for the elliptic case.

Fix \(\delta>0\), and take \((u^{\varepsilon},v^{\varepsilon})\) a solution to the DPP (1). Assume that we start at a point in the first board, \((x_0,t_0,1)\). Then, we choose a strategy \(S_{I}^{\ast}\) for Player I using the solution to the DPP (1) as follows: Whenever \(j_k =1\) Player I decides to stay in the first board if \[\max \Big\{J_1(u^{\varepsilon})(x_k,t_k-\varepsilon^2), J_2(v^{\varepsilon})(x_k, t_k-\varepsilon^2)\Big\} =J_1(u^{\varepsilon})(x_k,t_k-\varepsilon^2) ,\] and in this case Player I chooses a point \[x_{k+1}^I=S_{I}^{\ast}\left(( x_k,t_k,j_k)\right) \quad \mbox{such that} \quad \sup_{y \in B_{\varepsilon}(x_k)}u^{\varepsilon}(y,t_k-\varepsilon^2)-\frac{\delta}{2^{k+1}}\leq u^{\varepsilon}(x_{k+1}^I,t_k-\varepsilon^2), \tag{96}\] to play the Tug-of-War game.

On the other hand, Player I decides to jump to the second board if \[\max \Big\{J_1(u^{\varepsilon})(x_k,t_k-\varepsilon^2), J_2(v^{\varepsilon})(x_k,t_k-\varepsilon^2)\Big\} =J_2(v^{\varepsilon})(x_k,t_k-\varepsilon^2) ,\] and in this case Player I chooses a point \[x_{k+1}^I=S_{I}^{\ast}\left(( x_k,t_k,j_k)\right) \quad \mbox{such that} \quad \sup_{y \in B_{\varepsilon}(x_k)}v^{\varepsilon}(y,t_k-\varepsilon^2)-\frac{\delta}{2^{k+1}}\leq v^{\varepsilon}(x_{k+1}^I,t_k-\varepsilon^2), \tag{97}\] to play the Tug-of-War game in the second board.

Given the strategy \(S^\ast_I\) for Player I and any strategy \(S_{II}\) for Player II, we consider the sequence of random variables \[M_k=w^{\varepsilon}(x_k,t_k,j_k)-\varepsilon^2\sum_{l=0}^{k-1}\left(h_1(x_l,t_l-\varepsilon^2)\chi_{\{j=1\}}(j_{l+1})-h_2(x_l,t_l-\varepsilon^2)\chi_{\{j=2\}}(j_{l+1})\right)-\frac{\delta}{2^k}, \tag{98}\] where \(w^{\varepsilon}(x_k,t_k,1)=u^{\varepsilon}(x_k,t_k)\), \(w^{\varepsilon}(x_k,t_k,2)=v^{\varepsilon}(x_k,t_k)\) and \[\chi_{\{j=i\}}(j)=\left\lbrace \begin{array}{ll} \displaystyle 1 & j=i ,\\[8pt] \displaystyle 0 & j\neq i. \end{array} \right. \tag{99}\]

It holds that \((M_k)_{k\geq 0}\) is a submartingale. That is \[\displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0,1)}[M_{k+1}|M_0,\dots,M_k]\ge M_k. \tag{100}\]

To prove this fact we need to consider several cases.

Suppose that \(j_k=1\) and \(j_{k+1}=1\) (that is, the token remains on the first board at plays \(k\) and \(k+1\)). Then

\[\begin{aligned} \displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0,1)}&[M_{k+1}|M_0,\dots,M_k]\notag\\ \displaystyle =&\mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0,1)}\left[u^{\varepsilon}(x_{k+1},t_{k+1})-\varepsilon^2\sum_{l=0}^{k}\left(h_1(x_l,t_l)\chi_{\{j=1\}}(j_{l+1})-h_2(x_l,t_l)\chi_{\{j=2\}}(j_{l+1})\right)-\frac{\delta}{2^{k+1}}\,\Big|\,M_0,\dots ,M_k\right]\notag\\ \displaystyle =&\mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0,1)}\left[u^{\varepsilon}(x_{k+1},t_{k+1})-\varepsilon^2h_1(x_k,t_k)-\varepsilon^2\sum_{l=0}^{k-1}\left(h_1(x_l,t_l)\chi_{\{j=1\}}(j_{l+1})-h_2(x_l,t_l)\chi_{\{j=2\}}(j_{l+1})\right)\right.\notag\\ &\left.-\frac{\delta}{2^{k+1}}\,\Big|\,M_0,\dots ,M_k\right]\notag\\ \displaystyle =&\alpha_1\Big(\frac{1}{2} u^\varepsilon(x_{k+1}^I,t_{k+1})+\frac{1}{2} u^\varepsilon(x_{k+1}^{II},t_{k+1})\Big)+(1-\alpha_1) \fint_{B_{\varepsilon}(x_k)}u^\varepsilon(y,t_{k+1})\,dy-\varepsilon^2h_1(x_k,t_k)\notag\\ &\displaystyle -\varepsilon^2\sum_{l=0}^{k-1}\left(h_1(x_l,t_l)\chi_{\{j=1\}}(j_{l+1})-h_2(x_l,t_l)\chi_{\{j=2\}}(j_{l+1})\right)-\frac{\delta}{2^{k+1}}\notag\\ \displaystyle \ge&\alpha_1\Big(\frac{1}{2} \sup_{y \in B_{\varepsilon}(x_k)}u^\varepsilon(y,t_{k+1})-\frac{\delta}{2^{k+1}}+\frac{1}{2} \inf_{y \in B_{\varepsilon}(x_k)}u^\varepsilon(y,t_{k+1})\Big)+(1-\alpha_1) \fint_{B_{\varepsilon}(x_k)}u^\varepsilon(y,t_{k+1})\,dy\notag\\ \displaystyle & -\varepsilon^2h_1(x_k,t_k)-\varepsilon^2\sum_{l=0}^{k-1}\left(h_1(x_l,t_l)\chi_{\{j=1\}}(j_{l+1})-h_2(x_l,t_l)\chi_{\{j=2\}}(j_{l+1})\right)-\frac{\delta}{2^{k+1}}\notag\\ \displaystyle\ge& \frac{1}{2} J_1(u^\varepsilon)(x_{k},t_k-\varepsilon^2)+\frac{1}{2}\max\Big\{J_1(u^\varepsilon)(x_{k},t_k-\varepsilon^2),J_2(v^\varepsilon)(x_{k},t_k-\varepsilon^2)\Big\}\notag\\ \displaystyle & -\varepsilon^2\sum_{l=0}^{k-1}\left(h_1(x_l,t_l)\chi_{\{j=1\}}(j_{l+1})-h_2(x_l,t_l)\chi_{\{j=2\}}(j_{l+1})\right)-\frac{\delta}{2^{k}}\notag\\ \displaystyle =& \,u^\varepsilon(x_k,t_k)-\varepsilon^2\sum_{l=0}^{k-1}\left(h_1(x_l,t_l)\chi_{\{j=1\}}(j_{l+1})-h_2(x_l,t_l)\chi_{\{j=2\}}(j_{l+1})\right)-\frac{\delta}{2^{k}}=M_k. \end{aligned} \tag{101}\]

Here we used that \(j_{k+1}=1\), \(t_{k+1}=t_k-\varepsilon^2\) and \(\max\{J_1(u^\varepsilon),J_2(v^\varepsilon)\}=J_1(u^\varepsilon)\).

We omit the proofs of the remaining cases (\(j_k=1\) and \(j_{k+1}=2\); \(j_k=2\) and \(j_{k+1}=1\); \(j_k=2\) and \(j_{k+1}=2\)) because the computations are similar. Thus, we get that \(M_k\) is a submartingale. Using the OSTh we obtain \[\displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0,1)}[M_{\tau\wedge k}]\geq M_0 \quad \mbox{for any} \ k\in\mathbb{N}. \tag{102}\]

Taking the limit as \(k\rightarrow\infty\) we get \[\displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0,1)}[M_{\tau}]\geq M_0. \tag{103}\]

If we take \(\inf_{S_{II}}\) and then \(\sup_{S_{I}}\) we arrive at \[\displaystyle \sup_{S_{I}}\inf_{S_{II}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,1)}[M_{\tau}]\geq M_0. \tag{104}\]

This inequality says that \[\displaystyle \sup_{S_{I}}\inf_{S_{II}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,1)}[\mbox{total payoff}]\geq u^{\varepsilon}(x_0,t_0)-\delta. \tag{105}\]

To prove an inequality in the opposite direction we fix a strategy for Player II as follows: Whenever \(j_k=2\) Player II decides to stay in the second board if \[\min\Big\{J_1(u^{\varepsilon})(x_k,t_k-\varepsilon^2),J_2(v^{\varepsilon})(x_k,t_k-\varepsilon^2)\Big\}=J_2(v^{\varepsilon})(x_k,t_k-\varepsilon^2), \tag{106}\] and Player II decides to jump to the first board when \[\min\Big\{J_1(u^{\varepsilon})(x_k,t_k-\varepsilon^2),J_2(v^{\varepsilon})(x_k,t_k-\varepsilon^2)\Big\}=J_1(u^{\varepsilon})(x_k,t_k-\varepsilon^2). \tag{107}\]

When the Tug-of-War game is played (on either board), Player II chooses \(x_{k+1}^{II}=S_{II}^{\ast}\left(( x_k,t_k,j_k)\right)\) such that \[\inf_{y \in B_{\varepsilon}(x_k)}w^{\varepsilon}(y,t_k-\varepsilon^2,j_{k+1})+\frac{\delta}{2^{k+1}}\geq w^{\varepsilon}(x_{k+1}^{II},t_k-\varepsilon^2,j_{k+1}). \tag{108}\]

Given this strategy for Player II and any strategy for Player I, using computations similar to the ones above, we can prove that the sequence of random variables \[N_k=w^{\varepsilon}(x_k,t_k,j_k)-\varepsilon^2\sum_{l=0}^{k-1}\left(h_1(x_l,t_l-\varepsilon^2)\chi_{\{j=1\}}(j_{l+1})-h_2(x_l,t_l-\varepsilon^2)\chi_{\{j=2\}}(j_{l+1})\right)+\frac{\delta}{2^k}, \tag{109}\] is a supermartingale. Finally, using the OSTh we arrive at \[\inf_{S_{II}}\sup_{S_{I}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,1)}[\mbox{total payoff}]\leq u^{\varepsilon}(x_0,t_0)+\delta. \tag{110}\]

Then, we have obtained \[u^{\varepsilon}(x_0,t_0)-\delta\leq \sup_{S_{I}}\inf_{S_{II}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,1)}[\mbox{total payoff}]\leq \inf_{S_{II}}\sup_{S_{I}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,1)}[\mbox{total payoff}]\leq u^{\varepsilon}(x_0,t_0)+\delta, \tag{111}\] for any \(\delta>0\).

Analogously, we can prove that \[v^{\varepsilon}(x_0,t_0)-\delta\leq \sup_{S_{I}}\inf_{S_{II}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,2)}[\mbox{total payoff}]\leq \inf_{S_{II}}\sup_{S_{I}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,2)}[\mbox{total payoff}]\leq v^{\varepsilon}(x_0,t_0)+\delta, \tag{112}\] for any \(\delta>0\). This ends the proof. ◻

Remark 5. Notice that this theorem proves that the game has a value. That is \[\sup_{S_{I}}\inf_{S_{II}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,1)}[\mbox{total payoff}]= \inf_{S_{II}}\sup_{S_{I}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,1)}[\mbox{total payoff}], \tag{113}\] and \[\sup_{S_{I}}\inf_{S_{II}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,2)}[\mbox{total payoff}]= \inf_{S_{II}}\sup_{S_{I}}\mathbb{E}_{S_{I},S_{II}}^{(x_0,t_0,2)}[\mbox{total payoff}]. \tag{114}\]

Since there exists a solution to the DPP (1), and any solution to the DPP coincides with the value of the game (which is unique), we obtain the uniqueness of solutions to the DPP. We have thus proved the existence and uniqueness of the solution to the DPP, concluding this section.

5. Convergence as \(\varepsilon\rightarrow 0\)

In this section we prove that there exists a subsequence \((u^{\varepsilon_j},v^{\varepsilon_j})\) that converges uniformly to a pair of functions \((u,v)\). To this end we will use the following Arzela-Ascoli type lemma. For its proof see Lemma 4.2 from [8].

Lemma 5. Let \[\{w^\varepsilon : \overline{\Omega}\times[0,T] \to \mathbb{R}\}_{\varepsilon>0},\] be a set of functions such that

  1. there exists \(C>0\) such that \(|w^\varepsilon (x,t)|<C\) for every \(\varepsilon >0\) and every \((x,t) \in \overline{\Omega}\times[0,T)\),

  2. given \(\delta >0\) there are constants \(r_0\) and \(\varepsilon_0\) such that for every \(\varepsilon < \varepsilon_0\), any \(x, y \in \overline{\Omega}\) with \(|x – y | < r_0\) and \(|t-s|<r_0\) it holds \[|w^\varepsilon (x,t) – w^\varepsilon (y,s)| < \delta.\]

Then, there exists a uniformly continuous function \(w: \overline{\Omega}\times[0,T) \to \mathbb{R}\) and a subsequence, still denoted by \(\{w^\varepsilon \}\), such that \[\begin{split} w^{\varepsilon}\to w \qquad\textrm{ uniformly in }\overline{\Omega}\times[0,T), \mbox{ as $\varepsilon\to 0$.} \end{split}\]

So our task now is to show that \(u^\varepsilon\) and \(v^\varepsilon\) both satisfy the hypotheses of the previous lemma. First, it is worth noting that we have already established their uniform boundedness in Corollary 2. Hence, we will focus on the second hypothesis. To this end, let us start with an estimate of the stopping time. It is clear that for this game, played in the cylinder \(\mathbb{R}^N\times[0,T)\), the game ends after a finite number of plays. In fact, the inequality \[\varepsilon^2\tau\leq T,\] holds. Nevertheless, this estimate lacks precision. What we need is that, if the game starts at \((x,t)\in\Omega\times(0,T)\) close to the parabolic boundary, there exists a strategy for either of the two players such that the game ends in a relatively small number of plays. There are two possibilities: \(t\) is small, and/or \(x\) is close to \(\partial\Omega\). In the first case, \(\varepsilon^2\tau\le t\), which is small. The following lemma provides an estimate for the expected value of the stopping time in the second case.

Let us recall the geometric condition assumed on the domain \(\Omega\): There exists \(0<\delta<R\) such that for all \(y\in\partial\Omega\) there exists \(z\in\mathbb{R}^N\) such that \(\Omega\subset B_R(z)\backslash B_\delta (z)\) and \(y\in\partial B_\delta (z)\). Without loss of generality we can suppose that \(\Omega\subset B_R(0)\backslash B_\delta (0)\) and \(y\in\partial B_\delta (0)\cap\partial\Omega\). Under these conditions we have the following result.

Lemma 6. There exists a strategy \(S\) for Player I or \(\hat{S}\) for Player II such that if the game starts at \((x,t)\in\Omega\times(0,T)\) and \(\tau\) is the stopping time we get \[\label{acotT} \varepsilon^2\mathbb{E}_{S,\hat{S}}^{(x,t)}[\tau]\le C\Big(\frac{R}{\delta}\Big)\,\mathrm{dist}(\partial B_\delta (0),x)+o(1). \tag{115}\]

This result was proved in [9] (see Lemma 6.21) playing the Tug-of-War with noise game on one board. In our case (the two-boards game), the player who uses the strategy \(S\) decides to remain on the current board, and pulls towards \(0\) whenever the Tug-of-War game is played.

Next, we derive an estimate for the asymptotic uniform continuity of the parabolic Tug-of-War with noise game played on one board (a cylinder) with a running payoff, related to the so-called non-homogeneous parabolic \(p\)-Laplacian.

Lemma 7. Let us consider \(h:\Omega\times[0,T)\rightarrow \mathbb{R}\), \(F:(\mathbb{R}^N\backslash\Omega)\times[0,T)\rightarrow \mathbb{R}\) and \(\mu_0:\Omega\rightarrow\mathbb{R}\), three Lipschitz functions. For \(0<\beta<1\), let \(\mu^{\varepsilon}:\mathbb{R}^N\times[0,T)\rightarrow \mathbb{R}\) be a function that solves the following DPP \[\left\lbrace \begin{array}{ll} \displaystyle \mu^{\varepsilon}(x,t)=\beta\left[\frac{1}{2}\sup_{y \in B_{\varepsilon}(x)}\mu^{\varepsilon}(y,t-\varepsilon^2)+\frac{1}{2}\inf_{y \in B_{\varepsilon}(x)}\mu^{\varepsilon}(y,t-\varepsilon^2)\right]\\[8pt] \displaystyle\qquad \qquad +(1-\beta) \fint_{B_{\varepsilon}(x)}\mu^{\varepsilon}(y,t-\varepsilon^2)\,dy+\varepsilon^2 h(x,t-\varepsilon^2), & (x,t)\in\Omega\times(0,T), \\[8pt] \displaystyle \mu^{\varepsilon}(x,t)=F(x,t), & (x,t)\in(\mathbb{R}^N\backslash\Omega)\times[0,T),\\[8pt] \displaystyle \mu^\varepsilon(x,0)=\mu_0(x),& x\in\Omega. \end{array} \right. \tag{116}\]

Then, given \(\eta>0\) there exists \(r_0>0\) and \(\varepsilon_0>0\) such that \[|\mu^{\varepsilon}(x,t)-\mu^{\varepsilon}(y,s)|<\eta, \tag{117}\] if \(|x-y|<r_0\), \(|t-s|<r_0\) and \(\varepsilon<\varepsilon_0\).

Proof. Let us start with the following definition: Let \(w:\left((\mathbb{R}^N\backslash\Omega)\times [0,T)\right)\cup\left(\Omega\times\{0\}\right)\rightarrow\mathbb{R}\) be given by \[\label{funcionw} w(x,t) =\left\lbrace \begin{array}{ll} F(x,t) & \ \ \mbox{if} \quad x\notin\Omega, t\geq 0, \\[8pt] \displaystyle \mu_0(x) & \ \ \mbox{if} \quad x\in\Omega, t= 0. \end{array} \right. \tag{118}\]

From our conditions on the data, the function \(w\) is well defined and is Lipschitz in both variables, that is \[\label{Lw1} |w(x,t)-w(y,s)|\leq L(|x-y|+|t-s|). \tag{119}\]

Let us proceed with the proof of the lemma.

Case 1. If \((x,t), (y,s)\in\left((\mathbb{R}^N\backslash\Omega)\times [0,T)\right)\cup\left(\Omega\times\{0\}\right)\) we have \[|\mu^{\varepsilon}(x,t)-\mu^{\varepsilon}(y,s)|=|w(x,t)-w(y,s)|\leq L(|x-y|+|t-s|)<\eta, \tag{120}\] if \(r_0<\frac{\eta}{2L}\).

Case 2. Suppose now that \((x,t)\in\Omega\times(0,T)\) and \((y,s)\in\partial\Omega\times[0,T)\). Without loss of generality we can suppose that \(\Omega\subset B_R(0)\backslash B_{\delta}(0)\) and \(y\in\partial B_{\delta}(0)\). Let us call \((x_0,t_0)=(x,t)\) the first position in the game. Assume that Player I uses the strategy of pulling towards \(0\), denoted by \(S_{I}^{\ast}\). That is, for \(x_k\neq 0\) \[x_{k+1}^I=S^\ast_I(x_0,\dots,x_k)=x_k-\varepsilon\frac{x_k}{|x_k|}.\]

Let us consider the sequence of random variables using \(S_I^\ast\) for Player I and any \(S_{II}\) for Player II, \[M_k=|x_k|-C\varepsilon^2 k. \tag{121}\]

If \(C>0\) is large enough, \(M_k\) is a supermartingale. Indeed \[\begin{array}{ll} \displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[|x_{k+1}| \,|\, x_0,\dots, x_k]\leq \beta\left[\frac{1}{2}(|x_k|+\varepsilon)+\frac{1}{2}(|x_k|-\varepsilon)\right]+(1-\beta) \fint_{B_{\varepsilon}(x_k)}|z|\,dz \leq |x_k|+C\varepsilon^2. \end{array} \tag{122}\]

The first inequality follows from the choice of the strategy, and the second from the estimate \[\fint_{B_{\varepsilon}(x)}|z|\,dz\leq |x|+C\varepsilon^2. \tag{123}\]
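For completeness, one way to see (123): since \(\Omega\subset B_R(0)\backslash B_\delta(0)\), for \(\varepsilon<\delta/2\) the map \(z\mapsto |z|\) is smooth on \(B_\varepsilon(x)\) with \(\|D^2|\cdot|(\xi)\|=1/|\xi|\le 2/\delta\) there. A second-order Taylor expansion around \(x\) then gives \(|z|\le |x|+\frac{x}{|x|}\cdot(z-x)+\frac{1}{\delta}|z-x|^2\), and averaging over the ball (where the linear term has zero mean and \(\fint_{B_\varepsilon(x)}|z-x|^2\,dz\le\varepsilon^2\)) yields \[\fint_{B_{\varepsilon}(x)}|z|\,dz\le |x|+\frac{\varepsilon^2}{\delta},\] that is, (123) holds with a constant \(C=C(\delta)\).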

Using the OSTh we obtain \[\mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[|x_{\tau}|]\leq |x_0|+C\varepsilon^2\mathbb{E}_{S_{I}^{\ast},S_{II}}[\tau]. \tag{124}\]

Now, we use estimate (115) to get \[\begin{array}{ll} \displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[|x_{\tau}|]\leq |x_0|+C\Big(\frac{R}{\delta}\Big)\,\mathrm{dist}(\partial B_\delta (0),x_0)+o(1) \displaystyle \le \delta+C|x_0-y|+o(1). \end{array} \tag{125}\]

Using the continuity property for \(w\) (119) we have \[|w(x_{\tau},t_\tau)-w(0,s)|\leq L\left(|x_{\tau}|+|t_\tau -s|\right). \tag{126}\]

But, \(t_\tau=t_0-\varepsilon^2\tau\). Hence, we get \[\begin{aligned} \displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[w(x_{\tau},t_\tau)]\geq& w(0,s)-L\left[\mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[|x_{\tau}|]+|t_0-s|+\varepsilon^2\mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[\tau]\right]\notag\\ \displaystyle \geq & w(y,s)-L\delta-L\left[2(\delta+C|x_0-y|)+|t_0-s|+ o(1)\right]\notag\\ \displaystyle \geq & w(y,s)-L\left[3\delta+2Cr_0+o(1)\right]. \end{aligned} \tag{127}\]

Then \[\mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[w(x_{\tau},t_\tau)+\varepsilon^2\sum_{j=0}^{\tau -1}h(x_j,t_j)]\geq w(y,s)-L\left[3\delta+2Cr_0\right]- \lVert h\rVert_{\infty} C r_0 - o(1). \tag{128}\]

Thus, taking \(\inf_{S_{II}}\), and then \(\sup_{S_{I}}\), we obtain \[\mu^{\varepsilon}(x_0,t_0)\ge w(y,s)-L\left[3\delta+2Cr_0\right]- \lVert h\rVert_{\infty} C r_0 - o(1)>w(y,s)-\eta. \tag{129}\]

We take \(\delta>0\) such that \(3L\delta<\frac{\eta}{3}\), then take \(r_0>0\) such that \(\left(2LC+\lVert h\rVert_{\infty}C\right)r_0<\frac{\eta}{3}\), and then \(\varepsilon\) small such that \(o(1)<\frac{\eta}{3}\). Analogously, we can obtain the estimate \[\mu^{\varepsilon}(x_0,t_0)<w(y,s)+\eta, \tag{130}\] if Player II uses the strategy that pulls towards \(0\). This ends the proof in this case.

Case 3. Suppose now that \((x,t)\in\Omega\times(0,T)\), \(y\in\overline{\Omega}\) and \(s=0\). Now we consider the strategy \(S_I^\ast\) for Player I pulling towards \(y\). That is, \[x_{k+1}=S_I^\ast(x_0,\dots,x_k)=x_k+\varepsilon\frac{y-x_k}{|y-x_k|}, \tag{131}\] if \(|y-x_k|\ge \varepsilon\), and \(x_{k+1}=y\) otherwise. Suppose that \(0<t=t_0<r_0\) for \(r_0\) small (to be chosen later). Then, the stopping time is bounded. In fact, \(\tau\le \left\lceil\frac{r_0}{\varepsilon^2}\right\rceil\) with probability one. Let us call \(M=\left\lceil\frac{r_0}{\varepsilon^2}\right\rceil\). Now we will prove the following claim:
Claim. Given \(\theta>0\) and \(a>0\), there exist \(r_0>0\) and \(\varepsilon_0>0\) such that if Player I uses the strategy \(S_I^\ast\) defined before and Player II uses any strategy \(S_{II}\), we get \[\mathbb{P}\Big(\tau \geq \frac{a}{\varepsilon^2}\Big)<\theta \quad \mbox{ and } \quad \mathbb{P}(|x_{\tau}-y|\ge a)<\theta. \tag{132}\] Proof of the claim. The first inequality holds if \(r_0<a\). To obtain the other inequality let us define the following sequence of random variables: \[X_{k}= \left\{ \begin{array}{ll} 1 \quad & \quad \mbox{if Player II wins,} \\[8pt] -1 \quad & \quad \mbox{if Player I wins,} \end{array} \right.\] for \(k\geq 1\), and \[Z_k=\sum_{j=1}^{k}X_j.\]

Observe that the \(X_k\) are independent with \(\mathbb{E}[X_k]=0\) and \(\mathbb{V}[X_{k}]=1\). Then, \(\mathbb{E}[Z_k]=0\) and \(\mathbb{V}[Z_k]=k\). If we use Chebyshev's inequality we obtain \[\mathbb{P}\Big(|Z_M|\geq \frac{a}{2\varepsilon}\Big)\leq \frac{\mathbb{V}[Z_M]}{(\frac{a}{2\varepsilon})^2}=\frac{4M\varepsilon^2}{a^2}\leq \frac{4(\frac{r_0}{\varepsilon^2}+1)\varepsilon^2}{a^2}=\frac{4r_0}{a^2}+\frac{4\varepsilon^2}{a^2}<\theta,\] if \(\frac{4r_0}{a^2}<\frac{\theta}{2}\) and \(\frac{4\varepsilon^2}{a^2}<\frac{\theta}{2}\). This says that the probability that Player II wins at least \(\frac{a}{2\varepsilon}\) more times than Player I is small. Then, if we take \(r_0<\frac{a}{2}\), we deduce that \[\mathbb{P}\Big(|x_{\tau}-x_0|\geq \frac{a}{2}\Big)<\theta.\]

Here we use that the token can move away from \(x_0\) by at most \(\varepsilon\) at each step. Now, let us consider \[|x_{\tau}-y|\leq |x_{\tau}-x_0|+|x_0-y|<|x_{\tau}-x_0|+\frac{a}{2}.\]

Hence, we have \[\Big\{|x_{\tau}-y|\geq a \Big\}\subseteq \Big\{|x_{\tau}-x_0|\geq \frac{a}{2}\Big\},\] and then we conclude that \[\mathbb{P}(|x_{\tau}-y|\geq a)<\theta.\]

This ends the proof of the claim.
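The Chebyshev estimate in the claim can also be sanity-checked by a direct simulation of the coin flips. A minimal sketch, with all parameters chosen for illustration only:

```python
import numpy as np

# Monte Carlo sanity check of the claim (illustrative parameters): for the
# fair +-1 sum Z_M with M = ceil(r0/eps^2) plays, Chebyshev gives
# P(|Z_M| >= a/(2*eps)) <= 4*M*eps^2/a^2.

rng = np.random.default_rng(0)
eps, r0, a = 0.01, 0.01, 0.5
M = int(np.ceil(r0 / eps**2))                       # here M = 100 plays
Z = rng.choice([-1, 1], size=(100_000, M)).sum(axis=1)
empirical = np.mean(np.abs(Z) >= a / (2 * eps))     # threshold a/(2*eps) = 25
bound = 4 * M * eps**2 / a**2                       # Chebyshev bound = 0.16
print(f"empirical P = {empirical:.4f} <= bound {bound:.4f}")
```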

Using definition (118) and the fact that \(s= 0\), we get \[|w(x_\tau,t_\tau)-\mu_0(y)|=|w(x_\tau,t_\tau)-w(y,s)|\le L\left(|x_\tau-y|+|t_\tau|\right)\le L\left(|x_\tau-y|+r_0\right).\]

Let us define \(A=\big\{|x_{\tau}-y|\geq a \big\}\). Using the claim, we obtain \[\begin{aligned} \displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[w(x_\tau,t_\tau)]=& \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[w(x_\tau,t_\tau)| A^c]\mathbb{P}(A^c)+ \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[w(x_\tau,t_\tau) | A]\mathbb{P}(A)\notag\\ \displaystyle \ge&\mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[w(x_\tau,t_\tau) | A^c](1-\theta)-\lVert w\rVert_{\infty}\theta\ge \mu_0(y)(1-\theta)-L\left(a+r_0\right)(1-\theta)-\lVert w\rVert_{\infty}\theta. \end{aligned} \tag{133}\]

Adding the running payoff we get \[\begin{aligned} \displaystyle \mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[w(x_\tau,t_\tau)+\varepsilon^2\sum_{j=0}^{\tau-1}h(x_j,t_j)] \displaystyle \ge&\mu_0(y)(1-\theta)-L\left(a+r_0\right)-\lVert w\rVert_{\infty}\theta-\lVert h\rVert_{\infty}\varepsilon^2\mathbb{E}_{S_{I}^{\ast},S_{II}}^{(x_0,t_0)}[\tau]\notag\\ \displaystyle\ge&\mu_0(y)(1-\theta)-L\left(a+r_0\right)-\lVert w\rVert_{\infty}\theta-\lVert h\rVert_{\infty}Cr_0-o(1). \end{aligned} \tag{134}\]

Thus, taking the infimum over all possible strategies \(S_{II}\), and then the supremum over \(S_I\), we get \[\mu^\varepsilon(x_0,t_0)\ge \mu_0(y)(1-\theta)-L(a+r_0)-\lVert w\rVert_{\infty}\theta-\lVert h\rVert_{\infty}Cr_0-o(1)>\mu_0(y)-\eta, \tag{135}\] if \(a>0\), \(\theta>0\), \(r_0>0\) and \(\varepsilon>0\) are small enough.

Analogously, we obtain \[\mu^\varepsilon(x,t)\le \mu_0(y)+\eta. \tag{136}\]

In this case, we use the strategy \(S_{II}^\ast\) pulling towards \(y\).

Case 4. Now, given two points \((x,t), (y,s)\in\Omega\times(0,T)\) with \(|x-y|<r_0\) and \(|t-s|<r_0\), we couple the game starting at \(x_0=x\) and \(t_0=t\) with the game starting at \(y_0=y\) and \(s_0=s\), making the same movements. This means that \(x_{k+1}-y_{k+1}=x_{k}-y_k\) for \(k\ge 0\) (it is clear that \(t_{k+1}-s_{k+1}=t_k-s_k\)). We can think of the two game positions as mimicking each other. This coupling generates two sequences of positions \(x_i\) and \(y_i\) such that \(|x_i - y_i|<r_0\). It is clear that \(t_i=t-\varepsilon^2 i\) and \(s_i=s-\varepsilon^2 i\), so \(|t_i-s_i|<r_0\). This continues until one of the games ends. Here we have two possibilities:

– If the game ends leaving the domain \(\Omega\) (say, for example, \(y_\tau\notin\Omega\)), then for the game starting at \((x_0,t_0)\) we arrive at the position \((x_\tau,t_\tau)\), with \(x_\tau\) close to the exterior point \(y_\tau \not\in \Omega\) (since we have \(|x_\tau - y_\tau|<r_0\)), and hence we can use our previous estimates for points close to the boundary to conclude that \[|\mu^{\varepsilon}(x_0,t_0)- \mu^\varepsilon (y_0,s_0)|< \eta.\]

– If the game ends leaving the domain from the bottom (say \(s_\tau= 0\)), we have that \(t_\tau\le r_0\), and then we can use the estimate obtained in Case 3 to conclude that \[|\mu^{\varepsilon}(x_0,t_0)- \mu^\varepsilon (y_0,s_0)|< \eta.\]

This ends the proof. ◻

Remark 6. For the proof of Lemma 7 we strongly emphasize that the compatibility assumption on the boundary conditions and the initial data is necessary. In fact, suppose that the game starts at \((x_0, t_0) \in \Omega \times (0, T)\), where \(x_0\) is near the boundary \(\partial \Omega\) and \(t_0\) is close to \(0\). Then, the final payoff should be similar whether the game ends by leaving the parabolic domain through the bottom or through the sides.

Now we are ready to prove the second condition of the Arzela-Ascoli type result, Lemma 5.

Lemma 8. Let \((u^{\varepsilon},v^{\varepsilon})\) be a pair of functions that is a solution to the (DPP) (1) given by \[\left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t)=\frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} & (x,t) \in \Omega\times (0,T), \\[8pt] \displaystyle v^{\varepsilon}(x,t)=\frac{1}{2} J_2(v^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\min\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} & (x,t) \in \Omega\times (0,T), \end{array} \right. \tag{137}\] with boundary conditions \[\left\lbrace \begin{array}{ll} u^{\varepsilon}(x,t) = f(x,t) & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times [0,T), \\[8pt] v^{\varepsilon}(x,t) = g(x,t) & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times [0,T), \end{array} \right. \tag{138}\] and initial conditions \[\left\lbrace \begin{array}{ll} u^\varepsilon(x,0)=u_0(x) & x\in \Omega,\\[8pt] v^\varepsilon(x,0)=v_0(x) & x\in \Omega. \end{array} \right. \tag{139}\] Given \(\eta>0\), there exists \(r_0>0\) and \(\varepsilon_0>0\) such that \[|u^{\varepsilon}(x,t)-u^{\varepsilon}(y,s)|<\eta \qquad \mbox{and} \qquad |v^{\varepsilon}(x,t)-v^{\varepsilon}(y,s)|<\eta, \tag{140}\] if \(|x-y|<r_0\), \(|t-s|<r_0\) and \(\varepsilon<\varepsilon_0\).

Proof. We will proceed using ideas similar to the ones used in Lemma 7. We start again with the following definition: Let us consider \(w_1:\left((\mathbb{R}^N\backslash\Omega)\times [0,T)\right)\cup(\Omega\times\{0\})\rightarrow\mathbb{R}\), \[w_1(x,t) =\left\lbrace \begin{array}{ll} f(x,t) & \ \ \mbox{if} \quad x\notin\Omega, t\geq 0, \\[8pt] \displaystyle u_0(x) & \ \ \mbox{if} \quad x\in\Omega, t= 0, \end{array} \right. \tag{141}\] and \(w_2:\left((\mathbb{R}^N\backslash\Omega)\times [0,T)\right)\cup(\Omega\times\{0\})\rightarrow\mathbb{R}\), \[w_2(x,t) =\left\lbrace \begin{array}{ll} g(x,t) & \ \ \mbox{if} \quad x\notin\Omega, t\geq 0, \\[8pt] \displaystyle v_0(x) & \ \ \mbox{if} \quad x\in\Omega, t= 0. \end{array} \right. \tag{142}\]

It is clear that \(w_1(x,t)\ge w_2(x,t)\). Also from the conditions on the data we have that \[\label{Lw.44} |w_i(x,t)-w_i(y,s)|\leq L(|x-y|+|t-s|), \tag{143}\] for \(i=1,2\).

Let us proceed with the proof of the lemma. We consider two cases.

Case 1. Suppose that \((x,t), (y,s)\in\left((\mathbb{R}^N\backslash\Omega)\times [0,T)\right)\cup\left(\Omega\times\{0\}\right)\); then we have \[|u^{\varepsilon}(x,t)-u^{\varepsilon}(y,s)|=|w_1(x,t)-w_1(y,s)|\leq L\left(|x-y|+|t-s|\right)<\eta,\] and \[|v^{\varepsilon}(x,t)-v^{\varepsilon}(y,s)|=|w_2(x,t)-w_2(y,s)|\leq L\left(|x-y|+|t-s|\right)<\eta,\] if \(2Lr_0<\eta\).

Case 2. Let us begin with the estimate for \(u^{\varepsilon}\). Suppose now that \((x,t)\in\Omega\times(0,T)\) and \((y,s)\in\partial\Omega\times(0,T)\) in the first board (we denote \((x,t,1)\) and \((y,s,1)\)). Without loss of generality we suppose again that \(\Omega\subset B_R(0)\backslash B_{\delta}(0)\) and \(y\in\partial B_{\delta}(0)\). Let us call \(x_0=x\) the first position in the game. Player I uses the following strategy, called \(S_{I}^{\ast}\): the token always stays on the first board (Player I decides not to change boards), and Player I pulls towards \(0\) when Tug-of-War is played. In this case we have that \(u^\varepsilon\) is a supersolution to the DPP that appears in Lemma 7 (with \(\beta=\alpha_1\)). Notice that the game is always played in the first board. As Player I wants to maximize the expected value, we get that the first component of our system, \(u^\varepsilon\), satisfies \[u^\varepsilon(x,t)\ge \mu^\varepsilon(x,t), \tag{144}\] (the value function when the player that wants to maximize is allowed to choose to change boards is bigger than or equal to the value function of a game where the player does not have this choice). From this bound and Lemma 7, a lower bound for \(u^\varepsilon\) close to the boundary follows. That is, from the estimate obtained in that lemma, we get \[u^\varepsilon(x,t)\ge w_1(y,s)-\eta, \tag{145}\] if \(|x-y|<r_0\), \(|t-s|<r_0\) and \(\varepsilon<\varepsilon_0\) for some \(r_0\) and \(\varepsilon_0\).

Now, the next estimate requires a particular strategy for Player II, called \(S_{II}^\ast\): when the Tug-of-War game is played, Player II pulls towards \(0\) (on both boards), and if at some step Player I decides to jump to the second board, then Player II decides to stay on that board forever, so the position never comes back to the first board. Using that \(w_1\ge w_2\), we will repeat the ideas used in Lemma 7: Suppose that \(j_{\tau}=1\). This means that \(j_k=1\) for all \(0\leq k \leq \tau\). Then we obtain \[\mathbb{E}_{S_{I},S_{II}^{\ast}}^{(x,t,1)}[\mbox{final payoff}]\leq w_1(y,s)+\eta, \tag{146}\] for \(r_0\) and \(\varepsilon_0\) small enough. On the other hand, if \(j_{\tau}=2\), we have \[\mathbb{E}_{S_{I},S_{II}^{\ast}}^{(x,t,1)}[w_2(x_{\tau},t_\tau)]\leq w_2(y,s)+\eta\leq w_1(y,s)+\eta. \tag{147}\]

In both cases, taking \(\sup_{S_{I}}\) and then \(\inf_{S_{II}}\), we arrive at \[u^\varepsilon(x,t)\le w_1(y,s)+\eta, \tag{148}\] taking \(\delta>0\), \(r_0>0\) and \(\varepsilon>0\) small enough.

Case 3. Now, consider two points \((x,t,j), (y,s,l)\in\Omega\times(0,T)\times\{1,2\}\) with \(j=l\) (that is, both positions are on the same board). We also assume \(|x-y|<r_0\) and \(|t-s|<r_0\). Then, we couple the game starting at \((x_0,t_0,j_0)=(x,t,j)\) with the game starting at \((y_0,s_0,l_0)=(y,s,l)\), making the same movements and changing boards at the same time. This means that \(j_k=l_k\) and \(x_{k+1}-y_{k+1}=x_{k}-y_k\) for \(k\ge 0\) (it is clear that \(t_{k+1}-s_{k+1}=t_k-s_k\)). We can think of the two game positions as mimicking each other. This coupling generates two sequences of positions \(x_i\) and \(y_i\) such that \(|x_i - y_i|<r_0\) and \(j_i=l_i\). It is clear that \(t_i=t-\varepsilon^2 i\) and \(s_i=s-\varepsilon^2 i\), so \(|t_i-s_i|<r_0\). Using the same computation as in Lemma 7 we get \[|u^{\varepsilon}(x,t)-u^{\varepsilon}(y,s)|<\eta, \tag{149}\] if \(r_0>0\) and \(\varepsilon>0\) are small enough.

Analogously we can obtain the estimates for \(v^{\varepsilon}\) and complete the proof. ◻

As a corollary we obtain the following result.

Theorem 4. Given \((u^\varepsilon,v^\varepsilon)_\varepsilon\) solutions to the DPP (1), there exists a sequence \(\varepsilon_j \to 0\) such that \[u^{\varepsilon_j}\rightrightarrows u, \qquad v^{\varepsilon_j}\rightrightarrows v,\] uniformly in \(\overline\Omega\times[0,T)\) and the limit functions \((u,v)\) are continuous in \(\overline\Omega\times[0,T)\).

6. The limit is a viscosity solution to the PDE system

In this section we will prove the following theorem.

Theorem 5. Let \((u,v)\) be continuous functions that are a uniform limit of a sequence of values of the game, that is, \[u^{\varepsilon_j}\rightrightarrows u, \qquad v^{\varepsilon_j}\rightrightarrows v,\] uniformly in \(\overline{\Omega}\times [0,T)\) as \(\varepsilon_j \to 0\). Then, the limit pair \((u,v)\) is a viscosity solution to (19) in the sense of Definition 2.

Proof. We divide the proof into several cases.

(1) \(u\) and \(v\) are ordered: From the fact that \[u^{\varepsilon_j} \geq v^{\varepsilon_j},\] in \(\mathbb{R}^N\times[0,T)\) we get \[u \geq v,\] in \(\overline{\Omega}\times[0,T)\).

(2) The lateral boundary conditions: As we have that \[u^{\varepsilon_j} = f, \qquad v^{\varepsilon_j} = g,\] in \((\mathbb{R}^N \setminus \Omega)\times [0,T)\) we get \[u|_{\partial \Omega\times [0,T)} = f, \qquad v|_{\partial \Omega\times [0,T)} = g.\]

(3) The initial conditions: As we have that \[u^{\varepsilon_j}(x,0) = u_0(x), \qquad v^{\varepsilon_j}(x,0) = v_0(x),\] we obtain \[u(x,0) = u_0(x), \qquad v(x,0) = v_0(x).\]

(4) The equation for \(u\): First, let us show that \(u\) is a viscosity supersolution to \[\label{Supu} \frac{\partial u}{\partial t}(x,t)-\Delta_p^1 u(x,t)= h_1(x,t), \tag{150}\] for \((x,t)\in \Omega\times (0,T)\). To this end, consider a point \((x_0,t_0)\in \Omega\times (0,T)\) and a smooth function \(\varphi\in C^{2,1}(\Omega\times(0,T))\) such that \(u-\varphi\) attains a strict minimum at \((x_0,t_0)\) with \((u-\varphi)(x_0,t_0)=0\). Then, from the uniform convergence there exists a sequence of points, that we will denote by \(\{(x_{\varepsilon},t_{\varepsilon})\}_{\varepsilon>0}\), such that \(x_{\varepsilon}\rightarrow x_0\) and \(t_{\varepsilon}\rightarrow t_0\), and it holds \[(u^{\varepsilon}-\varphi)(x_{\varepsilon},t_{\varepsilon})\leq(u^{\varepsilon}-\varphi)(y,s)+o(\varepsilon^2), \tag{151}\] for all \((y,s)\in\Omega\times[0,T)\), that is, \[\label{ump} u^{\varepsilon}(y,s)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\geq\varphi(y,s)-\varphi(x_{\varepsilon},t_{\varepsilon})-o(\varepsilon^2). \tag{152}\]

From the DPP (1) we have \[\begin{aligned} \label{desDPP} \displaystyle 0=&\frac{1}{2}\left( J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\right)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon}), J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon}) \Big\}\notag\\ \geq & J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon}). \end{aligned} \tag{153}\]

Using (152) we get \[0\ge J_1(\varphi)(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon})-o(\varepsilon^2). \tag{154}\] Adding and subtracting \(\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)\) we obtain \[\label{J1phi} 0\ge \varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon})+J_1(\varphi)(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-o(\varepsilon^2). \tag{155}\]

Consider \[\begin{aligned} \label{J1conphi} \displaystyle J_1(\varphi)&(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2) \notag\\[8pt] \displaystyle =& \alpha_1\underbrace{\left[\frac{1}{2} \sup_{y \in B_{\varepsilon}(x_{\varepsilon})}(\varphi(y,t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)) + \frac{1}{2} \inf_{y \in B_{\varepsilon}(x_{\varepsilon})}(\varphi(y,t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2))\right]}_{I} \notag\\[8pt] & +(1-\alpha_1)\underbrace{\fint_{B_{\varepsilon}(x_{\varepsilon})}(\varphi(y,t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2))\,dy}_{II}+\varepsilon^2h_1(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2). \end{aligned} \tag{156}\]

Let us analyze \(I\) and \(II\). We begin with \(I\): Assume that \(\nabla\varphi(x_0,t_0)\neq 0\). Let us define \(z_{\varepsilon}=\frac{\nabla\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)}{|\nabla\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)|}\neq 0\). If \(\varepsilon>0\) is small enough, it holds that \[\sup_{y \in B_{\varepsilon}(x_{\varepsilon})}\varphi(y,t_{\varepsilon}-\varepsilon^2)\sim\varphi(x_{\varepsilon}+\varepsilon z_{\varepsilon},t_{\varepsilon}-\varepsilon^2) \quad \mbox{and} \quad \inf_{y \in B_{\varepsilon}(x_{\varepsilon})}\varphi(y,t_{\varepsilon}-\varepsilon^2)\sim\varphi(x_{\varepsilon}-\varepsilon z_{\varepsilon},t_{\varepsilon}-\varepsilon^2).\]

Then, we have \[I\sim\frac{1}{2}(\varphi(x_{\varepsilon}+\varepsilon z_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2))+\frac{1}{2}(\varphi(x_{\varepsilon}-\varepsilon z_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)). \tag{157}\]

From a simple Taylor expansion we conclude that \[\begin{aligned} \displaystyle\frac{1}{2}&(\varphi(x_{\varepsilon}+\varepsilon z_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2))+\frac{1}{2}(\varphi(x_{\varepsilon}-\varepsilon z_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2))\notag\\ \displaystyle&=\frac{1}{2}\varepsilon^2\langle D^2\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2 )z_{\varepsilon},z_{\varepsilon}\rangle +o({\varepsilon^2}). \end{aligned} \tag{158}\]

Dividing by \(\varepsilon^2\) and taking the limit as \(\varepsilon\rightarrow 0\) we obtain \[\label{limI} \frac{1}{2}\langle D^2\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2 )z_{\varepsilon},z_{\varepsilon}\rangle\rightarrow \frac{1}{2}\langle D^2\varphi(x_0,t_0 )z_0,z_0\rangle, \tag{159}\] where \(z_0=\frac{\nabla\varphi(x_0,t_0)}{|\nabla\varphi(x_0,t_0)|}\). Thus \[\frac{I}{\varepsilon^2}\rightarrow\frac{1}{2}\Delta^1_{\infty}\varphi(x_0,t_0).\]

See [17] for more details. When \(\nabla\varphi(x_0,t_0)=0\), arguing again using Taylor expansions, we get \[\limsup_{\varepsilon\to 0} \frac{I}{\varepsilon^2} \geq \frac{1}{2} \lambda_1 (D^2 \varphi (x_0,t_0)).\]

See [9] for the details.

Now, we look at \(II\): Using again Taylor expansions we obtain \[\fint_{B_{\varepsilon}(x_{\varepsilon})}(\varphi(y,t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2))\,dy=\frac{\varepsilon^2}{2(N+2)}\Delta \varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)+o(\varepsilon^2).\]
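This expansion follows from the standard second-moment identity for balls: by symmetry, \[\fint_{B_{\varepsilon}(x_{\varepsilon})}(y_i-(x_{\varepsilon})_i)(y_j-(x_{\varepsilon})_j)\,dy=\frac{\varepsilon^2}{N+2}\,\delta_{ij},\] so, averaging the Taylor expansion of \(\varphi(\cdot,t_{\varepsilon}-\varepsilon^2)\) around \(x_{\varepsilon}\), the gradient term has zero mean over the ball and the quadratic term contributes exactly \(\frac{1}{2}\sum_i \partial_{ii}\varphi\cdot\frac{\varepsilon^2}{N+2}=\frac{\varepsilon^2}{2(N+2)}\Delta\varphi\).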

Dividing by \(\varepsilon^2\) and taking limits as \(\varepsilon \rightarrow 0\) we get \[\label{limII} \displaystyle \frac{II}{\varepsilon^2}\rightarrow \frac{1}{2(N+2)}\Delta \varphi(x_{0},t_0). \tag{160}\]

Therefore, coming back to (155), dividing by \(\varepsilon^2\) and taking the limit as \(\varepsilon\rightarrow 0\), we obtain \[0\geq -\frac{\partial\varphi}{\partial t}(x_0,t_0)+\frac{\alpha_1}{2}\Delta^1_{\infty}\varphi(x_0,t_0)+\frac{1-\alpha_1}{2(N+2)}\Delta\varphi(x_0,t_0)+h_1(x_0,t_0), \tag{161}\] when \(\nabla \varphi (x_0,t_0) \neq 0\) and \[0\geq -\frac{\partial\varphi}{\partial t}(x_0,t_0)+\frac{\alpha_1}{2}\lambda_1(D^2\varphi(x_0,t_0))+\frac{1-\alpha_1}{2(N+2)}\Delta\varphi(x_0,t_0)+h_1(x_0,t_0), \tag{162}\] when \(\nabla \varphi (x_0,t_0) = 0\).

Using the definition of the normalized \(p-\)Laplacian we arrive at \[\frac{\partial\varphi}{\partial t}(x_0,t_0)-\Delta_p^1\varphi(x_0,t_0)\geq h_1(x_0,t_0). \tag{163}\]

Thus we proved that \(u\) is a viscosity supersolution of (150).

Now we are going to prove that \(u\) is a viscosity solution to \[\label{parmonica} \frac{\partial u}{\partial t}(x,t)-\Delta_p^1 u(x,t)= h_1(x,t), \tag{164}\] in the set \(\left(\Omega\times (0,T)\right)\cap\{ u>v \}\). Let us consider \((x_0,t_0)\in\left(\Omega\times (0,T)\right)\cap\{ u>v \}\). Let \(\eta>0\) be such that \[u(x_0,t_0)\geq v(x_0,t_0)+3\eta. \tag{165}\]

Then, using that \(u\) and \(v\) are continuous functions, there exists \(\delta>0\) such that \[u(y,t)\geq v(y,t)+2\eta \quad \mbox{for all} \quad (y,t)\in B_{\delta}(x_0)\times(t_0-\delta,t_0+\delta), \tag{166}\] and, using that \(u^{\varepsilon}\rightrightarrows u\) and \(v^{\varepsilon}\rightrightarrows v\) we have \[u^{\varepsilon}(y,t)\geq v^{\varepsilon}(y,t)+\eta \quad \mbox{for all} \quad (y,t)\in B_{\delta}(x_0)\times(t_0-\delta,t_0+\delta) , \tag{167}\] for \(0<\varepsilon<\varepsilon_0\) for some \(\varepsilon_0>0\). Given \((z,t)\in B_{\frac{\delta}{2}}(x_0)\times(t_0-\frac{\delta}{2},t_0+\frac{\delta}{2})\) and \(\varepsilon<\min\{\varepsilon_0,\frac{\delta}{2}\}\) we obtain \[B_{\varepsilon}(z)\times\{t-\varepsilon^2\}\subset B_{\delta}(x_0)\times(t_0-\frac{\delta}{2},t_0+\frac{\delta}{2}). \tag{168}\]

Using that \(u^{\varepsilon}\rightrightarrows u\) we have the following limits: \[\label{Claim1} \displaystyle \sup_{y \in B_{\varepsilon}(z)}u^{\varepsilon}(y,t-\varepsilon^2)\rightarrow u(z,t), \qquad \mbox{ as } \varepsilon\rightarrow 0. \tag{169}\]

In fact, from our previous estimates we have that \[\Big|\sup_{y \in B_{\varepsilon}(z)}u^{\varepsilon}(y,t-\varepsilon^2)-u(z,t) \Big|\leq \sup_{y \in B_{\varepsilon}(z)}|u^{\varepsilon}(y,t-\varepsilon^2)-u(y,t-\varepsilon^2)|+ \sup_{y \in B_{\varepsilon}(z)}|u(y,t-\varepsilon^2)-u(z,t)|. \tag{170}\]

Using that \(u^{\varepsilon}\rightrightarrows u\), there exists \(\varepsilon_1>0\) such that if \(\varepsilon<\varepsilon_1\) \[|(u^{\varepsilon}-u)(x,t)|<\frac{\theta}{2} \quad \mbox{for all} \quad (x,t)\in\Omega\times(0,T]. \tag{171}\]

Now, using that \(u\) is continuous, there exists \(\varepsilon_2>0\) such that \[|u(y,t)-u(z,s)|<\frac{\theta}{2} \quad \mbox{if} \quad |(y,t)-(z,s)|<\varepsilon_2, \tag{172}\] thus, if we take \(\varepsilon<\frac{1}{2}\min\{\varepsilon_0,\varepsilon_1,\varepsilon_2,\frac{\delta}{2} \}\) we obtain \[\Big|\sup_{y \in B_{\varepsilon}(z)}u^{\varepsilon}(y,t-\varepsilon^2)-u(z,t)\Big|<\theta. \tag{173}\]

This proves (169).

Also, with a similar argument, we get, \[\label{Claim2} \displaystyle \lim_{\varepsilon\to 0}\inf_{y \in B_{\varepsilon}(z)}u^{\varepsilon}(y,t-\varepsilon^2)= u(z,t). \tag{174}\]

Finally, we also have, \[\label{Claim3} \displaystyle \lim_{\varepsilon\to 0} \fint_{B_{\varepsilon}(z)}u^{\varepsilon}(y,t-\varepsilon^2)\,dy= u(z,t). \tag{175}\]

In fact, let us compute \[\begin{array}{ll} \displaystyle\left|\fint_{B_{\varepsilon}(z)}u^{\varepsilon}(y,t-\varepsilon^2)\,dy-u(z,t)\right| \leq \fint_{B_{\varepsilon}(z)} \left| u^{\varepsilon}(y,t-\varepsilon^2)-u(y,t-\varepsilon^2) \right| dy+ \fint_{B_{\varepsilon}(z)} \left| u(y,t-\varepsilon^2)-u(z,t)\right| dy. \end{array} \tag{176}\]

Now we use again that \(u^{\varepsilon}\rightrightarrows u\) and that \(u\) is a continuous function to obtain \[\fint_{B_{\varepsilon}(z)}|u^{\varepsilon}(y,t-\varepsilon^2)-u(y,t-\varepsilon^2)|\,dy<\frac{\theta}{2} \qquad \mbox{ and }\qquad \fint_{B_{\varepsilon}(z)}|u(y,t-\varepsilon^2)-u(z,t)|\,dy<\frac{\theta}{2},\] for \(\varepsilon>0\) small enough. Thus we get \[\left|\fint_{B_{\varepsilon}(z)}u^{\varepsilon}(y,t-\varepsilon^2)\,dy-u(z,t) \right|<\theta. \tag{177}\]

Using the previous limits, (169), (174) and (175) we obtain \[J_1(u^{\varepsilon})(z,t-\varepsilon^2)\rightarrow u(z,t) \qquad \mbox{ as } \varepsilon\rightarrow 0. \tag{178}\]

Analogously, we can prove that \[J_2(v^{\varepsilon})(z,t-\varepsilon^2)\rightarrow v(z,t), \qquad \mbox{ as } \varepsilon\rightarrow 0. \tag{179}\]

Now, if we recall that \(u(z,t)\geq v(z,t)+2\eta\), we obtain \[\label{river-plate} J_1(u^{\varepsilon})(z,t-\varepsilon^2)\geq J_2(v^{\varepsilon})(z,t-\varepsilon^2)+\eta, \tag{180}\] if \(\varepsilon>0\) is small enough. Then, using the DPP and (180), we have \[\label{DPPu} \begin{array}{ll} \displaystyle u^{\varepsilon}(z,t)=\frac{1}{2} J_1(u^{\varepsilon})(z,t-\varepsilon^2)+\frac{1}{2}\max\{J_1(u^{\varepsilon})(z,t-\varepsilon^2),J_2(v^{\varepsilon})(z,t-\varepsilon^2)\} =J_1(u^{\varepsilon})(z,t-\varepsilon^2), \end{array} \tag{181}\] for all \((z,t)\in B_{\frac{\delta}{2}}(x_0)\times(t_0-\frac{\delta}{2},t_0+\frac{\delta}{2})\) and for every \(\varepsilon>0\) small enough. Let us prove that \(u\) is a viscosity subsolution of equation (164). Given now \(\varphi\in\mathscr{C}^{2,1}(\Omega\times(0,T))\) such that \(u-\varphi\) attains a maximum at \((x_0,t_0)\) with \((u-\varphi)(x_0,t_0)=0\), from the uniform convergence there exists a sequence of points \((x_{\varepsilon},t_{\varepsilon})_{\varepsilon>0}\subset B_{\frac{\delta}{2}}(x_0)\times(t_0-\frac{\delta}{2},t_0+\frac{\delta}{2})\), such that \(x_{\varepsilon}\rightarrow x_0\), \(t_{\varepsilon}\rightarrow t_0\) and \[(u^{\varepsilon}-\varphi)(x_{\varepsilon},t_{\varepsilon})\geq(u^{\varepsilon}-\varphi)(y,t)-o(\varepsilon^2), \tag{182}\] for all \((y,t)\in\Omega\times(0,T)\), that is, \[\label{ump2} u^{\varepsilon}(y,t)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\leq\varphi(y,t)-\varphi(x_{\varepsilon},t_{\varepsilon})+o(\varepsilon^2). \tag{183}\]

From the DPP (1) and using (183) we have \[\begin{aligned} \label{igualDPP} \displaystyle 0=&\frac{1}{2} J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2), J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2) \Big\}-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon}) \notag\\[8pt] \displaystyle \qquad =& J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\le J_1(\varphi)(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon})+o(\varepsilon^2). \end{aligned} \tag{184}\]

If we add and subtract \(\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)\) we get \[\label{subphi} 0\le \varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon})+J_1(\varphi)(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)+o(\varepsilon^2). \tag{185}\]

Passing to the limit as before we obtain \[0\leq -\frac{\partial\varphi}{\partial t}(x_0,t_0)+\frac{\alpha_1}{2}\Delta^1_{\infty}\varphi(x_0,t_0)+\frac{(1-\alpha_1)}{2(N+2)}\Delta\varphi(x_0,t_0)+h_1(x_0,t_0), \tag{186}\] when \(\nabla \varphi (x_0,t_0) \neq 0\) and \[0\leq -\frac{\partial\varphi}{\partial t}(x_0,t_0)+\frac{\alpha_1}{2}\lambda_N (D^2 \varphi(x_0,t_0))+\frac{(1-\alpha_1)}{2(N+2)}\Delta\varphi(x_0,t_0)+h_1(x_0,t_0), \tag{187}\] if \(\nabla \varphi (x_0,t_0) = 0\). Hence we arrived to \[\frac{\partial\varphi}{\partial t}(x_0,t_0)-\Delta_p^1\varphi(x_0,t_0)\leq h_1(x_0,t_0). \tag{188}\]

This proves that \(u\) is a viscosity subsolution of equation (164) inside the open set \(\{u>v\}\).

As we have that \(u\) is a viscosity supersolution in the whole \(\Omega\times(0,T)\), we conclude that \(u\) is a viscosity solution to \[\frac{\partial u}{\partial t}(x_0,t_0)-\Delta_p^1 u(x_0,t_0)=h_1(x_0,t_0) , \tag{189}\] in the set \(\{u>v\}\).

(5) The equation for \(v\): The proof that \(v\) is a viscosity subsolution to \[\frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)= h_2(x,t),\] is analogous. Here we use that \[\begin{aligned} \label{desDPP-v} \displaystyle 0=&\frac{1}{2}\left(J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\right)+\frac{1}{2}\min\Big\{ J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon}), J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon}) \Big\}\notag\\[8pt] \leq& J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon}). \end{aligned} \tag{190}\]

To show that \(v\) is a viscosity solution to \[\frac{\partial v}{\partial t}(x_0,t_0)-\Delta_q^1 v(x_0,t_0)= h_2(x_0,t_0), \tag{191}\] for \((x_0,t_0)\in\left(\Omega\times(0,T)\right)\cap\{u>v\}\), we proceed as before.

(6) Extra condition: Now let us prove the extra condition \[\displaystyle \left(\frac{\partial u}{\partial t}(x,t)-\Delta_{p}^{1}u(x,t)\right)+\left(\frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)\right)= h_1(x,t)+h_2(x,t) ,\] for \((x,t)\in \Omega\times (0,T)\). Notice that it is only necessary to prove the case \(u=v\), because in the set \(\{u>v\}\) we have \[\frac{\partial u}{\partial t}(x,t)-\Delta_{p}^{1}u(x,t)=h_1(x,t) \quad \mbox{and} \quad \frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)=h_2(x,t). \tag{192}\]

Let us start with the subsolution case. Let \((x_0,t_0)\in\{u=v\}\) and let \(\varphi\in\mathscr{C}^{2,1}\) be such that \((u-\varphi)(x_0,t_0)=0\) is a maximum of \(u-\varphi\). Notice that, since \(v(x_0,t_0)=u(x_0,t_0)\) and \(v\leq u\) in \(\Omega\times(0,T)\), we also have that \((v-\varphi)(x_0,t_0)=0\) is a maximum of \(v-\varphi\). Then, from the uniform convergence, there exists a sequence of points \(\{(x_{\varepsilon},t_{\varepsilon})\}_{\varepsilon>0}\subset B_{\frac{\delta}{2}}(x_0)\times(0,T)\) such that \(x_{\varepsilon}\rightarrow x_0\), \(t_{\varepsilon}\rightarrow t_0\), and \[\label{umenosphi} (u^{\varepsilon}-\varphi)(x_{\varepsilon},t_{\varepsilon})\geq(u^{\varepsilon}-\varphi)(y,t)+o(\varepsilon^2), \tag{193}\] for all \((y,t)\in\Omega\times(0,T)\). Let us consider two cases:

Case 1. Suppose that \(u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})>v^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})\) for a subsequence such that \(\varepsilon_j\rightarrow 0\). Let us notice that, if \[J_1(u^{\varepsilon})(z,t-\varepsilon^2)<J_2(v^{\varepsilon})(z,t-\varepsilon^2),\] then the DPP gives \[u^{\varepsilon}(z,t)=\frac{1}{2} J_1(u^{\varepsilon})(z,t-\varepsilon^2)+\frac{1}{2} J_2(v^{\varepsilon})(z,t-\varepsilon^2) \quad \mbox{and} \quad v^{\varepsilon}(z,t)=\frac{1}{2} J_1(u^{\varepsilon})(z,t-\varepsilon^2)+\frac{1}{2} J_2(v^{\varepsilon})(z,t-\varepsilon^2), \tag{194}\] and hence \[u^{\varepsilon}(z,t)=v^{\varepsilon}(z,t),\] in this case.

This remark implies that, when \(u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})>v^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})\), we have \[J_1(u^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)\geq J_2(v^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2). \tag{195}\]

If we use the DPP (1) we get \[\begin{aligned} \displaystyle 0=&\frac{1}{2}\left(J_1(u^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)-u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})\right)\notag\\ &+\frac{1}{2} \max\{J_1(u^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)-u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j}) , J_2(v^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)-u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})\} \notag\\[8pt] \displaystyle =&\frac{1}{2}\left(J_1(u^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)-u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})\right)+\frac{1}{2} \left(J_1(u^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)-u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})\right)\notag\\[8pt] \displaystyle =&J_1(u^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)-u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j}), \end{aligned} \tag{196}\] and using (193) we obtain \[0=J_1(u^{\varepsilon})(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)-u^{\varepsilon}(x_{\varepsilon_j},t_{\varepsilon_j})\leq J_1(\varphi)(x_{\varepsilon_j},t_{\varepsilon_j}-\varepsilon_j^2)-\varphi(x_{\varepsilon_j},t_{\varepsilon_j})+o(\varepsilon_j^2). \tag{197}\]

Taking the limit as \(\varepsilon_j\rightarrow 0\) as before, we get \[\label{ine-1} \frac{\partial\varphi}{\partial t}(x_0,t_0)-\Delta_{p}^{1}\varphi(x_0,t_0)\leq h_1(x_0,t_0). \tag{198}\]

We have proved before that \(v\) is a subsolution to \[\frac{\partial v}{\partial t}(x,t)-\Delta_{q}^{1}v(x,t) = h_2(x,t),\] in the whole \(\Omega\times(0,T)\). Therefore, as \((v-\varphi)(x_0,t_0)=0\) is a maximum of \(v-\varphi\), we get \[\label{ine-2} \frac{\partial\varphi}{\partial t}(x_0,t_0)-\Delta_{q}^{1}\varphi(x_0,t_0)\leq h_2(x_0,t_0). \tag{199}\]

Thus, from (198) and (199), we conclude that \[\left(\frac{\partial\varphi}{\partial t}(x_0,t_0)-\Delta_{p}^{1}\varphi(x_0,t_0)\right)+ \left(\frac{\partial\varphi}{\partial t}(x_0,t_0)-\Delta_{q}^{1}\varphi(x_0,t_0)\right)\leq h_1(x_0,t_0)+h_2(x_0,t_0). \tag{200}\]

Case 2. Suppose that \(u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})=v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\) for every \(\varepsilon<\varepsilon_0\). Subtracting the two identities in the DPP (1), we get \(0=\frac{1}{2}\left(J_1(u^{\varepsilon})-\min\{J_1(u^{\varepsilon}),J_2(v^{\varepsilon})\}\right)+\frac{1}{2}\left(\max\{J_1(u^{\varepsilon}),J_2(v^{\varepsilon})\}-J_2(v^{\varepsilon})\right)\), with every term evaluated at \((x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)\); since both terms are nonnegative, the maximum is attained by \(J_2(v^{\varepsilon})\) and the minimum by \(J_1(u^{\varepsilon})\). Hence \[\begin{aligned} \displaystyle u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})=&\frac{1}{2} J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)+\frac{1}{2} J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2), \notag\\[8pt] \displaystyle v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})=&\frac{1}{2} J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2) +\frac{1}{2} J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2), \end{aligned} \tag{201}\] and, moreover, \[\begin{aligned} \displaystyle \max\{J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2), J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)\}=&J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2), \notag\\[8pt] \displaystyle \min\{J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2), J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)\}=&J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2). \end{aligned} \tag{202}\]

If we use (193) again, we obtain \[\begin{array}{ll} \varphi(y,t)-\varphi(x_{\varepsilon},t_{\varepsilon})\geq u^{\varepsilon}(y,t)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})+o(\varepsilon^2) \geq v^{\varepsilon}(y,t)-v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})+o(\varepsilon^2), \end{array} \tag{203}\] where we used that \(u^{\varepsilon}\geq v^{\varepsilon}\) and \(u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})= v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\). Thus \[\begin{aligned} \displaystyle 0=&\frac{1}{2} \left(J_1(u^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-u^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\right)+\frac{1}{2} \left(J_2(v^{\varepsilon})(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-v^{\varepsilon}(x_{\varepsilon},t_{\varepsilon})\right)\notag\\[8pt] \displaystyle \quad \leq& \frac{1}{2} \left(J_1(\varphi)(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon})\right)+\frac{1}{2} \left(J_2(\varphi)(x_{\varepsilon},t_{\varepsilon}-\varepsilon^2)-\varphi(x_{\varepsilon},t_{\varepsilon})\right)+o(\varepsilon^2). \end{aligned} \tag{204}\]

Taking the limit as \(\varepsilon\rightarrow 0\), we conclude that \[\left(\frac{\partial\varphi}{\partial t}(x_0,t_0)-\Delta_{p}^{1}\varphi(x_0,t_0)\right)+ \left(\frac{\partial\varphi}{\partial t}(x_0,t_0)-\Delta_{q}^{1}\varphi(x_0,t_0)\right)\leq h_1(x_0,t_0)+h_2(x_0,t_0). \tag{205}\]

Thus, we obtain the subsolution inequality in the viscosity sense (taking care of the semicontinuous envelopes when the gradient of \(\varphi\) vanishes). We have just proved that the extra condition holds with an inequality when we touch \(u\) and \(v\) from above at some point \((x_0,t_0)\) with a smooth test function.

The proof that the other inequality holds when we touch \(u\) and \(v\) from below is analogous and hence we omit the details. ◻

7. Final remarks

Below we describe some possible extensions of our results.

7.1. Parabolic/elliptic system

Suppose that we propose a different game on one of the boards, say the second one. On this board the players play Tug-of-War with noise without changing the time variable. That is, if the token is at \((x_k,t_k,2)\in\Omega\times(0,T)\times\{1,2\}\) and \(j_{k+1}=2\), the next position is chosen by playing Tug-of-War with noise at the time level \(t_k\), that is, \(t_{k+1}=t_k\).

Hence, we obtain a game played on two boards. On the first board the players play changing \(t\) to \(t-\varepsilon^2\), while on the second board the token remains at the same time level. The DPP associated with this game is \[\label{E-P} \left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t)=\frac{1}{2} J_1(u^{\varepsilon})(x,t-\varepsilon^2)+\frac{1}{2}\max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t)\Big\} & (x,t) \in \Omega\times (0,T), \\[8pt] \displaystyle v^{\varepsilon}(x,t)=\frac{1}{2} J_2(v^{\varepsilon})(x,t)+\frac{1}{2}\min\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t)\Big\} & (x,t) \in \Omega\times (0,T), \end{array} \right. \tag{206}\] with the boundary conditions \[\left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t) = f(x,t), & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times [0,T), \\[8pt] \displaystyle v^{\varepsilon}(x,t) = g(x,t), & (x,t) \in (\mathbb{R}^{N} \backslash \Omega)\times [0,T), \end{array} \right. \tag{207}\] and initial condition \[\left\lbrace \begin{array}{ll} \displaystyle u^\varepsilon(x,0)=u_0(x), & x\in \Omega. \end{array} \right. \tag{208}\]

If we repeat the computations used in this paper, we obtain a pair of continuous functions \((u,v)\) as the uniform limit of the solutions to the DPP (206). These functions solve the following system: \[\label{E-P-DO1} \left\lbrace \begin{array}{ll} \displaystyle u (x,t) \geq v(x,t) & \ (x,t)\in\overline{\Omega}\times [0,T), \\[8pt] \displaystyle \frac{\partial u}{\partial t}(x,t)-\Delta_{p}^{1}u(x,t)\geq h_1(x,t) & (x,t)\in\Omega\times (0,T), \\[8pt] \displaystyle -\Delta_q^1 v(x,t)\leq h_2(x,t) & (x,t)\in\Omega\times (0,T),\\[8pt] \displaystyle \frac{\partial u}{\partial t}(x,t)-\Delta_p^1 u(x,t)=h_1(x,t) & (x,t)\in(\Omega\times (0,T))\cap\{u>v\},\\[8pt] \displaystyle -\Delta_q^1 v(x,t)=h_2(x,t) & (x,t)\in(\Omega\times (0,T))\cap\{u>v\}, \end{array} \right. \tag{209}\] with the boundary conditions \[\label{EPBC} \left\lbrace \begin{array}{ll} \displaystyle u(x,t) = f(x,t), & (x,t) \in \partial \Omega\times [0,T), \\[8pt] \displaystyle v(x,t) = g(x,t), & (x,t) \in \partial \Omega\times [0,T), \end{array} \right. \tag{210}\] and the initial condition \[\label{EPIC} \left\lbrace \begin{array}{ll} \displaystyle u(x,0)=u_0(x), & x\in\Omega. \end{array} \right. \tag{211}\] Notice that there is no initial condition for the second component \(v\), both in the DPP and in the limit PDE system. This is due to the game rules in the DPP and to the elliptic nature of the equation for \(v\) in the system. Regarding the functions \(v^\varepsilon\) and \(v\), the variable \(t\) acts just as a parameter.

Here we remark that, to construct a solution to the DPP (206), we can use the arguments in the proof of Theorem 2, constructing an increasing sequence of subsolutions by iterating the DPP.
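To illustrate this iteration, the following is a minimal numerical sketch (not a rigorous scheme): we discretize the DPP (206) on a one-dimensional grid, take constant exterior data, set \(h_1=h_2\equiv 0\) so that the \(\varepsilon^2 h_i\)-terms of \(J_1,J_2\) drop out, and solve the implicit \(v\)-equation at each time level by fixed-point iteration. All concrete choices below (domain, data, parameters) are ours, for illustration only.

```python
import numpy as np

# A rough 1-D (N = 1) discretization of the DPP (206); the grid, the exterior
# data f, g, the initial datum u0 and the weights alpha_i are placeholders.

eps = 0.1
alpha1 = alpha2 = 0.5                     # mixing weights of the two boards
x = np.linspace(-1.0, 1.0, 41)            # grid on Omega = (-1, 1)
r = max(1, round(eps / (x[1] - x[0])))    # B_eps(x) as a window of grid cells
T, dt = 1.0, eps**2                       # the game clock moves in steps eps^2
f, g = 0.0, -1.0                          # constant exterior data, f >= g
u = 1.0 - x**2                            # initial condition u0 for u
v = np.full_like(x, -1.0)                 # v has no initial condition: any
                                          # ordered starting guess will do

def tug_step(w, alpha):
    """One Tug-of-War-with-noise step: alpha*(sup+inf)/2 + (1-alpha)*average."""
    out = np.empty_like(w)
    for i in range(len(w)):
        ball = w[max(0, i - r): i + r + 1]
        out[i] = alpha * 0.5 * (ball.max() + ball.min()) + (1 - alpha) * ball.mean()
    return out

t = dt
while t <= T:
    J1 = tug_step(u, alpha1)              # board 1 uses the level t - eps^2
    # Board 2 stays at level t, so the v-equation in (206) is implicit:
    # iterate it to an (approximate) fixed point, as in the remark above.
    for _ in range(200):
        J2 = tug_step(v, alpha2)
        v_new = 0.5 * J2 + 0.5 * np.minimum(J1, J2)
        v_new[0] = v_new[-1] = g          # exterior datum for v
        done = np.max(np.abs(v_new - v)) < 1e-10
        v = v_new
        if done:
            break
    u = 0.5 * J1 + 0.5 * np.maximum(J1, tug_step(v, alpha2))
    u[0] = u[-1] = f                      # exterior datum for u
    t += dt

print("ordering u >= v preserved:", bool((u >= v - 1e-8).all()))
```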

7.2. \(n\) membranes

We can generalize the game to a system of \(n\) membranes. Suppose that, for \(1\leq k\leq n\), we have \[\begin{array}{ll} \displaystyle J_k(w)(x,t)=\alpha_k\left[\frac{1}{2} \sup_{y \in B_{\varepsilon}(x)}w(y,t-\varepsilon^2) + \frac{1}{2} \inf_{y \in B_{\varepsilon}(x)}w(y,t-\varepsilon^2)\right]+(1-\alpha_k) \fint_{B_{\varepsilon}(x)}w(y,t-\varepsilon^2)\,dy-\varepsilon^2h_k(x,t-\varepsilon^2). \end{array} \tag{212}\]

Associated with these games are the operators \[L_k(w)=\frac{\partial w}{\partial t}-\Delta^1_{p_k}w+h_k, \tag{213}\] where \[\frac{\alpha_k}{1-\alpha_k}=\frac{p_k-2}{N+2}, \qquad \mbox{that is,} \qquad \alpha_k=\frac{p_k-2}{N+p_k}.\]

Given \(w_1\geq w_2\geq\dots\geq w_n\) defined outside \(\Omega\times(0,T)\), we can consider the DPP \[\label{DPPEXTn} \left\lbrace \begin{array}{ll} \displaystyle u_k^{\varepsilon}(x,t)=\frac{1}{2}\max_{i\geq k}\Big\{ J_i(u_i^{\varepsilon})(x,t-\varepsilon^2)\Big\}+\frac{1}{2} \min_{l\leq k}\Big\{ J_l(u_l^{\varepsilon})(x,t-\varepsilon^2)\Big\} , & (x,t) \in \Omega\times(0,T), \\[8pt] u_k^{\varepsilon}(x,t) = w_k(x,t), \qquad & (x,t) \in (\Omega\times(0,T))^c, \end{array} \right. \tag{214}\] for \(1\leq k\leq n\).

This DPP is associated with a game that is played on \(n\) boards. On board \(k\) a fair coin is tossed and the winner is allowed to change boards, but Player I can only choose a board with index greater than or equal to \(k\), while Player II may choose a board with index less than or equal to \(k\).
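To make the max/min structure of (214) concrete, here is a schematic single update computed from precomputed grid values of \(J_i(u_i^{\varepsilon})(\cdot,t-\varepsilon^2)\); the arrays in the toy usage are placeholders, and the boards are indexed from 0.

```python
import numpy as np

def n_board_update(J_vals):
    """One step of the DPP (214): on board k, Player I's best option is the
    max over boards i >= k, Player II's is the min over boards l <= k
    (0-based board indices)."""
    stacked = np.stack(J_vals)                    # shape (n_boards, n_points)
    return [0.5 * stacked[k:].max(axis=0) + 0.5 * stacked[:k + 1].min(axis=0)
            for k in range(len(J_vals))]

# toy usage: three ordered boards on a five-point grid
J_vals = [np.linspace(3, 4, 5), np.linspace(1, 2, 5), np.linspace(0, 1, 5)]
u_new = n_board_update(J_vals)
```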

The functions \((u_1^{\varepsilon}, \cdots, u_n^{\varepsilon})\) converge uniformly as \(\varepsilon \to 0\) (along a subsequence) to continuous functions \(\{u_k\}_{1\leq k\leq n}\) that are viscosity solutions to the following parabolic \(n\) membranes problem, \[\label{EDEXn} \left\lbrace \begin{array}{ll} \displaystyle u_k(x,t) \geq u_{k+1}(x,t) \qquad & \ \Omega\times(0,T), \\[8pt] \displaystyle L_k(u_k)\geq 0, \quad \quad L_{k+l}(u_{k+l})\leq 0 \qquad & \ \{u_{k-1}>u_k\equiv u_{k+1}\equiv\dots\equiv u_{k+l}>u_{k+l+1}\},\\[8pt] \displaystyle L_k(u_k)+L_{k+l}(u_{k+l}) =0 & \ \{u_{k-1}>u_k\equiv u_{k+1}\equiv\dots\equiv u_{k+l}>u_{k+l+1}\},\\[8pt] \displaystyle L_k(u_k)=0 & \ \{u_{k-1}>u_k>u_{k+1}\},\\[8pt] u_k(x,t) = w_k(x,t) \qquad & \ (\partial\Omega\times(0,T))\cup(\overline{\Omega}\times\{0\}). \end{array} \right. \tag{215}\] for \(1\leq k\leq n\).

Notice that here the extra condition \[L_k(u_k)+L_{k+l}(u_{k+l}) =0, \qquad \ (x,t)\in\{u_{k-1}>u_k\equiv u_{k+1}\equiv\dots\equiv u_{k+l}>u_{k+l+1}\},\] appears.

7.3. Playing with an unfair coin modifies the extra condition

One can also deal with the game in which the coin toss used to decide whether the player gets the choice to change boards is not fair. Assume that the coin is tossed in the first board with probabilities \(\gamma\) and \((1-\gamma)\), and in the second board with the reversed probabilities, \((1-\gamma)\) and \(\gamma\). In this case the equations involved in the DPP read \[\label{DPPEXT.rem} \left\lbrace \begin{array}{ll} \displaystyle u^{\varepsilon}(x,t)=\gamma \max\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} + (1-\gamma) J_1(u^{\varepsilon})(x,t-\varepsilon^2) & \qquad (x,t)\in\Omega\times(0,T), \\[8pt] \displaystyle v^{\varepsilon}(x,t)=(1-\gamma) \min\Big\{ J_1(u^{\varepsilon})(x,t-\varepsilon^2), J_2(v^{\varepsilon})(x,t-\varepsilon^2)\Big\} +\gamma J_2(v^{\varepsilon})(x,t-\varepsilon^2) & \qquad (x,t)\in\Omega\times(0,T). \end{array} \right. \tag{216}\]
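In a discretized setting, one step of the DPP (216) is just a transcription of these two formulas; here J1 and J2 stand for precomputed grid values of \(J_1(u^{\varepsilon})\) and \(J_2(v^{\varepsilon})\) at time \(t-\varepsilon^2\).

```python
import numpy as np

def unfair_coin_update(J1, J2, gamma):
    """One step of the DPP (216): with probability gamma the maximizing choice
    is taken on board 1, and with probability 1 - gamma the minimizing choice
    is taken on board 2."""
    u_new = gamma * np.maximum(J1, J2) + (1 - gamma) * J1
    v_new = (1 - gamma) * np.minimum(J1, J2) + gamma * J2
    return u_new, v_new
```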

In this case, the value functions converge, up to a subsequence, to a pair of functions \((u,v)\) that is a viscosity solution to equation (19) with the extra condition \[\label{extra-cond.intro.rem} \gamma \left(\frac{\partial u}{\partial t}(x,t)-\Delta_{p}^{1}u(x,t)- h_1(x,t)\right)+ (1-\gamma)\left(\frac{\partial v}{\partial t}(x,t)-\Delta_q^1 v(x,t)- h_2(x,t)\right) =0, \qquad \ (x,t)\in \Omega\times(0,T). \tag{217}\]

References

  1. Doob, J. L. (1971). What is a Martingale?. The American Mathematical Monthly, 78(5), 451-463.

  2. Doob, J. L. (2001). Classical Potential Theory and Its Probabilistic Counterpart. Springer Berlin, Heidelberg.

  3. Doob, J. L. (1954). Semimartingales and subharmonic functions. Transactions of the American Mathematical Society, 77(1), 86-121.

  4. Knapp, A. W. (1965). Connection between Brownian motion and potential theory. Journal of Mathematical Analysis and Applications, 12(2), 328-349.

  5. Williams, D. (1991). Probability With Martingales. Cambridge University Press.

  6. Peres, Y., Schramm, O., Sheffield, S., & Wilson, D. (2009). Tug-of-war and the infinity Laplacian. Journal of the American Mathematical Society, 22(1), 167-210.

  7. Manfredi, J. J., Parviainen, M., & Rossi, J. D. (2012). Dynamic programming principle for tug-of-war games with noise. ESAIM: Control, Optimisation and Calculus of Variations, 18(1), 81-90.

  8. Manfredi, J. J., Parviainen, M., & Rossi, J. D. (2012). On the definition and properties of \(p\)-harmonious functions. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, 11(2), 215-241.

  9. Blanc, P., & Rossi, J. D. (2019). Game Theory and Partial Differential Equations (Vol. 31). Walter de Gruyter GmbH & Co KG.

  10. Lewicka, M. (2020). A Course on Tug-of-War Games With Random Noise. Springer International Publishing.

  11. Akagi, G., Juutinen, P., & Kajikiya, R. (2009). Asymptotic behavior of viscosity solutions for a degenerate parabolic equation associated with the infinity-Laplacian. Mathematische Annalen, 343(4), 921-953.

  12. Akagi, G., & Suzuki, K. (2007). On a certain degenerate parabolic equation associated with the infinity-Laplacian. Discrete and Continuous Dynamical Systems, 2007, 18-27.

  13. Akagi, G., & Suzuki, K. (2008). Existence and uniqueness of viscosity solutions for a degenerate parabolic equation associated with the infinity-Laplacian. Calculus of Variations and Partial Differential Equations, 31(4), 457-471.

  14. Liu, F., & Jiang, F. (2019). Parabolic biased infinity Laplacian equation related to the biased tug-of-war. Advanced Nonlinear Studies, 19(1), 89-112.

  15. Del Pezzo, L. M., & Rossi, J. D. (2014). Tug-of-War games and parabolic problems with spatial and time dependence. Differential and Integral Equations, 27(3-4), 269-288.

  16. Manfredi, J. J., Parviainen, M., & Rossi, J. D. (2010). An asymptotic mean value characterization for a class of nonlinear parabolic equations related to tug-of-war games. SIAM Journal on Mathematical Analysis, 42(5), 2058-2081.

  17. Miranda, A., & Rossi, J. D. (2020). A game theoretical approach for a nonlinear system driven by elliptic operators. SN Partial Differential Equations and Applications, 1(4), 14.

  18. Miranda, A., & Rossi, J. D. (2023). A game theoretical approximation for a parabolic/elliptic system with different operators. Discrete & Continuous Dynamical Systems: Series A, 43(3-4), 1625–1656.

  19. Mitake, H., & Tran, H. (2017). Weakly coupled systems of the infinity Laplace equations. Transactions of the American Mathematical Society, 369(3), 1773-1795.

  20. Caffarelli, L., De Silva, D., & Savin, O. (2017). The two membranes problem for different operators. Annales de l'Institut Henri Poincaré C, Analyse Non Linéaire, 34(4), 899-932.

  21. Caffarelli, L., Duque, L., & Vivas, H. (2018). The two membranes problem for fully nonlinear operators. Discrete & Continuous Dynamical Systems: Series A, 38(12), 6015–6027.

  22. Vivas, H. A. (2019). The two membranes problem for fully nonlinear local and nonlocal operators (Doctoral dissertation). https://repositories.lib.utexas.edu/handle/2152/74361.

  23. Miranda, A., & Rossi, J. D. (2023). Games for the two membranes problem. Orbita Mathematicae, 1(1), 59-101.

  24. Manfredi, J., Parviainen, M., & Rossi, J. (2010). An asymptotic mean value characterization for \(p\)-harmonic functions. Proceedings of the American Mathematical Society, 138(3), 881-889.

  25. Crandall, M. G., Ishii, H., & Lions, P. L. (1992). User’s guide to viscosity solutions of second order partial differential equations. Bulletin of the American Mathematical Society, 27(1), 1-67.

  26. Blanc, P., Pinasco, J., & Rossi, J. (2017). Maximal operators for the p-Laplacian family. Pacific Journal of Mathematics, 287(2), 257-295.