
#### 1. Problem formulation

Let $\Omega\subset\mathbb{R}^d$ be an open and bounded Lipschitz domain and consider the parameter-dependent parabolic equation with Dirichlet boundary conditions

$$\begin{cases} y_t - \mathrm{div}(a(\nu,x)\nabla y) + c(x)\,y = u\,\chi_\omega & \text{in } \Omega\times(0,T),\\ y = 0 & \text{on } \partial\Omega\times(0,T),\\ y(\cdot,0) = y_0 & \text{in } \Omega, \end{cases} \tag{1}$$

where $y=y(x,t)$ is the state, $u=u(x,t)$ is a control function acting on an open subset $\omega\subset\Omega$, and $y_0\in L^2(\Omega)$ is a given initial datum.

In (1), $a=a(\nu,x)$ is a scalar coefficient depending on some parameter $\nu\in\mathcal{K}$, with $\mathcal{K}$ a compact set of parameters, and the potential $c=c(x)$ is given. To abridge the notation, we will denote $a_\nu:=a(\nu,\cdot)$.

Now, consider the following associated optimal control problem

$$\min_{u\in L^2(\omega\times(0,T))} J_T(u) = \frac{1}{2}\int_0^T\!\!\int_\omega |u|^2\,dx\,dt + \frac{\beta}{2}\int_0^T\!\!\int_\Omega |y-y_d|^2\,dx\,dt, \tag{2}$$

where $y$ is the solution to (1), $y_d\in L^2(\Omega)$ is a desired observation and $\beta>0$ is given.

It is classical to prove (see e.g., [3]) that the minimization problem (2) has a unique optimal solution that hereinafter we denote by $u_T$. Moreover, the optimal control is given by

$$u_T = -p\,\chi_\omega,$$

where the adjoint state $p$ can be found from the solution of the optimality system

$$\begin{cases} y_t - \mathrm{div}(a_\nu\nabla y) + c\,y = -p\,\chi_\omega & \text{in } \Omega\times(0,T),\\ -p_t - \mathrm{div}(a_\nu\nabla p) + c\,p = \beta\,(y - y_d) & \text{in } \Omega\times(0,T),\\ y = p = 0 & \text{on } \partial\Omega\times(0,T),\\ y(\cdot,0) = y_0,\quad p(\cdot,T) = 0 & \text{in } \Omega. \end{cases} \tag{3}$$

In this work, we use a combination of (weak) greedy methods and the so-called *turnpike* property to determine the most relevant values in the parameter space, providing the best possible approximation of the set of parameter-dependent optimal controls.

Firstly, we will use the *turnpike* property to reduce the computation of the time-dependent optimal controls to that of their asymptotic (steady) simplifications.

To this end, we begin by considering the stationary version of the state equation

$$\begin{cases} -\mathrm{div}(a_\nu\nabla \bar y) + c\,\bar y = \bar u\,\chi_\omega & \text{in } \Omega,\\ \bar y = 0 & \text{on } \partial\Omega. \end{cases} \tag{4}$$

Now, consider the corresponding minimization problem

$$\min_{\bar u\in L^2(\omega)} \bar J(\bar u) = \frac{1}{2}\int_\omega |\bar u|^2\,dx + \frac{\beta}{2}\int_\Omega |\bar y - y_d|^2\,dx. \tag{5}$$

As before, one can prove that, since $\bar J$ is strictly convex, continuous and coercive, the minimization problem (5) has a unique optimal solution $(\bar u, \bar y)$, where the optimal control is characterized by

$$\bar u = -\bar p\,\chi_\omega,$$

and $(\bar y, \bar p)$ solves the optimality system

$$\begin{cases} -\mathrm{div}(a_\nu\nabla\bar y) + c\,\bar y = -\bar p\,\chi_\omega & \text{in } \Omega,\\ -\mathrm{div}(a_\nu\nabla\bar p) + c\,\bar p = \beta\,(\bar y - y_d) & \text{in } \Omega,\\ \bar y = \bar p = 0 & \text{on } \partial\Omega. \end{cases} \tag{6}$$

A natural question arising in this context is to which extent the long-horizon optimal controls and states approach the steady ones as $T\to\infty$. According to [4], we have the following result:

**Theorem 1.1**

Let us consider the control problem (2) and let $(u_T, y_T)$ be the optimal control and state. Then, there exist constants $C,\mu>0$, independent of $T$, such that

$$\|u_T(t) - \bar u\|_{L^2(\omega)} + \|y_T(t) - \bar y\|_{L^2(\Omega)} \le C\left(e^{-\mu t} + e^{-\mu (T-t)}\right), \qquad t\in(0,T), \tag{7}$$

where $(\bar u, \bar y)$ is the optimal control and state corresponding to (6).

This means that, as $T$ tends to infinity, the finite-horizon control problems converge exponentially to their steady-state version.
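This behavior can be observed numerically even in the simplest setting. The following sketch (all parameter values and names are illustrative, not taken from the paper) solves a discretized scalar linear-quadratic problem for $y' = -a\,y + u$, $y(0)=0$, with cost $\frac12\int_0^T u^2\,dt + \frac{\beta}{2}\int_0^T (y-z)^2\,dt$, and compares the mid-horizon control with the steady optimum $\bar u = \beta a z/(a^2+\beta)$:

```python
import numpy as np

# Finite-horizon LQ problem for the scalar ODE y' = -a y + u, y(0) = 0,
# discretized by implicit Euler (illustrative values, not from the paper).
a, beta, z, T, N = 1.0, 1.0, 1.0, 10.0, 400
dt = T / N
r = 1.0 / (1.0 + a * dt)          # implicit-Euler propagation factor

# The state depends linearly on the controls: y = G u (since y(0) = 0)
G = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = dt * r ** (k - j + 1)

# minimize (dt/2)(||u||^2 + beta ||G u - z||^2)  ->  normal equations
u = np.linalg.solve(np.eye(N) + beta * G.T @ G,
                    beta * G.T @ (z * np.ones(N)))

# steady problem: min u^2/2 + (beta/2)(u/a - z)^2  ->  closed-form optimum
u_bar = beta * a * z / (a**2 + beta)
```

In the middle of the horizon the computed control sits on the "turnpike" `u_bar`, while near the final time it drops off, reflecting the boundary layers in estimate (7).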

Thanks to Theorem 1.1, we can now focus our attention on the steady system (4). We begin by assuming uniform ellipticity and Lipschitz parameter-dependence hypotheses on the coefficient $a$, hereinafter denoted by **H1** and **H2**.

Then, the main idea is to propose a methodology to determine an optimal selection of a finite number of realizations of the parameter $\nu$ so that all controls, for all possible values of $\nu$, are optimally approximated. More precisely, the problem can be formulated as follows:

**Problem 1.2**

Let us consider the set of controls verifying (5) for each possible value $\nu\in\mathcal{K}$. That is,

$$\mathcal{U} := \{\bar u_\nu \;:\; \nu\in\mathcal{K}\}. \tag{8}$$

This control set is compact in $L^2(\omega)$.

Given $\varepsilon>0$, we seek to determine a family of parameters $\{\nu_1,\dots,\nu_n\}$ in $\mathcal{K}$, whose cardinality $n$ depends on $\varepsilon$, so that the corresponding controls, denoted by $\bar u_{\nu_1},\dots,\bar u_{\nu_n}$, are such that for every $\nu\in\mathcal{K}$ there exists $u^\star\in\mathrm{span}\{\bar u_{\nu_1},\dots,\bar u_{\nu_n}\}$ such that $\|\bar u_\nu - u^\star\|_{L^2(\omega)}\le\varepsilon$.

#### 2. Greedy optimal control for elliptic problems

##### 2.1. Preliminaries on (weak) greedy algorithms

In this section, we present a short introduction to the linear approximation theory of parametric problems based on (weak) greedy algorithms. For a more detailed presentation we refer, for instance, to [1, 2]. In general, the goal is to approximate a compact set $\mathcal{S}$ in a Banach space $X$ by a sequence of finite-dimensional subspaces $V_n$ of dimension $n$.

The algorithm reads as follows.

The algorithm produces a finite-dimensional space $V_n$ that approximates the set $\mathcal{S}$ within precision $\varepsilon$.

The best choice of an approximating space is the one producing the smallest approximation error. This smallest error for a compact set $\mathcal{S}$ is called the *Kolmogorov $n$-width of $\mathcal{S}$*, and is defined as

$$d_n(\mathcal{S}) := \inf_{\substack{V\subset X \\ \dim V = n}}\ \sup_{x\in\mathcal{S}}\ \inf_{v\in V}\ \|x - v\|_X.$$

It measures how well $\mathcal{S}$ can be approximated by a subspace of $X$ of a fixed dimension $n$.

#### Algorithm 1: Weak greedy algorithm

**initialize:** fix a constant $\gamma\in(0,1]$ and a tolerance parameter $\varepsilon>0$;

- In the first step, choose $x_1\in\mathcal{S}$ such that $\|x_1\|_X \ge \gamma\,\sup_{x\in\mathcal{S}}\|x\|_X$;
- At the general step, having found $x_1,\dots,x_n$, denote $V_n := \mathrm{span}\{x_1,\dots,x_n\}$ and choose $x_{n+1}\in\mathcal{S}$ such that $\mathrm{dist}(x_{n+1},V_n) \ge \gamma\,\sup_{x\in\mathcal{S}}\mathrm{dist}(x,V_n)$;
- repeat until $\sup_{x\in\mathcal{S}}\mathrm{dist}(x,V_n)\le\varepsilon$.
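On a finite training set, the algorithm above can be sketched in a few lines. The snapshot family `exp(-nu*x)` below is a toy stand-in for the control manifold, chosen only because its analytic dependence on the parameter yields the fast width decay the theory predicts:

```python
import numpy as np

def weak_greedy(S, eps, gamma=1.0):
    """Weak greedy selection over the columns of S (a finite training set).

    Returns the selected column indices and the history of greedy errors
    max_x dist(x, V_n); gamma = 1 recovers the pure greedy algorithm."""
    S = np.asarray(S, dtype=float)
    m, K = S.shape
    Q = np.zeros((m, 0))                      # orthonormal basis of V_n
    selected, errors = [], []
    while True:
        R = S - Q @ (Q.T @ S)                 # residuals w.r.t. span(Q)
        dists = np.linalg.norm(R, axis=0)
        err = dists.max()
        errors.append(err)
        if err <= eps or len(selected) == K:
            return selected, errors
        # weak greedy choice: first element within a factor gamma of the max
        k = int(np.argmax(dists >= gamma * err))
        selected.append(k)
        Q = np.column_stack([Q, R[:, k] / dists[k]])

# toy "control manifold": snapshots u(x; nu) = exp(-nu*x), an analytic
# family, so the Kolmogorov widths (and greedy errors) decay exponentially
x = np.linspace(0.0, 1.0, 50)
nus = np.linspace(1.0, 10.0, 100)
S = np.exp(-np.outer(x, nus))
selected, errors = weak_greedy(S, eps=1e-6)
```

The greedy errors are non-increasing and, for this analytic family, reach the tolerance after selecting only a small fraction of the 100 training snapshots.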

##### 2.2. Definition of the residual

The main goal of this work is to apply the greedy approach to the family of parameter-dependent steady-state optimal control problems

$$\min_{u\in L^2(\omega)} J_\nu(u), \tag{10}$$

where $J_\nu$ is the cost functional given by

$$J_\nu(u) = \frac{1}{2}\int_\omega |u|^2\,dx + \frac{\beta}{2}\int_\Omega |y_\nu - y_d|^2\,dx,$$

while $y_\nu$ is the solution to (4).

Essential for an effective application of a greedy algorithm is the construction of a residual by which one can estimate the distance between two (possibly unknown) controls through some easily computable quantity.

In this way, we introduce the residual operator as

(11)

The previously introduced residual enables us to construct a weak greedy algorithm for an effective construction of an approximating linear space of the control manifold $\mathcal{U}$. The precise description of the algorithm is given below.

We can summarize the obtained results in the following theorem:

**Theorem 2.1**

The proposed greedy algorithm provides a priori estimates on the approximate control and the approximate optimal state.

#### 3. Numerical experiments

We present here two numerical examples where the greedy approach is applied. To illustrate the procedure, we consider a two-dimensional domain $\Omega$ and use an elementary finite difference (FD) method: the elliptic operator, with homogeneous Dirichlet boundary conditions, is approximated by the standard 5-point discretization.
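For reference, a minimal sketch of the 5-point discretization, assuming a unit square with a uniform grid and constant unit diffusivity (the paper's operator with variable coefficient $a(\nu,x)$ and potential $c$ is a straightforward generalization of this Kronecker-sum construction):

```python
import numpy as np

def laplacian_2d(n, L=1.0):
    """5-point finite-difference matrix for -Laplacian with homogeneous
    Dirichlet conditions on the square (0, L)^2, n interior points per
    direction; built as the Kronecker sum of 1-D second differences."""
    h = L / (n + 1)
    T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    I = np.eye(n)
    return np.kron(I, T) + np.kron(T, I)

A = laplacian_2d(30)
# sanity check: the smallest Dirichlet eigenvalue on the unit square is 2*pi^2
lam_min = np.linalg.eigvalsh(A)[0]
```

For larger grids one would assemble `A` in sparse format, but the dense version suffices to check consistency against the known spectrum.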

##### 3.1. Greedy test #1

Let us consider a coefficient of the form

(12)

(13)

In this way, we can readily verify that $a$ satisfies **H1** and **H2**.

On the other hand, we take the coefficient $c$ as a function which clearly fulfills the condition $c\ge 0$. For the optimal control problem, we fix the penalty parameter $\beta$, while the desired target $y_d$ is chosen independently of $\nu$.

We take as control region $\omega$ the subset of the plane depicted in Figure 2.B.

#### Algorithm 2: Greedy control algorithm – offline part

**Initialize:** fix the approximation error $\varepsilon>0$.

- **STEP 1 (discretization).** Choose a finite subset $\tilde{\mathcal{K}}\subset\mathcal{K}$ fine enough with respect to a given discretization constant.
- **STEP 2 (choosing $\nu_1$).** Check the stopping inequality. If it is satisfied, stop the algorithm. Otherwise, choose the first parameter value $\nu_1$ as the maximizer of the residual over $\tilde{\mathcal{K}}$, and find the corresponding optimal primal and dual states $(\bar y_{\nu_1},\bar p_{\nu_1})$.
- **STEP 3 (choosing $\nu_{n+1}$).** Having chosen $\nu_1,\dots,\nu_n$, calculate the residual for each $\nu\in\tilde{\mathcal{K}}$ and check the approximation criterion. If the inequality is satisfied, stop the algorithm. Otherwise, determine the next parameter value $\nu_{n+1}$ as the maximizer of the residual, find the corresponding optimal primal and dual states, and repeat STEP 3.

Since the functional is quadratic and coercive, a standard conjugate gradient (CG) algorithm is a simple choice to solve the minimization problem (5). For a given $\nu$, one can directly solve the minimization problem (10). The average time for computing the corresponding control up to a given tolerance using CG is around six seconds.
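Minimizing a coercive quadratic functional $J(u)=\frac12 u^\top A u - b^\top u$ by CG is equivalent to solving the linear system $Au=b$. A self-contained sketch (the SPD matrix below is a random, hypothetical stand-in for the discrete optimality operator):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxit=500):
    """Minimize J(u) = 0.5 u^T A u - b^T u with A SPD by conjugate
    gradients, i.e. solve A u = b; returns u and the iteration count."""
    u = np.zeros_like(b)
    r = b - A @ u                 # residual = -grad J(u)
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact line search along p
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            return u, k + 1
        p = r + (rs_new / rs) * p # new A-conjugate search direction
        rs = rs_new
    return u, maxit

# SPD test problem: a hypothetical, well-conditioned discrete operator
rng = np.random.default_rng(0)
M = rng.standard_normal((80, 80))
A = M @ M.T + 80 * np.eye(80)     # coercive by construction
b = rng.standard_normal(80)
u, nit = conjugate_gradient(A, b)
```

Because the system here is well conditioned, CG converges in far fewer than the dimension-many iterations guaranteed in exact arithmetic.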

Hypotheses **H1** and **H2** allow us to implement a *naive* approach for approximating the parameterized control set $\mathcal{U}$. This approach consists in discretizing the parameter set on a very fine mesh, which we denote by $\tilde{\mathcal{K}}$, and then computing the corresponding control for each parameter in this finite set.

Let us take $\tilde{\mathcal{K}}$ as the uniform discretization of the interval (13) in 100 values. Then, the *naive* approach amounts to solving the minimization problem (10) 100 different times. The whole process takes around 600 seconds.

We apply the greedy procedure described in Algorithm 2 over the set $\tilde{\mathcal{K}}$, choosing a prescribed precision $\varepsilon$. The algorithm stops after seven iterations, selecting 7 (out of 100) parameter values. The offline part of the greedy algorithm takes 476 seconds.

The way the parameters are chosen for this test is illustrated in Figure 1.A. In Figure 1.B, we plot the approximation rate of the greedy algorithm. The plot suggests an exponential decay of the approximation rate, which is in accordance with the greedy theory.

Once the offline part is completed, we can construct (approximate) optimal controls by choosing a suitable linear combination of the stored optimal states.
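As a simplified illustration of this online step, one can project a new control onto the span of the stored snapshots by least squares (in the actual method the combination coefficients come from a reduced optimality system; the snapshot functions below are hypothetical):

```python
import numpy as np

def online_control(U_snap, u_target):
    """Best approximation of u_target in the span of the columns of
    U_snap (stored greedy snapshots), computed by least squares."""
    coeffs, *_ = np.linalg.lstsq(U_snap, u_target, rcond=None)
    return U_snap @ coeffs, coeffs

# three hypothetical snapshot controls sampled on a 1-D grid
xg = np.linspace(0.0, 1.0, 200)
U_snap = np.stack([np.sin(np.pi * xg),
                   np.sin(2 * np.pi * xg),
                   xg * (1.0 - xg)], axis=1)

# a new control lying in the span is recovered exactly by the online step
u_new = 0.3 * U_snap[:, 0] - 1.2 * U_snap[:, 2]
u_star, coeffs = online_control(U_snap, u_new)
```

The cost of this step is negligible next to a full PDE-constrained minimization, which is the source of the offline/online speed-up reported below.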

In Figure 2, we plot the approximation of the real control obtained with the greedy algorithm for a given value of $\nu$. The approximate control is nearly identical to the one obtained by directly minimizing functional (5) for the associated value of $\nu$; in fact, for this particular experiment the difference between the two controls is small.

In spite of obtaining almost the same solution, the conjugate gradient method takes 5.4 s to converge, compared to 0.488 s for the online greedy part. This also shows the computational efficiency of the proposed algorithm.

In Figure 3.A, we plot the solution to (4) using the approximate control. As for the control, we can compute the difference between the real optimal state and the approximate optimal state; in this particular test, the corresponding error is likewise small.

##### 3.2. Greedy test #2

Here, we present a series of experiments for the case when the potential $c$ takes negative values. This will be of particular interest in the discussion of the following section.

We consider again the coefficient (12) and the parameter within the range (13), as in the previous test. For this particular case, we choose a constant potential $c$ and define the desired target $y_d$ as follows:

We will use as control region the shape depicted in Figure 4. For this particular test, the average time for computing the optimal control for different values of $\nu$ is around nine seconds, so implementing the *naive* approach on the refined mesh would require at least 900 seconds.

Using our computational tool, we choose again the same approximation tolerance $\varepsilon$. The greedy algorithm stops after nine iterations. We present in Figure 5.A the selected parameters in the order they were chosen by the program. The elapsed time for completing the offline process is 678 seconds.

In Figure 6 we plot the approximation of the optimal control obtained by the greedy method for a chosen value of $\nu$, while in Figure 7 we show the corresponding controlled state and a comparison with the target $y_d$. For this particular test, the elapsed time to compute the online control is 0.406 seconds, while the convergence of the CG to compute the optimal control takes 15.9 seconds. The approximation errors with respect to the real control, and for the state using the real and the approximate control, are again small.

In Figure 7, we observe that the approximation to the desired target is (to a certain extent) better than the one obtained in greedy test #1. This is because the chosen target does not change sign in the domain (cf. Figure 3.B).

Here, we present a series of experiments related to the (approximate) optimal steady controls obtained in Sections 3.1 and 3.2. We use them to control the corresponding evolution equation and test their efficiency. The associated time-dependent cost functional, denoted by (17), is minimized subject to the state $y$ being the solution to the parabolic problem

(18)

We will differentiate two main cases to be studied.

#### The case $c\ge 0$

In addition to the parameters already set in the greedy test #1 (see Section 3.1), let us fix a time horizon $T$ and an initial datum $y_0$ for the evolution problem.

In this stage, we are going to use the (approximate) optimal steady control $\bar u^\star$ as a time-independent control for system (18). More precisely, we take

$$u(x,t) := \bar u^\star(x), \qquad t\in(0,T). \tag{19}$$

By plugging such a control into equation (18), we obtain the controlled solution displayed in Figure 8. We see that this control steers the solution away from the initial condition and, in a short amount of time, reaches a region near the optimal steady state (cf. Figure 8.C).

For this experiment, the efficiency of the time-independent control is closely related to the fact that $c\ge 0$. In particular, the conditions on the coefficients $a$ and $c$ allow one to prove that the uncontrolled system (18) is exponentially stable regardless of the initial datum.

In Figure 9, we illustrate the efficiency of the steady control by plotting different curves representing the time evolution of the $L^2$-norm of the solution to (18) for different initial data, taking (19) as the control. For comparison, we have computed the time-dependent solution associated to the optimal control $u_T$, obtained by minimizing (17), for the given parameter $\nu$ and horizon $T$.
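The stability mechanism behind these curves can be sketched in one dimension: with the (semi-discrete) operator SPD, as in the case $c\ge 0$, implicit Euler with a time-independent control contracts toward the associated steady state (the control below is a hypothetical placeholder, not one computed by the greedy method):

```python
import numpy as np

def evolve(A, u_bar, y0, dt=1e-2, nsteps=500):
    """Implicit Euler for y' = -A y + u_bar with a time-independent
    control; A SPD corresponds to the stable case c >= 0."""
    n = len(y0)
    M = np.linalg.inv(np.eye(n) + dt * A)   # (I + dt A)^{-1}, fine for small n
    y = y0.copy()
    traj = [y0.copy()]
    for _ in range(nsteps):
        y = M @ (y + dt * u_bar)
        traj.append(y.copy())
    return np.array(traj)

# 1-D Dirichlet Laplacian as a stand-in for the elliptic operator
n = 50
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u_bar = np.ones(n)                    # hypothetical steady control
y_ss = np.linalg.solve(A, u_bar)      # associated steady state: A y = u_bar
traj = evolve(A, u_bar, np.zeros(n))
dist = np.linalg.norm(traj - y_ss, axis=1)
```

Since the steady state is a fixed point of the scheme and the propagation matrix has spectral norm below one, the distance `dist` decays monotonically, mirroring the behavior in Figure 9.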

#### The case $c<0$

Let us take the parameters and results of the greedy test #2 shown in Section 3.2, together with the same time horizon $T$ and initial datum $y_0$. As before, we can put the (approximate) steady control in system (18) and test its performance. Recall that in this test the potential $c$ takes negative values. We show in Figure 10 the solution to the evolution problem using the steady control; we see that in this case the steady control fails to stabilize the system around the steady state shown in Figure 10.

In Figure 11, we present further experiments illustrating that, for some initial data, the controlled solution grows exponentially. In fact, only for initial data in a sufficiently small neighborhood of the optimal steady state is the steady control effective to control the underlying system.

For this particular case, to see the turnpike property it is necessary to compute the solution to the time-dependent minimization problem (17). We show in Figure 12 the time evolution of the optimal controlled state and control and, as expected, we can see the asymptotic simplification towards the steady state and control for most of the time horizon. However, we can also see that the control makes a large effort at the beginning of the time interval to move the system towards this steady state.

Unlike the case $c\ge 0$, there might be some coefficients such that steady controls, and in particular the approximate controls derived from the greedy approach, are not enough to control the evolution system.

#### Bibliography

**[1]** A. Cohen and R. DeVore. *Approximation of high-dimensional parametric PDEs.* Acta Numerica, 24 (2015), 1–159.

**[2]** R. DeVore. *The Theoretical Foundation of Reduced Basis Methods.* Preprint, 2005.

**[3]** J.-L. Lions. *Optimal control of systems governed by partial differential equations.* Springer-Verlag, 1971.

**[4]** A. Porretta and E. Zuazua. *Long time versus steady state optimal control.* SIAM Journal on Control and Optimization, 51, 6 (2013), 4242–4273.