MPT 2.6 Changelog



"Design your own MPC"

This is the most useful feature in the whole history of MPT! And the credits go to Johan Löfberg and his YALMIP. The function mpt_ownmpc allows you to add (almost) arbitrary constraints to an MPC setup and to define a custom objective function.

First we explain the general usage of the new function. The design of custom MPC controllers is divided into three parts:
  1. Design phase. In this part, general constraints and a corresponding cost function are designed.
  2. Modification phase. In this part, the user is allowed to add custom constraints and/or to modify the cost function.
  3. Computation phase. In this part, either an explicit or an on-line controller which respects the user constraints is computed.
Design phase:
The aim of this step is to obtain the constraints which define a given MPC setup, along with the associated cost function and the variables which represent system states, inputs and outputs at various prediction steps. In order to obtain said elements for the case of explicit MPC controllers, call:
>> [CON, OBJ, VARS] = mpt_ownmpc(sysStruct, probStruct)
or, for on-line MPC controllers, call:
>> [CON, OBJ, VARS] = mpt_ownmpc(sysStruct, probStruct, 'online')
Here the variable CON represents a set of constraints, OBJ denotes the optimization objective and VARS is a structure with the fields VARS.x (predicted states), VARS.u (predicted inputs) and VARS.y (predicted outputs). Each element is given as a cell array, where each element corresponds to one step of the prediction (i.e. VARS.x{1} denotes the initial state x0, VARS.x{2} is the first predicted state x1, etc.) If a particular variable is a vector, it can be indexed directly to refer to a particular element, e.g. VARS.x{3}(1) refers to the first element of the 2nd predicted state (i.e. x2).

Modification phase:
Now you can start modifying the MPC setup by adding your own constraints and/or by modifying the objective. See examples below for more information about this topic.

Computation phase:
Once you have modified the constraints and/or the objective according to your needs, you can compute an explicit controller by
>> ctrl = mpt_ownmpc(sysStruct, probStruct, CON, OBJ, VARS)
or an on-line MPC controller by
>> ctrl = mpt_ownmpc(sysStruct, probStruct, CON, OBJ, VARS, 'online')
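Put together, the three phases might look like the following sketch (the particular constraint added in the modification phase is just an illustration, taken from the equality-constraints example below; any valid sysStruct/probStruct pair will do):
```matlab
% Phase 1: design -- obtain constraints, objective and variables
[CON, OBJ, VARS] = mpt_ownmpc(sysStruct, probStruct);

% Phase 2: modification -- e.g. force the sum of all inputs to zero
CON = CON + set(sum([VARS.u{:}]) == 0);

% Phase 3: computation -- compute the explicit controller
ctrl = mpt_ownmpc(sysStruct, probStruct, CON, OBJ, VARS);
```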


Example: Polytopic constraints 1
Say we would like to introduce polytopic constraints of the form H*x(k) <= K on each predicted state, including the initial state x(0). To do that, we simply add these constraints to our set CON:
for k = 1:length(VARS.x)
  CON = CON + set(H * VARS.x{k} <= K);
end
Believe it or not, this is all you need to do to define polytopic constraints. Easy, isn't it? You can now proceed with the computation phase, which will give you a controller that respects the given constraints.

Example: Polytopic constraints 2
We now extend the previous example and add the specification that polytopic constraints should only be applied on the 1st, 3rd and 4th predicted state, i.e. on x(1), x(3) and x(4). It is important to notice that the variables contained in the VARS structure are organized in cell arrays, where the first element of VARS.x corresponds to x(0), i.e. to the initial condition. Therefore, to meet our specifications, we would write the following code:
for k = [1 3 4],
  % VARS.x{1} corresponds to x(0)
  % VARS.x{2} corresponds to x(1)
  % VARS.x{3} corresponds to x(2)
  % VARS.x{4} corresponds to x(3)
  % VARS.x{5} corresponds to x(4)
  % VARS.x{6} corresponds to x(5)
  CON = CON + set(H * VARS.x{k+1} <= K);
end    

Example: Move blocking
Say we want to use a more complicated move blocking scheme with the following properties: u_0==u_1, (u_1-u_2)==(u_2-u_3), and u_3==K*x_2. These requirements can be implemented by
% VARS.u{1} corresponds to u(0)
% VARS.u{2} corresponds to u(1), and so on

% u_0 == u_1
>> CON = CON + set(VARS.u{1} == VARS.u{2});

% (u_1-u_2) == (u_2-u_3)
>> CON = CON + set((VARS.u{2}-VARS.u{3}) == (VARS.u{3}-VARS.u{4}));

% u_3 == K*x_2
>> CON = CON + set(VARS.u{4} == K * VARS.x{3});

Example: Mixed constraints
As illustrated in the move blocking example above, one can easily create constraints which involve variables at various stages of the prediction. In addition, it is also possible to add constraints which involve different types of variables. For instance, we may want to add a constraint that the sum of control inputs and system outputs at each step must be between certain bounds. This specification can be expressed by:
for k = 1:length(VARS.u)
  CON = CON + set(lowerbound < VARS.y{k} + VARS.u{k} < upperbound);
end

Example: Equality constraints
Say we want to add a constraint that the sum of all predicted control actions along the prediction horizon should be equal to zero. This can easily be done by
>> CON = CON + set((VARS.u{1}+VARS.u{2}+VARS.u{3}+...+VARS.u{end}) == 0);
or, in a vector notation:
>> CON = CON + set(sum([VARS.u{:}]) == 0);

Example: Constraints involving norms
We can extend the previous example and add a specification that the sum of absolute values of all predicted control actions should be less than some given bound. To achieve this goal, we can make use of the 1-norm function, which exactly represents the sum of absolute values of each element:
>> CON = CON + set(norm([VARS.u{:}], 1) <= bound);
The same can of course be expressed in a more natural form:
>> CON = CON + set(sum(abs([VARS.u{:}])) <= bound);

Example: Contraction constraints
The norm-type constraints can be used to define "contraction" constraints, i.e. constraints which force state x(k+1) to be closer to the origin (in the 1/Inf-norm sense) than the state x(k) has been:
for k = 1:length(VARS.x)-1
  CON = CON + set(norm(VARS.x{k+1}, 1) <= norm(VARS.x{k}, 1));
end
Note that these types of constraints are not convex, so the resulting problems will be difficult to solve (time-wise).

Example: Obstacle avoidance
It is a typical requirement of control synthesis to guarantee that the system states will avoid some set of "unsafe" states (typically an obstacle or a set of dangerous operating conditions). You can now solve this kind of problem with MPT by adding suitable constraints. If you, for instance, want to avoid a polytopic set of states, proceed as follows:
% first define set of unsafe states
>> Punsafe = polytope(H, K);

% now define the complement of the "unsafe" set against some large box,
% to obtain the set of states which are "safe":
>> Pbox = unitbox(dimension(Punsafe), 100);
>> Psafe =  Pbox \ Punsafe;

% now add constraints that each predicted state must be inside
% of the "safe" set of states
for k = 1:length(VARS.x)
  CON = CON + set(ismember(VARS.x{k}, Psafe));
end
Here set(ismember(VARS.x{k}, Psafe)) imposes a constraint which tells MPT that it must guarantee that the state x(k) belongs to at least one polytope of the polytope array Psafe, hence avoiding the "unsafe" set Punsafe. Notice that this type of constraint requires binary variables to be introduced, making the optimization problem difficult to solve.


Logic constraints
Logic constraints in the form of IF-THEN conditions can be added as well. For example, we may want to require that if the first predicted input u(0) is smaller than or equal to zero, then the next input u(1) has to be at least 0.5:
% if u(0) <= 0 then u(1) must be >= 0.5
>> CON = CON + set(implies(VARS.u{1} <= 0, VARS.u{2} >= 0.5));
Notice that this constraint only acts in one direction, i.e. if u(0)<=0 then u(1)>=0.5, but it does not say what the value of u(1) should be if u(0)>0.

To add an "if and only if" constraint, use the iff() operator:
% if u(0) <= 0 then u(1) >= 0.5, and
% if u(0) > 0 then u(1) < 0.5
>> CON = CON + set(iff(VARS.u{1} <= 0, VARS.u{2} >= 0.5));
which will guarantee that if u(0)>0, then the value of u(1) will be smaller than 0.5.


Custom optimization objective
In this last example we show how to define your own objective function. Depending on the value of probStruct.norm, the objective can be either quadratic or linear. By default, it is defined according to standard MPC theory (see help mpt_probStruct for details).

To write a custom cost function, simply sum up the terms you want to penalize. For instance, the standard quadratic cost function can be defined by hand as follows:
OBJ = 0;
for k = 1:length(VARS.u),
  % cost for each step is given by x'*Q*x + u'*R*u
  OBJ = OBJ + VARS.x{k}' * Q * VARS.x{k};
  OBJ = OBJ + VARS.u{k}' * R * VARS.u{k};
end
  
% cost for the last predicted state x_N'*P_N*x_N
OBJ = OBJ + VARS.x{end}' * P_N * VARS.x{end};
For 1/Inf-norm cost functions, you can use the overloaded norm() operator, e.g.
OBJ = 0;
for k = 1:length(VARS.u),
  % cost for each step is given by ||Q*x|| + ||R*u||
  OBJ = OBJ + norm(Q * VARS.x{k}, Inf);
  OBJ = OBJ + norm(R * VARS.u{k}, Inf);
end
  
% cost for the last predicted state ||P_N*x_N||
OBJ = OBJ + norm(P_N * VARS.x{end}, Inf);
If you, for example, want to penalize deviations of the predicted outputs and inputs from given time-varying trajectories, you can do so by defining a cost function as follows:
yref = [4 3 2 1];
uref = [0 0.5 0.1 -0.2];
OBJ = 0;
for k = 1:length(yref)
  OBJ = OBJ + (VARS.y{k} - yref(k))' * Qy * (VARS.y{k} - yref(k));
  OBJ = OBJ + (VARS.u{k} - uref(k))' * R * (VARS.u{k} - uref(k));
end

Defining new variables
Remember the obstacle avoidance example? There we used constraints to tell the controller that it should avoid a given set of unsafe states. Let's now modify that example a bit. Instead of adding constraints, we will introduce a binary variable which will take a true value if the system states are inside a given set. Subsequently we will add a high penalty on that variable, which tells the MPC controller that it should avoid the set if possible.
% first define the set which we want to avoid
>> Pavoid = polytope(H, K);

% define a new scalar binary variable
>> d = binvar(1, 1);

% now add a constraint which forces "d" to be true if x(0) is
% inside of Pavoid
>> CON = CON + set(implies(ismember(VARS.x{1}, Pavoid), d))

% penalize "d" heavily
>> OBJ = OBJ + 1000*d

Removing constraints
When mpt_ownmpc constructs the constraints and objective, it adds constraints on system states, inputs and outputs, provided they are defined in the respective fields of probStruct. However, one may want to remove certain constraints, for instance the target set constraint imposed on the last predicted state. To do so, first notice that each constraint has an associated string tag:
>> Double_Integrator
>> sysStruct.xmax = sysStruct.ymax; sysStruct.xmin = sysStruct.ymin;
>> probStruct.N = 2;
>> [CON, OBJ, VARS] = mpt_ownmpc(sysStruct, probStruct);
>> CON
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|  ID|      Constraint|                      Type|                    Tag|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|  #1|   Numeric value|          Element-wise 2x1|      umin < u_1 < umax|
|  #2|   Numeric value|          Element-wise 4x1|      xmin < x_1 < xmax|
|  #3|   Numeric value|          Element-wise 4x1|      xmin < x_2 < xmax|
|  #4|   Numeric value|          Element-wise 4x1|      ymin < y_1 < ymax|
|  #5|   Numeric value|   Equality constraint 2x1|   x_2 == A*x_1 + B*u_1|
|  #6|   Numeric value|   Equality constraint 2x1|   y_1 == C*x_1 + D*u_1|
|  #7|   Numeric value|          Element-wise 6x1|            x_2 in Tset|
|  #8|   Numeric value|          Element-wise 4x1|            x_0 in Pbnd|
|  #9|   Numeric value|          Element-wise 2x1|      umin < u_0 < umax|
| #10|   Numeric value|          Element-wise 4x1|      xmin < x_0 < xmax|
| #11|   Numeric value|          Element-wise 4x1|      ymin < y_0 < ymax|
| #12|   Numeric value|   Equality constraint 2x1|   x_1 == A*x_0 + B*u_0|
| #13|   Numeric value|   Equality constraint 2x1|   y_0 == C*x_0 + D*u_0|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
To remove certain constraints, all you need to do is subtract the constraint you want to get rid of, identified by its tag. For instance
>> CON = CON - CON('x_2 in Tset');
>> CON = CON - CON('xmin < x_2 < xmax');
>> CON
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|  ID|      Constraint|                      Type|                    Tag|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|  #1|   Numeric value|          Element-wise 2x1|      umin < u_1 < umax|
|  #2|   Numeric value|          Element-wise 4x1|      xmin < x_1 < xmax|
|  #3|   Numeric value|          Element-wise 4x1|      ymin < y_1 < ymax|
|  #4|   Numeric value|   Equality constraint 2x1|   x_2 == A*x_1 + B*u_1|
|  #5|   Numeric value|   Equality constraint 2x1|   y_1 == C*x_1 + D*u_1|
|  #6|   Numeric value|          Element-wise 4x1|            x_0 in Pbnd|
|  #7|   Numeric value|          Element-wise 2x1|      umin < u_0 < umax|
|  #8|   Numeric value|          Element-wise 4x1|      xmin < x_0 < xmax|
|  #9|   Numeric value|          Element-wise 4x1|      ymin < y_0 < ymax|
| #10|   Numeric value|   Equality constraint 2x1|   x_1 == A*x_0 + B*u_0|
| #11|   Numeric value|   Equality constraint 2x1|   y_0 == C*x_0 + D*u_0|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
will remove any state constraints imposed on the last predicted state x_2. Alternatively, it is also possible to identify constraints by their index (the ID number in the first column of the above table). For example, to remove the constraint on u_0 (constraint number 7 in the list above), one can do
>> CON = CON - CON(7)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|  ID|      Constraint|                      Type|                    Tag|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|  #1|   Numeric value|          Element-wise 2x1|      umin < u_1 < umax|
|  #2|   Numeric value|          Element-wise 4x1|      xmin < x_1 < xmax|
|  #3|   Numeric value|          Element-wise 4x1|      ymin < y_1 < ymax|
|  #4|   Numeric value|   Equality constraint 2x1|   x_2 == A*x_1 + B*u_1|
|  #5|   Numeric value|   Equality constraint 2x1|   y_1 == C*x_1 + D*u_1|
|  #6|   Numeric value|          Element-wise 4x1|            x_0 in Pbnd|
|  #7|   Numeric value|          Element-wise 4x1|      xmin < x_0 < xmax|
|  #8|   Numeric value|          Element-wise 4x1|      ymin < y_0 < ymax|
|  #9|   Numeric value|   Equality constraint 2x1|   x_1 == A*x_0 + B*u_0|
| #10|   Numeric value|   Equality constraint 2x1|   y_0 == C*x_0 + D*u_0|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Notice that while the string tags associated with each constraint remain fixed, the relative position of a constraint given by its ID number may change.

NOTE: if norm(), ismember(), or constraints involving binary variables are used, the problem becomes much more difficult to solve!

Support for soft constraints

Since MPT 2.6 it is possible to denote certain constraints as soft. This means that the respective constraint can be violated, but each violation is penalized. To soften certain constraints, it is necessary to define a penalty on their violation (probStruct.Sx for state constraints, probStruct.Su for input constraints, probStruct.Sy for output constraints). In addition, one can also specify the maximum value by which a given constraint can be exceeded. These fields also allow you to declare that only a subset of the state, input, or output constraints should be treated as soft, while the rest remain hard. Say, for instance, that we have a system with 2 states and we want to soften only the second state constraint. Then we would write:
>> probStruct.Sx = diag([1 1000])
>> probStruct.sxmax = [0; 10]
Here probStruct.sxmax(1)=0 tells MPT that the first constraint should be treated as a hard constraint, while the second constraint can be exceeded by at most 10, with every such violation penalized by the factor of 1000.
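Assuming the output counterparts of these fields behave analogously to the state case shown above (probStruct.Sy and probStruct.symax are extrapolated here, not taken from an example in this document), softening both output constraints of a two-output system could be sketched as:
```matlab
% penalize violations of both output constraints by 500
probStruct.Sy = 500*eye(2);
% allow each output constraint to be exceeded by at most 1
probStruct.symax = [1; 1];
```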

Please note that soft constraints are not available for minimum-time (probStruct.subopt_lev=1) and low-complexity (probStruct.subopt_lev=2) strategies.

Control of time-varying systems

Time-varying system dynamics or systems with time-varying constraints can now be used for the synthesis of optimal controllers. There are a couple of limitations, though. To tell MPT that it should consider a time-varying system, define one system structure for each step of the prediction, e.g.
>> Double_Integrator
>> S1 = sysStruct;
>> S2 = sysStruct; S2.C = 0.9*S1.C;
>> S3 = sysStruct; S3.C = 0.8*S1.C;
Here we have three different models which differ in the C element. Now we can define the time-varying model as a cell array of system structures by
>> model = {S1, S2, S3};
>> probStruct.N = 3;
Notice that the order of systems in the model variable determines that the system S1 will be used to predict the state x(1), while the predicted value of x(2) will be determined based on the model S2, and so on. Once the model is defined, you can compute either the explicit or an on-line MPC controller using the standard syntax:
>> explicitcontroller = mpt_control(model, probStruct)
>> onlinecontroller   = mpt_control(model, probStruct, 'online')
Systems with time-varying constraints can be defined in a similar fashion, e.g.
>> Double_Integrator
>> S1 = sysStruct; S1.ymax = [5; 5]; S1.ymin = [-5; -5];
>> S2 = sysStruct; S2.ymax = [4; 4]; S2.ymin = [-4; -4];
>> S3 = sysStruct; S3.ymax = [3; 3]; S3.ymin = [-3; -3];
>> S4 = sysStruct; S4.ymax = [2; 2]; S4.ymin = [-2; -2];
>> probStruct.N = 4;
>> ctrl = mpt_control({S1, S2, S3, S4}, probStruct);
You can go as far as combining different classes of dynamical systems at various stages of the prediction; for instance, you can arbitrarily combine linear, Piecewise-Affine (PWA) and Mixed Logical Dynamical (MLD) systems. You can, for example, use a detailed PWA model for the first prediction step, while keeping a simple LTI model for the rest:
>> pwa_DI; pwa = sysStruct;                  % PWA model with 4 dynamics
>> Double_Integrator; lti = sysStruct;       % simple LTI model
>> probStruct.N = 5;
>> model = {pwa, pwa, lti, lti, lti};
>> ctrl = mpt_control(model, probStruct);
Remark: time-varying systems can also be used together with the mpt_ownmpc function described above.

MPC for nonlinear systems

With MPT 2.6 you can now solve on-line MPC problems based on nonlinear or piecewise nonlinear systems. In order to define models of such systems, one has to create a special function based on the mpt_nonlinfcn.m template. Once the describing function is defined, you can use mpt_sys to convert it into a format suitable for further computation:
>> sysStruct = mpt_sys(@function_name)
where function_name is the name of the function you have just created. Now you can construct an on-line MPC controller using the standard syntax:
>> ctrl = mpt_control(sysStruct, probStruct, 'online');
or
>> [C, O, V] = mpt_ownmpc(sysStruct, probStruct, 'online');
% modify constraints and objective as needed
>> ctrl = mpt_ownmpc(sysStruct, probStruct, C, O, V, 'online');
After that you can use the controller either in Simulink, or in Matlab-based simulations invoked either by
>> u = ctrl(x0)
or by
>> [X, U, Y] = sim(ctrl, x0, number_of_simulation_steps)
>> simplot(ctrl, x0, number_of_simulation_steps)
Note: nonlinear problems are very difficult to solve, so don't be surprised if computation takes a long time! Check the help of mpt_getInput for a description of parameters which can affect the quality and speed of the nonlinear solvers. Also note that currently only polynomial nonlinearities are supported, i.e. no 1/x terms or log/exp/sin/cos functions are allowed. Moreover, don't even try to use nonlinear models for things like reachability or stability analysis; it will not work.

Move blocking

The move blocking capabilities have been extended and can now also be used in optimal control of Piecewise-Affine (PWA) and Mixed Logical Dynamical (MLD) systems. To enable move blocking, define the control horizon in
>> probStruct.Nc = Nc;
where Nc specifies the number of free control moves; this value should be less than the prediction horizon probStruct.N. Control moves u_0 up to u_(Nc-1) will then be treated as free control moves, while u_Nc ... u_(N-1) will be kept identical to u_(Nc-1), i.e.
u_(Nc-1) == u_Nc == u_(Nc+1) == ... == u_(N-1)
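As a concrete sketch (using the Double_Integrator demo that appears elsewhere in this document): with a prediction horizon of 5 and a control horizon of 2, only u_0 and u_1 are free optimization variables, and u_1 is held constant for the remaining prediction steps:
```matlab
>> Double_Integrator                      % loads sysStruct and probStruct
>> probStruct.N  = 5;                     % prediction horizon
>> probStruct.Nc = 2;                     % free moves u_0, u_1; u_2..u_4 == u_1
>> ctrl = mpt_control(sysStruct, probStruct);
```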

Stability analysis of autonomous dynamical systems

Stability analysis of autonomous systems can now be performed by means of computing Lyapunov functions. To use this new feature, simply call mpt_lyapunov with the system structure as an input:
>> lyap = mpt_lyapunov(sysStruct, 'pwq')
If you get a feasible solution, the given autonomous system is stable. Note that only linear and Piecewise-Affine systems with no inputs can be analyzed this way; nonlinear and MLD systems are not supported.

Export of search trees into C-code

If a binary search tree was calculated for a given explicit controller, it is possible to export the search tree into C-code which can then be used either in S-functions or as part of a stand-alone control application. To create a C-code representation of the search tree, use the function mpt_exportST as follows:
>> mpt_exportST(ctrl, filename)
where filename is the path to the file which will be created. The controller ctrl used in this example must have the search tree stored inside. If it does not, use the mpt_searchTree function to compute it:
>> ctrl = mpt_searchTree(ctrl)

Important patches and improvements