14.1 Introduction
Bilevel decision-making problems [11, 20, 46, 57], motivated by Stackelberg game theory [50], are hierarchical optimization problems in which part of the constraints is defined by another, parametric optimization problem. The decision makers at the upper and lower levels are termed the leader and the follower, respectively; they make their decisions in sequence, each aiming to optimize their own objective. As is well known, bilevel optimization plays an exceedingly important role in many application fields, such as transportation, economics, ecology and engineering [21]. Note that even the linear bilevel programming problem has been proved to be strongly NP-hard [25], so solving such problems is not an easy task. A general bilevel optimization problem can be written as
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle ``\min \limits_{\boldsymbol{x}}"&\displaystyle F(\boldsymbol{x},\boldsymbol{y}){}\\ &\displaystyle \mbox{s.t.}&\displaystyle G(\boldsymbol{x})\le 0, \boldsymbol{y}\in \Psi(\boldsymbol{x}), \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ1.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{y}}&\displaystyle f(\boldsymbol{x},\boldsymbol{y}) \\ &\displaystyle \mbox{s.t.}&\displaystyle g(\boldsymbol{x},\boldsymbol{y})\le 0, \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ2.png)
Here Ψ(x) denotes the solution set of the lower level (follower's) problem for a given upper level decision x. It is worth noting that the lower level problem may have multiple solutions for every (or some) fixed value of the upper level decision variable. If the lower level solution is not unique, it is difficult for the leader to predict which point in Ψ(x) the follower will choose, and hence difficult to determine the leader's solution. That is the reason for the quotation marks in problem (14.1.1).
To overcome this difficulty, most authors use the optimistic formulation or the pessimistic formulation, which represent two different relationships between the leader and the follower. In the optimistic formulation (e.g., see Ref. [20] and the references therein), the follower always selects a strategy in Ψ(x) that suits the leader best. In the pessimistic formulation (e.g., see Ref. [20] and the references therein), the leader protects himself against the worst possible follower choice.
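To make the distinction concrete, the following minimal sketch (a toy problem of our own choosing, with both decision variables restricted to grids; it is not taken from any of the cited references) enumerates the follower's reaction set Ψ(x) and compares the leader's best achievable value under the optimistic choice (the best point of Ψ(x) for the leader) with that under the pessimistic choice (the worst point).

```python
import numpy as np

# Toy data (our own assumption): the follower's objective is flat in y on part of
# its domain, so Psi(x) contains many points and the two formulations differ.
def F(x, y):                      # leader's objective
    return (x - 1.0) ** 2 + y

def f(x, y):                      # follower's objective
    return max(y - x, 0.0)

X_grid = np.linspace(0.0, 2.0, 201)   # discretized leader decisions
Y_grid = np.linspace(0.0, 2.0, 201)   # discretized follower decisions

def Psi(x, tol=1e-9):
    """Grid approximation of the follower's rational reaction set."""
    vals = np.array([f(x, y) for y in Y_grid])
    return Y_grid[vals <= vals.min() + tol]

best_optimistic, best_pessimistic = np.inf, np.inf
for x in X_grid:
    leader_vals = np.array([F(x, y) for y in Psi(x)])
    best_optimistic = min(best_optimistic, leader_vals.min())    # follower helps the leader
    best_pessimistic = min(best_pessimistic, leader_vals.max())  # worst response in Psi(x)

print(f"optimistic value  ~ {best_optimistic:.3f}")   # ~ 0.000, attained near x = 1
print(f"pessimistic value ~ {best_pessimistic:.3f}")  # ~ 0.750, attained near x = 0.5
```

For this instance the two values differ considerably, which is precisely the situation in which the choice between the optimistic and the pessimistic formulation matters.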
Several survey papers on bilevel/multilevel programming and decision-making have appeared over the past 20 years. However, these surveys focus on optimistic bilevel optimization. For example, Ben-Ayed [12] and Wen and Hsu [53] presented the basic models, solution definitions, solution approaches and applications of optimistic linear bilevel optimization. Colson et al. [17, 18] and Vicente and Calamai [52] focused on traditional solution concepts and solution approaches for optimistic bilevel programming problems. Dempe [21] summarized the main research directions and fields of application of bilevel programming, but mainly concentrated on the optimistic formulation. Kalashnikov et al. [26] presented a survey of bilevel programming and its application areas.
Sakawa and Nishizaki [47] reviewed interactive fuzzy programming approaches for the optimistic bilevel/multilevel programming problems which focus on cooperative decision-making in decentralised organizations. Zhang et al. [58] reviewed the fuzzy bilevel decision-making techniques which include models, approaches and systems.
Lu et al. [40] identified nine different kinds of relationships amongst followers by establishing a general framework for bilevel multi-follower decision problems in which the lower level problem has multiple followers. Furthermore, Lu et al. [41] analyzed various kinds of relationships between decision entities in a multi-follower trilevel (MFTL) decision problem and proposed an MFTL decision-making framework in which 64 standard MFTL decision situations and their possible combinations are identified. More recently, Lu et al. [42] surveyed the latest research on multilevel decision-making, covering both theoretical results and applications.
Sinha et al. [48] presented an introduction to the progress made by evolutionary computation towards handling bilevel optimization. Sinha et al. [49] gave a comprehensive review of bilevel optimization from classical to evolutionary approaches and discussed a number of potential application problems.
The above survey papers provide good references on optimistic bilevel optimization for researchers. Recently, Liu et al. [32] presented an extensive review of pessimistic bilevel optimization, from basic definitions and properties to solution approaches. This chapter extends [32] by adding further models and recently published work on pessimistic bilevel optimization.
The remainder of the chapter is organized as follows. In Sect. 14.2, we describe the classification of pessimistic bilevel optimization. In Sects. 14.3 and 14.4, we review definitions and properties of pessimistic bilevel optimization. In Sect. 14.5, we review solution approaches for pessimistic bilevel optimization. Finally, we conclude the chapter and provide some future directions in Sect. 14.6.
14.2 Classification of Pessimistic Bilevel Optimization
In this section, we classify pessimistic bilevel optimization into two main categories: pessimistic bilevel programming problems (in their linear and general versions) and further generalizations (semivectorial and multi-follower problems).
14.2.1 Pessimistic Bilevel Programming Problems
14.2.1.1 Pessimistic Linear Bilevel Programming Problems
![$$\displaystyle \begin{aligned} \begin{array}{rcl} \min \limits_{\boldsymbol{x}\in X} \max \limits_{\boldsymbol{y}\in \Psi(\boldsymbol{x})}\ \boldsymbol{c}^{\top}\boldsymbol{x}+\boldsymbol{d}^{\top}\boldsymbol{y}{} \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ3.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{y}\ge 0}&\displaystyle \boldsymbol{w}^{\top}\boldsymbol{y}\\ &\displaystyle \mbox{s.t.} &\displaystyle B\boldsymbol{y} \le \boldsymbol{b} - A\boldsymbol{x} \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ4.png)
Problem (14.2.1) is often called the weak linear bilevel programming problem [4]. Note that, when the lower level problem satisfies A = 0, i.e. the follower's feasible region does not depend on the upper level decision variables, problem (14.2.1) is called an independent pessimistic linear bilevel programming problem.
14.2.1.2 General Pessimistic Bilevel Programming Problems
![$$\displaystyle \begin{aligned} \begin{array}{rcl} \min \limits_{\boldsymbol{x}\in X^{\prime}} \max \limits_{\boldsymbol{y}\in \Psi(\boldsymbol{x})}\ F(\boldsymbol{x},\boldsymbol{y}).{} \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ5.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{y}}&\displaystyle f(\boldsymbol{x},\boldsymbol{y}) \\ &\displaystyle \mbox{s.t.}&\displaystyle g(\boldsymbol{x},\boldsymbol{y})\le 0, \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ6.png)
When the feasible set of the follower does not depend on the upper level decision variables, i.e. g(x, y) in problem (14.2.2) reduces to g(y), problem (14.2.2) is referred to as an independent pessimistic bilevel problem [54]. Note that it is also called the weak Stackelberg problem in Ref. [38].
14.2.2 Further Generalizations of Pessimistic Bilevel Optimization
14.2.2.1 Pessimistic Semivectorial Bilevel Programming Problems
![$$\displaystyle \begin{aligned} \begin{array}{rcl} \min \limits_{\boldsymbol{x}\in X^{\prime}} \max \limits_{\boldsymbol{y}\in \Psi_{Ef}(\boldsymbol{x})}\ F(\boldsymbol{x},\boldsymbol{y}).{} \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ7.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{y}}&\displaystyle (f_1(\boldsymbol{x},\boldsymbol{y}),f_2(\boldsymbol{x},\boldsymbol{y}),\cdots,f_q(\boldsymbol{x},\boldsymbol{y})) \\ &\displaystyle \mbox{s.t.}&\displaystyle g(\boldsymbol{x},\boldsymbol{y})\le 0. \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ8.png)
Here ΨEf(x) denotes the set of efficient (Pareto optimal) solutions of the follower's multiobjective problem for a given x.
14.2.2.2 Pessimistic Bilevel Multi-Follower Programming Problems
- (I) Uncooperative pessimistic linear bilevel multi-follower programming problem [61]. For such a problem, M followers are involved and there is no shared decision variable, objective function or constraint function among them. The problem is formulated as (14.2.4), where Ψi(x) is the set of solutions to the ith follower's problem.
Here x, c ∈ R^n, i = 1, 2, …, M, and the remaining vectors and matrices are of compatible dimensions.
- (II) Pessimistic linear bilevel multi-follower programming problem with partially shared variables among followers [62]. For such a problem, M followers are involved and there is a partially shared decision variable v. The problem is formulated as (14.2.5), where Ψi(x) is the set of solutions of the ith follower's problem.
Here s, v, zi ∈ R^l, i = 1, 2, …, M, and the remaining vectors and matrices are of compatible dimensions.
- (III) Pessimistic referential-uncooperative linear bilevel multi-follower programming problem. For such a problem, M followers are involved in a referential-uncooperative situation, i.e. the followers are uncooperative but cross-reference information by considering the other followers' decision results in their own objectives and constraints. The problem is formulated as (14.2.6), where y−i = (y1, ⋯, yi−1, yi+1, ⋯, yM) and Ψi(x, y−i) is the set of solutions of the ith follower's problem.
Here i = 1, 2, ⋯, M, and all vectors and matrices are of compatible dimensions.
14.3 Definitions of Pessimistic Bilevel Optimization
To understand and analyze solution approaches for pessimistic bilevel optimization, this section reviews the notation and definitions associated with problem (14.2.2).
14.3.1 Definitions
In this subsection, some important definitions of general pessimistic bilevel optimization are itemized below.
- 1. Constraint region of problem (14.2.2): S := {(x, y) : x ∈ X′, g(x, y) ≤ 0}.
- 2. Projection of S onto the leader's decision space: S(X) := {x ∈ X′ : ∃ y such that g(x, y) ≤ 0}.
- 3. Feasible set for the follower ∀x ∈ S(X): Y(x) := {y : g(x, y) ≤ 0}.
- 4. The follower's rational reaction set for x ∈ S(X): Ψ(x) := {y : y ∈ arg min{f(x, z) : z ∈ Y(x)}}.
- 5. Inducible region or feasible region of the leader: IR := {(x, y) ∈ S : y ∈ Ψ(x)}.
Based on the above sets, the leader's worst-case (pessimistic) value function is defined as
![$$\displaystyle \begin{aligned}\varphi(\boldsymbol{x}):=\sup \limits_{\boldsymbol{y}\in \Psi(\boldsymbol{x})}\ F(\boldsymbol{x},\boldsymbol{y}).\end{aligned}$$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equa.png)
A point x∗ ∈ X, together with some y∗ ∈ Ψ(x∗), is called a pessimistic solution of problem (14.2.2) if
![$$\displaystyle \begin{aligned} \begin{array}{rcl} \varphi(\boldsymbol{x}^*)&\displaystyle =&\displaystyle F(\boldsymbol{x}^*,\boldsymbol{y}^*),\\ \varphi(\boldsymbol{x}^*)&\displaystyle \le&\displaystyle \varphi(\boldsymbol{x}), \ \forall\ \boldsymbol{x} \in X. \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ20.png)
In the following, some definitions concerning, in particular, approximate solutions (𝜖-Stackelberg solution, 𝜖-Stackelberg equilibrium pair) to the weak Stackelberg problem will be described.
Any point x∗ verifying v1 = φ(x∗) is called a Stackelberg solution to the weak Stackelberg problem. Here, v1 := inf{φ(x) : x ∈ X} denotes the optimal value of the weak Stackelberg problem. △
A pair (x∗, y∗) verifying v1 = φ(x∗) and y∗ ∈ Ψ(x∗) is called a Stackelberg equilibrium pair. △
Let 𝜖 > 0 be a given number. A point x∗ is an 𝜖-Stackelberg solution if and only if x∗ ∈ X and φ(x∗) ≤ v1 + 𝜖. △
A pair (x∗, y∗) is an 𝜖-Stackelberg equilibrium pair if and only if x∗ is an 𝜖-Stackelberg solution and y∗ ∈ Ψ(x∗). △
In addition to the above definitions for pessimistic bilevel optimization, Alves and Antunes [8] presented and illustrated a definition of the solution of the pessimistic semivectorial bilevel problem (14.2.3) and compared the optimistic, pessimistic, deceiving and rewarding solutions.
14.4 Properties of Pessimistic Bilevel Optimization
According to the definitions in Sect. 14.3, we categorize the various properties of pessimistic bilevel optimization, and then list some of the well-known properties.
14.4.1 Existence of Solutions
As is well known, establishing the existence of solutions for pessimistic bilevel optimization is a difficult task. An initial step in this direction was taken by Lucchetti et al. [43], who presented examples of problems that fail to have a solution. Most subsequent studies have been devoted to the weak Stackelberg problem. Aboussoror and Loridan [3] and Aboussoror [1] gave sufficient conditions for the existence of solutions to the weak Stackelberg problem via a regularization scheme: any accumulation point of a sequence of regularized solutions is a solution to the weak Stackelberg problem. Aboussoror and Mansouri [5] deduced the existence of solutions to the weak Stackelberg problem via d.c. problems. Similar results using reverse convex and convex maximization problems are given in Aboussoror et al. [6].
Loridan and Morgan [35] obtained results on approximate solutions of the weak Stackelberg problem by using a theoretical approximation scheme: any accumulation point of a sequence of 𝜖-approximate solutions of the approximating problems is an 𝜖-approximate solution to the weak Stackelberg problem. The interested reader can refer to the references therein for more on approximate solutions of the weak Stackelberg problem. Furthermore, Loridan and Morgan [36] improved and extended some of the properties proposed in Ref. [35]. Similar results using an approximation scheme were given in Ref. [37].
Lignola and Morgan [28] discussed a more general weak Stackelberg formulation in which the follower's rational reaction set Ψ(x) is replaced by a parameterized constraint Ψ(t, x) (t is a parameter). Marhfour [45] established existence and stability results for 𝜖-mixed solutions of weak Stackelberg problems; in particular, the results are given under general assumptions of minimal character, without any convexity assumption.
For pessimistic linear bilevel programming problems, Aboussoror and Mansouri [4] established existence results using the strong duality theorem of linear programming and a penalty method. Note that strong-weak Stackelberg problems can be reduced to the weak Stackelberg problem under some conditions. Aboussoror and Loridan [2] studied the existence of solutions of strong-weak Stackelberg problems. In particular, using regularization and the notion of variational convergence, Aboussoror [7] gave sufficient conditions ensuring the existence of solutions to such problems. The results obtained in Refs. [2] and [7] can be applied to the weak Stackelberg problem by deleting some variables.
When the pessimistic bilevel optimization problem (14.2.2) does not have a solution, which may happen even under strong assumptions, the leader can resort to alternative solution concepts. Motivated by this, Lignola and Morgan [29] considered a concept of viscosity solution that can compensate for the lack of optimal solutions. In particular, they gave sufficient conditions, using regularization families of the solution map of the lower level problem, ensuring the existence of the corresponding viscosity solutions. More recently, Lignola and Morgan [30] continued this research by considering new inner regularizations of the lower level problem, which do not necessarily satisfy the constraints, and derived an existence result for the related viscosity solutions of pessimistic bilevel optimization.
14.4.2 Optimality Conditions and Complexity
Optimality conditions for pessimistic bilevel optimization and pessimistic semivectorial bilevel optimization have been proposed in the literature. A first attempt was made by Dassanayaka [19] using an implicit programming approach, a minimax programming approach and a duality programming approach. Using advanced tools of variational analysis and generalized differentiation, Dempe [22] derived several types of necessary optimality conditions via the lower-level value function approach and the Karush-Kuhn-Tucker (KKT) representation of lower-level optimal solution maps. Furthermore, upper subdifferential necessary optimality conditions were obtained, and links were established between the necessary optimality conditions of the pessimistic and optimistic versions of bilevel programming. Using the two-level value function approach, Dempe et al. [24] derived results on sensitivity analysis and necessary optimality conditions for pessimistic bilevel optimization with nonsmooth data.
In addition, Liu et al. [31] presented first order necessary optimality conditions for a class of pessimistic semivectorial bilevel optimization problems. Recently, Lampariello et al. [27] discussed the relations among the perturbed pessimistic bilevel problem, pessimistic bilevel optimization, the two-follower game and the mathematical program with complementarity constraints with respect to their global minimal points, and also detailed the connections between their local minimal points. Aussel and Svensson [9] discussed the relations between global and local solutions of the pessimistic bilevel optimization problem and the associated pessimistic mathematical program with complementarity constraints. These necessary optimality conditions and connections to other optimization problems could help in developing fast algorithms for computing solutions of pessimistic bilevel optimization.
The computational complexity of pessimistic bilevel optimization can already be established for its simplest version, the pessimistic linear bilevel problem. Under certain assumptions, Wiesemann et al. [54] showed that: (a) the independent pessimistic linear bilevel programming problem (14.2.1) can be solved in polynomial time; (b) if m, the number of lower level decision variables, is constant, the pessimistic linear bilevel programming problem (14.2.1) can be solved in polynomial time, whereas if m is nonconstant, it is strongly NP-hard. These results also imply that a general pessimistic bilevel programming problem is strongly NP-hard if the number of lower level decision variables is nonconstant.
14.5 Solution Approaches of Pessimistic Bilevel Optimization
14.5.1 Penalty Method
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x}, \boldsymbol{t}, \boldsymbol{u}}&\displaystyle \boldsymbol{c}^{\top}\boldsymbol{x} + \boldsymbol{w}^{\top}\boldsymbol{t} + (\boldsymbol{b}-A\boldsymbol{x})^{\top}\boldsymbol{u}{}\\ &\displaystyle \mbox{s.t.}&\displaystyle -B^{\top}\boldsymbol{u} \le k\boldsymbol{w}-\boldsymbol{d}, \\ &\displaystyle &\displaystyle B\boldsymbol{t}\le k(\boldsymbol{b}-A\boldsymbol{x}),\\ &\displaystyle &\displaystyle \boldsymbol{x}\in X, \boldsymbol{t}, \boldsymbol{u}\ge 0, \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ21.png)
Under some assumptions, Aboussoror and Mansouri [4] proved that there exists a k∗ > 0 such that for all k > k∗, if (xk, tk, uk) is a solution of problem (14.5.1), then xk solves problem (14.2.1). Unfortunately, no numerical results were reported.
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x}, \boldsymbol{t}, \boldsymbol{u}}&\displaystyle \boldsymbol{c}^{\top}\boldsymbol{x} + k\boldsymbol{w}^{\top}\boldsymbol{y} + (\boldsymbol{b}-A\boldsymbol{x})^{\top}\boldsymbol{u}{}\\ &\displaystyle \mbox{s.t.}&\displaystyle -B^{\top}\boldsymbol{u} \le k\boldsymbol{w}-\boldsymbol{d}, \\ &\displaystyle &\displaystyle B\boldsymbol{y}\le \boldsymbol{b}-A\boldsymbol{x},\\ &\displaystyle &\displaystyle \boldsymbol{x}\in X, \boldsymbol{y}, \boldsymbol{u}\ge 0, \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ22.png)
By replacing the lower level problem with its Karush-Kuhn-Tucker optimality conditions, Liu et al. [33] transformed problem (14.2.1) into another single-level optimization problem and investigated the relations between the solutions of the two problems. In addition, Zheng et al. proposed penalty methods to solve the uncooperative pessimistic linear bilevel multi-follower programming problem [61] and the pessimistic linear bilevel multi-follower programming problem with partially shared variables among followers [62], respectively.
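To illustrate how such a penalized reformulation can be handled numerically, the following sketch solves problem (14.5.1) for a single fixed penalty parameter k on a hypothetical toy instance of problem (14.2.1). The data, the value of k and the use of a general-purpose local solver are our own assumptions; since (14.5.1) contains a bilinear term, only a stationary point is guaranteed, so the output is indicative rather than a certified pessimistic solution.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy instance of problem (14.2.1) (our own data, for illustration only):
# leader:   min_x max_{y in Psi(x)} c*x + d*y,   X = {x : 0 <= x <= 2}
# follower: min_y w*y  s.t.  B*y <= b - A*x,  y >= 0   (w = 0, so Psi(x) is an interval)
c, d, w, A, B, b = 1.0, 2.0, 0.0, 1.0, 1.0, 3.0
k = 10.0                                   # penalty parameter, assumed large enough

# Penalized single-level problem (14.5.1) in the variables z = (x, t, u):
#   min  c*x + w*t + (b - A*x)*u
#   s.t. -B*u <= k*w - d,   B*t <= k*(b - A*x),   x in X,  t >= 0,  u >= 0.
objective = lambda z: c * z[0] + w * z[1] + (b - A * z[0]) * z[2]
constraints = [
    {"type": "ineq", "fun": lambda z: (k * w - d) + B * z[2]},         # -B*u <= k*w - d
    {"type": "ineq", "fun": lambda z: k * (b - A * z[0]) - B * z[1]},  #  B*t <= k*(b - A*x)
]
bounds = [(0.0, 2.0), (0.0, None), (0.0, None)]                        # x in X, t >= 0, u >= 0

res = minimize(objective, x0=np.array([1.0, 1.0, 2.0]), bounds=bounds,
               constraints=constraints, method="SLSQP")
print("x from the penalized problem:", res.x[0], " penalized objective:", res.fun)
```

For this toy instance the solver returns x close to 2 with penalized value close to 4, which coincides with the pessimistic solution of the instance; in general, however, the penalized problem is nonconvex and a single local solve gives no such guarantee.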
14.5.2 Kth-Best Algorithm
![$$\displaystyle \begin{aligned}W:=\{(\boldsymbol{x},\boldsymbol{y}): \boldsymbol{x}\in X, B\boldsymbol{y}\le \boldsymbol{b}-A\boldsymbol{x}, \boldsymbol{y}\ge 0\}.\end{aligned}$$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equb.png)
Zheng et al. [60] first proposed a modified version of the Kth-Best algorithm for pessimistic linear bilevel optimization problems. After sorting all vertices of W in ascending order of the upper level objective value, the algorithm checks whether the first vertex satisfies the termination condition; if so, the current vertex yields a solution of problem (14.2.1). Otherwise, the next vertex is selected and checked.
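The following sketch illustrates this enumeration scheme on a hypothetical toy instance. The brute-force vertex enumeration, the grid approximation of Ψ(x) and the acceptance test are simplifications of our own and may well differ from the precise procedure of [60]; the sketch only mirrors the overall flow of sorting the vertices of W and checking them one by one.

```python
import itertools
import numpy as np

# Hypothetical toy instance (our own data, illustration only):
# leader:   min_x max_{y in Psi(x)} x + 2*y,   X = {x : 0 <= x <= 2}
# follower: min_y 0*y  s.t.  x + y <= 3, y >= 0   (flat objective, so Psi(x) is an interval)
c, d, w, A, B, b = 1.0, 2.0, 0.0, 1.0, 1.0, 3.0

# Constraint region W = {(x, y) : 0 <= x <= 2, A*x + B*y <= b, y >= 0} written as G z <= h.
G = np.array([[-1.0, 0.0], [1.0, 0.0], [A, B], [0.0, -1.0]])
h = np.array([0.0, 2.0, b, 0.0])

def vertices(G, h):
    """Brute-force vertex enumeration of {z : G z <= h}; adequate only for tiny instances."""
    verts = []
    for rows in itertools.combinations(range(len(G)), 2):
        Gi, hi = G[list(rows)], h[list(rows)]
        if abs(np.linalg.det(Gi)) < 1e-9:
            continue
        z = np.linalg.solve(Gi, hi)
        if np.all(G @ z <= h + 1e-7) and not any(np.allclose(z, v) for v in verts):
            verts.append(z)
    return verts

def reaction_set(x, tol=1e-6):
    """Grid approximation of the follower's rational reaction set Psi(x)."""
    ys = np.linspace(0.0, 5.0, 501)
    feas = ys[A * x + B * ys <= b + 1e-7]
    vals = w * feas
    return feas[vals <= vals.min() + tol]

# Kth-Best-type loop: sort the vertices of W in ascending order of the upper level
# objective and accept the first one that passes the (simplified) termination test.
for x, y in sorted(vertices(G, h), key=lambda z: c * z[0] + d * z[1]):
    Psi = reaction_set(x)
    is_rational = np.any(np.isclose(Psi, y, atol=1e-3))   # y is an optimal follower response
    worst = np.max(c * x + d * Psi)                       # pessimistic leader value at this x
    if is_rational and abs(c * x + d * y - worst) < 1e-3:
        print(f"candidate pessimistic solution: x = {x:.2f}, y = {y:.2f}, value = {worst:.2f}")
        break
```

On this instance the loop stops at the vertex (x, y) = (2, 1) with upper level value 4, which equals the worst-case value over Ψ(2).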
14.5.3 Approximation Approach
As is well known, approximate solutions of the independent pessimistic bilevel problem have in fact a long history that dates back at least to the papers [28, 37]. For recent surveys and improvements on this topic, see also [15, 29, 30]. Although the authors of [15, 28–30, 37] did not present direct algorithms, these results may give the reader ideas for designing new algorithms for pessimistic bilevel programming problems.
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x}\in X^{\prime}} \max \limits_{\boldsymbol{y}\in \Psi_{\epsilon}(\boldsymbol{x})}&\displaystyle F(\boldsymbol{x},\boldsymbol{y}).{} \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ23.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{y}}&\displaystyle f(\boldsymbol{x},\boldsymbol{y}) \\ &\displaystyle \mbox{s.t.}&\displaystyle g(\boldsymbol{x},\boldsymbol{y})\le 0. \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ24.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x}, \boldsymbol{y}} &\displaystyle F(\boldsymbol{x},\boldsymbol{y})\\ &\displaystyle \mbox{s.t.}&\displaystyle \boldsymbol{x}\in X^{\prime}, \boldsymbol{y}\in R_{\epsilon}(\boldsymbol{x}), \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ25.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \max \limits_{\boldsymbol{w}} &\displaystyle F(\boldsymbol{x},\boldsymbol{w})\\ &\displaystyle \mbox{s.t.}&\displaystyle \boldsymbol{w}\in \Psi_{\epsilon}(\boldsymbol{x}), \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ26.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{z}} &\displaystyle F(\boldsymbol{x},\boldsymbol{y})\\ &\displaystyle \mbox{s.t.}&\displaystyle \boldsymbol{x}\in X^{\prime}, (\boldsymbol{y},\boldsymbol{z})\in E_{\epsilon}(\boldsymbol{x}), \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ27.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{y}} &\displaystyle -F(\boldsymbol{x},\boldsymbol{y})\\ &\displaystyle \mbox{s.t.}&\displaystyle \boldsymbol{y}\in Y(\boldsymbol{x}), \\ &\displaystyle &\displaystyle f(\boldsymbol{x},\boldsymbol{y})\le f(\boldsymbol{x},\boldsymbol{z})+ \epsilon, \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ28.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{z}} &\displaystyle f(\boldsymbol{x},\boldsymbol{z})\\ &\displaystyle \mbox{s.t.}&\displaystyle \boldsymbol{z}\in Y(\boldsymbol{x}). \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ29.png)
Using the KKT conditions, (MFG𝜖) can be transformed into a single-level mathematical program with complementarity constraints (MPCC𝜖). Under some assumptions, Lampariello et al. [27] established the relations among (PBP𝜖), (SPBP𝜖), (MFG𝜖) and (MPCC𝜖), which provides the reader with a new way to analyze perturbed pessimistic bilevel programming problems.
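The role of the relaxation parameter 𝜖 can also be illustrated numerically. The sketch below (toy functions of our own choosing, with both decisions restricted to grids) builds the 𝜖-enlarged reaction set Ψ𝜖(x), i.e. the set of follower decisions whose objective value is within 𝜖 of the optimal value, and reports the corresponding relaxed pessimistic value for decreasing 𝜖.

```python
import numpy as np

# Toy functions (our own assumption, for illustration only).
F = lambda x, y: (x - 1.0) ** 2 + y    # leader's objective
f = lambda x, y: (y - x) ** 2          # follower's objective, unique minimizer y = x
X_grid = np.linspace(0.0, 2.0, 201)
Y_grid = np.linspace(0.0, 2.0, 201)

def psi_eps(x, eps):
    """eps-enlarged reaction set: all y whose follower value is within eps of the optimum."""
    vals = f(x, Y_grid)
    return Y_grid[vals <= vals.min() + eps]

for eps in [0.5, 0.1, 0.01, 0.001]:
    # Relaxed pessimistic value: minimize over x the worst leader value over the enlarged set.
    value = min(max(F(x, y) for y in psi_eps(x, eps)) for x in X_grid)
    print(f"eps = {eps:<6}  relaxed pessimistic value ~ {value:.4f}")
```

For this instance the relaxed value decreases towards the pessimistic value of the unrelaxed problem (here 0.75) as 𝜖 decreases, which is the kind of limiting behaviour studied, under suitable assumptions, in the references discussed above.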
![$$\displaystyle \begin{aligned} \begin{array}{rcl} \min \limits_{\boldsymbol{x}\in X^{\prime}} \sup \limits_{\boldsymbol{y}\in \Psi_{\epsilon}(\boldsymbol{x})}\ F(\boldsymbol{x},\boldsymbol{y}){} \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ30.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{y}}&\displaystyle f(\boldsymbol{x},\boldsymbol{y})\\ &\displaystyle \mbox{s.t.}&\displaystyle \boldsymbol{y}\in Y, \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ31.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} \min \limits_{\boldsymbol{x}\in X^{\prime}} \inf \limits_{y\in \Psi(\boldsymbol{x},\beta,\gamma)}\ F(\boldsymbol{x},\boldsymbol{y}){} \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ32.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{y}}&\displaystyle l(\boldsymbol{x},\boldsymbol{y},\beta)\\ &\displaystyle \mbox{s.t.}&\displaystyle \boldsymbol{y}\in Y, \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ33.png)
Based on the Molodtsov method, they computed a sequence of solutions of the strong Stackelberg problem (14.5.5) and investigated their relation to solutions of the independent pessimistic bilevel problem. Under some assumptions, they proved that a sequence of solutions of the strong Stackelberg problem converges to a lower Stackelberg equilibrium of the independent perturbed pessimistic bilevel problem.
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x}\in X}&\displaystyle H(\boldsymbol{x}){}\\ &\displaystyle \mbox{s.t.}&\displaystyle g(\boldsymbol{x},\boldsymbol{y})\le 0, \forall \boldsymbol{y} \in \mbox{arg}\max \limits_{\boldsymbol{y}^{\prime} \in Y} h(\boldsymbol{x},\boldsymbol{y}^{\prime}). \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ34.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x}\in X^{\prime}}&\displaystyle H(\boldsymbol{x})\\ &\displaystyle \mbox{s.t.}&\displaystyle g(\boldsymbol{x},\boldsymbol{y})\le 0, \forall \boldsymbol{y} \in Y_{\epsilon}(\boldsymbol{x}),\\ &\displaystyle &\displaystyle Y_{\epsilon}(\boldsymbol{x}) = \{\boldsymbol{z}\in Y: h(\boldsymbol{x},\boldsymbol{z})< h(\boldsymbol{x},\boldsymbol{z}^{\prime})+ \epsilon, \forall \boldsymbol{z}^{\prime}\in Y\}, \\ &\displaystyle &\displaystyle \boldsymbol{x}\in X. \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ35.png)
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x},\boldsymbol{z},\lambda}&\displaystyle H(\boldsymbol{x})\\ &\displaystyle \mbox{s.t.}&\displaystyle \lambda(\boldsymbol{y})[h(\boldsymbol{x},\boldsymbol{z})-h(\boldsymbol{x},\boldsymbol{y})+\epsilon] \\ &\displaystyle &\displaystyle + (1-\lambda(\boldsymbol{y}))g(\boldsymbol{x},\boldsymbol{y})\le 0, \forall \boldsymbol{y}\in Y, \\ &\displaystyle &\displaystyle \boldsymbol{x}\in X^{\prime}, \boldsymbol{z}\in Y, \lambda: Y \longrightarrow [0,1], \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ36.png)
In particular, when the lower level corresponds to an equilibrium problem represented as a (parametric) variational inequality or, equivalently, a generalized equation, the bilevel optimization problem is called an MPEC (Mathematical Program with Equilibrium Constraints). Červinka et al. [16] proposed a new numerical method, which combines two existing codes (a code for derivative-free optimization under box constraints and a method for solving special parametric MPECs from an interactive system), to compute approximate pessimistic solutions to MPECs.
14.5.4 Reduction Method
![$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \min \limits_{\boldsymbol{x},\bar{\boldsymbol{y}}}\max \limits_{\boldsymbol{y}\in \tilde{Y}(\boldsymbol{x},\bar{\boldsymbol{y}})}&\displaystyle F(\boldsymbol{x},\boldsymbol{y})\\ &\displaystyle \mbox{s.t.}&\displaystyle \boldsymbol{x}\in X,\\ &\displaystyle &\displaystyle g(\boldsymbol{x},\bar{\boldsymbol{y}})\le 0,{}\\ &\displaystyle &\displaystyle \tilde{Y}(\boldsymbol{x},\bar{\boldsymbol{y}}):=\{\boldsymbol{y}: g(\boldsymbol{x},\boldsymbol{y})\le 0, f(\boldsymbol{x},\boldsymbol{y})\le f(\boldsymbol{x},\bar{\boldsymbol{y}})\}. \end{array} \end{aligned} $$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_Equ37.png)
![$$(\boldsymbol {x}^*,\bar {\boldsymbol {y}}^{\prime },\boldsymbol {y}^{\prime })$$](../images/480569_1_En_14_Chapter/480569_1_En_14_Chapter_TeX_IEq8.png)
Moreover, Zheng et al. [64] proposed a reducibility method by which a pessimistic linear bilevel programming problem is reduced to disjoint bilinear programming problems. For a pessimistic quadratic-linear bilevel optimization problem, Malyshev and Strekalovsky [44] reduced it to a series of optimistic bilevel optimization problems and then developed global and local search algorithms.
In addition to the above methods, for the pessimistic linear bilevel programming problem, Dempe et al. [23] proposed global and local search algorithms via the value function method and the strong duality theory of linear programming, and Liu et al. [34] presented a new algorithm which embeds a penalty method into a branch and bound algorithm. For pessimistic bilevel mixed-integer programming problems, Lozano and Smith [39] developed two methods (a two-phase approach and a cutting-plane algorithm) based on an optimal-value-function reformulation. Zheng et al. [63] presented a maximum entropy approach to solve a class of pessimistic bilevel programming problems in which the set of solutions of the lower level problem is discrete. It should be noted that their approach needs the set of solutions of the lower level problem in advance; obtaining this set is not easy, but the approach may provide the reader with a new way to tackle pessimistic bilevel optimization.
14.6 Conclusions and Prospective Research Topics
We conclude this chapter by pointing out some prospective research topics on pessimistic bilevel optimization.
- 1.
First order optimality conditions for pessimistic bilevel optimization have been proposed, but they have not yet been combined with algorithm design. Furthermore, higher order optimality conditions should also be studied. In addition, no systematic studies have been conducted on sensitivity analysis.
- 2.
Several existing methods can be used to solve pessimistic linear bilevel optimization and independent pessimistic bilevel optimization problems. It would be interesting to study general pessimistic bilevel optimization problems that do not possess the independence property. In particular, it would be instructive to investigate how pessimistic bilevel optimization can be reduced to a single-level optimization problem and to discuss the relationship between the two.
- 3.
The ultimate goal of pessimistic bilevel optimization is to provide strong analysis and decision support for practical problems from the worst-case point of view. Such problems often arise in highly complex and uncertain environments. This calls for further research on how intelligent algorithms can be applied to large-scale pessimistic bilevel optimization in the current age of big data.
This work was partially supported by the National Natural Science Foundation of China (Nos. 11871383 and 11501233).