In the usual context of optimization, solving an instance of a problem consists of establishing an optimal solution from the data that defines the instance. In many practical cases, the problem to be solved arises from a model whose real parameters are not always known with any precision. Properly knowing these parameters thus becomes an important issue for using the model in predictive studies. If the phenomenon studied lends itself to experiment and if the variables of the problem are measurable quantities, we obtain experimentally an (observed) solution of the problem that can instruct us about its parameters. Inverse optimization consists exactly of inferring the parameters from a solution. This set of problems [TAR 87] originally involved continuous variable problems, for example in geophysics: wave propagation models define the general form of the problem to be solved. The parameters of the model correspond to characteristics of the subsoil that we cannot measure directly (or only with difficulty). From explosion experiments, by interpreting the observed reaction of the ground as a result of the model, study of the inverse problem allows us to go back to the system of parameters.
In the context of combinatorial optimization, inverse problems have been considered for a little more than 10 years (see for example [BUR 92]) and have given rise to many studies since the end of the 1990s. Given a combinatorial optimization or linear programming problem and a feasible solution, establishing a system of parameters (notably the coefficients of the objective) that makes this solution optimal does not, in general, pose a problem. In particular, making the objective identically zero is often enough to guarantee the optimality of a fixed solution. So, the problems that arise most often in this context are those of establishing, given a fixed solution and an initial system of parameters, a smallest modification of the parameters, in the sense of a chosen norm, that makes the fixed solution optimal. In general, the parameters to be modified are the coefficients of the objective. In this case, every objective vector that makes the fixed solution optimal is a feasible solution of the inverse problem; the deviation (distance) relative to the initial vector then measures the value of this feasible solution.
From the applications point of view, this problem appears, as in the previous case, in configuring models. In certain cases, the initial vector can be interpreted as a first imprecise or incomplete estimation of the parameters to be estimated and allows us to avoid trivial non-relevant solutions. In the geophysics context, an inverse shortest path problem has been introduced, for example, for predicting the movements of earthquakes [MOS 91]. Geological zones are divided into squares of a certain number of cells represented by vertices. The adjacency relations are modeled by arcs in the corresponding network and the costs of these arcs represent the propagation times. Although certain transmission times are known, the precise values are difficult to obtain. By monitoring an earthquake and the arrival time of the seismic disturbances at different sensors, and assuming that the waves take the shortest paths, the problem that consists of refining the estimations of the transmission times between cells corresponds to an inverse shortest path problem.
Let us also mention the example of localizing a center that we study in section 17.4.2. In its normal form, this problem consists of finding a location for an operations center in a network that minimizes its greatest distance to the other vertices of the network (and therefore the maximum intervention time). For example, this problem typically arises for “ideally” locating a hospital or a fire station. The associated inverse problem consists of modifying as little as possible the distances of the network (or even its structure) in such a way that a location given in advance becomes an optimal center. Let us imagine that the hospital in a town is no longer the optimal center of the network, because of successive changes to the urban fabric; in the context of a renovation plan for the town, an alternative to building a new hospital is finding the minimal investment that allows the existing hospital to become the optimal center once more.
Another class of potential applications concerns pricing or incentive problems: how can we modify the costs or the structure of a network in such a way that the flow of travelers follows an ideal solution (from the security, environmental, etc. point of view) or at least comes close to it? Let us lastly mention transition management: a system governed by an optimization problem has been using the same solution for a long time. If changes to this system are imposed by decisions that are exogenous to the system, the problem is that of establishing minimal modifications of the system that make the new solution optimal.
Another natural question consists of requiring not that the new solution be optimal, but at least that it be no more expensive in the new configuration than the previous solution was in the old configuration. We then define a variant of inverse problems for which we fix the optimal value and not a solution. These different fields of application have inspired many generalizations of inverse problems. In certain cases, the parameters to be modified concern the constraints rather than the objective; we will see an example of this concerning a maximum flow problem. In many situations, it proves useful to impose constraints that limit the possibilities of changing the parameters; we systematically include this possibility in the formalism defined at the end of this introduction. In this chapter we consider in particular inverse problems for which the parameters can only take integer values, or even just the values 0 and 1. This question has been considered in [CHU 08a, CHU 08b, CHU 10]. This seems to be of interest notably for modeling situations that concern not the definition of new parameters of the system but rather a new structure. Finally, we evoke other generalizations that open up large areas to be explored.
The set of inverse problems in combinatorial optimization is relatively young and has mostly been studied, up until now, for specific problems. The first question that arises, for the study of a new problem from this perspective, is to characterize its difficulty. This is why we have chosen to divide this chapter into two parts: firstly, we evoke polynomial inverse problems and discuss the solution methods associated with them, and, secondly, we discuss various hard inverse problems. This presentation also allows us to tackle the general question of comparing, from an algorithmic complexity point of view, the initial problem and its inverse versions.
The aim of this chapter is not to reference all the results currently known, but rather to show a wide sample of results. We have selected relatively simple results that illustrate the different kinds of known problems, results, and techniques. We also pay close attention to new fields of study that seem to us to be promising.
Let there be an optimization problem πC, each of whose instances I = (c, X, f), where c ∈ C ⊆ ℝℓ, X ⊆ ℝm and f: ℝℓ × ℝm → ℝ, consists of solving min{f(c, x) | x ∈ X}. The vector x represents the variables of πC and the set X describes the area of the feasible solutions. Lastly, the function f associates a value with each solution and each parameter; it is commonly called the objective function. Under general hypotheses, an optimal solution x* of πC exists for each instance; this is notably the case when X is a compact subset of ℝm and f is a continuous function. Often, in the combinatorial optimization context, the set X is a finite set for each instance I and an optimal solution exists for every objective function.
The vector c models the parameters of the instance that will be the variables of the inverse problem. Often, ℓ = m and c corresponds to the coefficients of the objective. The set C expresses the constraints on these parameters. We indicate by ||·||p the Lp norm: for two vectors x = (x1,…, xm) and y = (y1,…, ym), their distance under this norm is defined by:

||x − y||p = (Σi=1,…,m |xi − yi|p)1/p for 1 ≤ p < ∞

||x − y||∞ = maxi=1,…,m |xi − yi| for p = ∞.
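The Lp distances just defined can be computed directly; a minimal Python sketch (the function name `lp_distance` is ours) covering both the finite-p and the p = ∞ cases:

```python
import math

def lp_distance(x, y, p):
    """L_p distance between two vectors; p is a real number >= 1 or math.inf."""
    diffs = [abs(xi - yi) for xi, yi in zip(x, y)]
    if p == math.inf:
        return max(diffs)  # the L_infinity (Chebyshev) distance
    return sum(d ** p for d in diffs) ** (1 / p)
```

For instance, for x = (1, 2) and y = (4, 6), the L1, L2 and L∞ distances are 7, 5 and 4 respectively.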
Given x* ∈ X, the inverse context consists of knowing whether a vector of parameters c* ∈ C exists such that x* is an optimal solution of πC. Under these general hypotheses, we see that such a vector c* exists for each x* ∈ X. We then look more closely at finding the best possible vectors c*, that is, those that differ as little as possible from c; this is precisely the object of inverse combinatorial optimization problems. Formally, the inverse problem associated with πC, expressed as INVπC, is defined relative to the norm Lp by: given an instance I = (c, X, f) of πC and a solution x* ∈ X, establish a vector of parameters c* ∈ C that minimizes the quantity ||c* − c||p such that x* is an optimal solution of πC for the instance I* = (c*, X, f) (that is, f(c*, x*) = min{f(c*, x) | x ∈ X}).
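When X is finite and given explicitly, checking whether the initial parameters already solve the inverse problem at cost 0 (that is, whether x* is already optimal for c) is a simple comparison. A minimal sketch, with illustrative names of our own:

```python
def already_optimal(c, x_star, X, f):
    """Return True iff x* is an optimal solution of min{f(c, x) | x in X},
    i.e. iff c* = c is an optimal solution (of value 0) of the inverse problem."""
    return f(c, x_star) <= min(f(c, x) for x in X)

# example with a linear objective f(c, x) = <c, x> on a two-point set X
dot = lambda c, x: sum(ci * xi for ci, xi in zip(c, x))
X = [(1, 0), (0, 1)]
```

With c = (2, 1), the solution (0, 1) is optimal while (1, 0) is not, so only the former makes the check succeed.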
The choice of the norm used will often depend on the context of the underlying application. For example, for the problem described in the introduction, the authors of [BUR 92] model and discuss the inverse shortest path problem under the norm L2, which is used in physics in energy dispersion problems. Other authors examine this problem under other norms, for example [AHU 01] for the L1 norm or [ZHA 99] for the L∞ norm. In order to keep things simple and concise, we will only present results under the L1 norm in this chapter. So, without any information to the contrary, it should be understood that the study concerns an inverse combinatorial optimization problem under the L1 norm.
According to a more common terminology, the term combinatorial optimization covers mainly discrete (or even generally finite discrete) optimization problems, as well as linear problems for which the set of basic solutions makes up a discrete underlying structure. The problem described here could go well beyond this context; nevertheless, to remain consistent with the common terminology, the term inverse combinatorial optimization systematically refers to the context that we have just described (INVπC problem), which stands out from other inverse problems currently studied in numerical analysis. In this chapter, to simplify the vocabulary when there is no possible ambiguity, the term inverse problem will systematically refer to inverse combinatorial optimization.
Lastly, the dependence of π relative to C will often be omitted; however, for the inverse problem, we will mention it systematically since C represents the constraints of the variables of the inverse problem. If C is not specified, the INVπ problem refers generically to the set of inverse problems associated with the various conceivable sets C. By convention, when C = A1 ×…× An, and if all the sets Ai are equipotent, the constraints are said to be balanced (k-balanced if the cardinality of the Ai is finite and equal to k, k-valued if, moreover, A1 =…= An). Lastly, when A1 =…= An, we talk about continuous, continuous and positive, integer, and bounded variable inverse problems, respectively, when A1 = ℝ, A1 = ℝ+, A1 = ℤ, and A1 = [a; b], a ≤ b.
COMMENT 17.1.– This presentation of the inverse combinatorial problem suggests that only the parameters of the objective are to be modified. This is effectively the most common situation, which we have chosen to highlight. Nevertheless, for certain problems, it is natural to take constraint parameters as variables of the inverse problem; we illustrate this using the inverse maximum flow problem. All the definitions can immediately be adapted to the case where the parameters to be modified concern the constraints, or conjointly the constraints and the objective. The principal difference when the variables of the inverse problem concern constraints is that the fixed solution x0 is not feasible for certain values of the parameters and, notably, may not be feasible for the initial problem. In this case, even the existence of solutions of the inverse problem has to be studied case by case.
In the literature, two extensions of inverse optimization problems are presented: partial inverse and evaluative inverse optimization problems1. In the first case, a partial solution x*j, for j ∈ J ⊆ {1,…, m}, is given instead of the complete solution x*; in other words, only certain components of the vector are imposed, rather than the whole vector. We therefore seek to change the parameters c* ∈ C as little as possible in such a way that an optimal solution x̂ of the instance I* = (c*, X, f) exists that satisfies x̂j = x*j for j ∈ J. For example, for the partial inverse minimum weight spanning tree problem, given an instance I = (G, d) and a forest E0, where G = (V, E) is a connected graph and d is a distance function from E into ℝ+, we seek to find a weight function d* for which the quantity Σe∈E |d*(e) − d(e)| is minimum among the distance functions d* such that E0 is included in a minimum weight spanning tree T* of (G, d*). In [CAI 97], various studies on partial inverse problems have been proposed. An evaluative inverse problem does not require the solution to become optimal, but to equal a given value. Therefore, given an instance I = (c, X, f) of πC and a value val*, we seek to change the parameters c* as little as possible, in such a way that an optimal solution x* exists on the instance I* = (c*, X, f) that satisfies f(c*, x*) = val*. Several results concerning this problem have been presented in [ZHA 00]; see also [CHU 10].
The notations and definitions relevant to graph theory used in this chapter are those from [BER 73]; we refer the reader to this work for more details. We also adopt the notation G⃗ = (V, E⃗) to distinguish the case of a directed graph. An arc will then be referred to by (i, j).
In this section, we propose examples of inverse problems or classes of inverse problems known to be polynomial. The methods used up until now to solve inverse combinatorial problems can be classed into two groups:
1) Methods based on linear programming are still in the majority, at least for the case of the L1 and L∞ norms. After establishing a linear formulation of the inverse problem, the solutions presented in the literature use the revised simplex method [AHU 01, ZHA 96a, ZHA 99], the ellipsoid method [LIU 03, YAN 99], or column generation methods [ZHA 96b, ZHA 97, HU 98]. Even if the use of these methods for different inverse problems does not always lead to a polynomial algorithm, we mention them here because they form the basis of most of the currently known proofs of polynomial cases.
2) Combinatorial algorithms have been developed for certain problems. Sometimes this concerns algorithms that result from an explicit solution using linear programming [AHU 01, HU 98, ZHA 96b, ZHA 97]. We present solutions of this kind in section 17.3.1 for a shortest path problem and a cut problem. In other cases, direct combinatorial methods have been devised [ALF 04, YAN 97]; we evoke some of these in sections 17.3.2 and 17.3.4.
For a complete map of the methods used for each problem, see [HEU 04].
Sections 17.3.1 and 17.3.3 give two canonical situations of inverse problems solved by linear programming. In section 17.3.1, we explain why the inverse versions of linear programs are linear programs, by describing a solution and applying it to a shortest path problem as well as to a cut problem. Section 17.3.3 describes a relatively large class of combinatorial problems (that cannot be directly formulated by a linear program), for which an inverse version for the L1 or L∞ norm can be formulated as a linear program. For this class, the inverse problem is polynomial whenever the initial problem is.
After these results based on linear programming, in section 17.3.2, we propose a combinatorial procedure for an inverse maximum flow problem of a special nature, since this involves changing the constraints and not the objective. For different problems, we consider integer variable inverse versions that open up new fields of research. To conclude, in section 17.3.4, a polynomial case of the inverse 0, 1 variable matching problem, obtained by a flow algorithm, is presented. In section 17.4, we show that in the general case, this last problem is NP-hard.
In the linear programming case, the vector c ∈ ℝm and f is bilinear: f(c, x) = 〈c, x〉, where 〈x, y〉 refers to the scalar product of x and y, and X is defined by n linear constraints. The set C can be any polyhedron of ℝm, but in most natural cases this is ℝ+m (positive weights), [a, b]m, or even ℝm when no restriction is imposed.
Our aim being to present the main ideas using simple and meaningful examples, we will restrict ourselves to C = ℝm. Let us consider a linear problem LP for which each instance is formulated in standard form in the following way:

LP(c): min{〈c, x〉 | Ax ≥ b, x ≥ 0}   [17.1]

where c ∈ ℝm, b ∈ ℝn, and A is a matrix with n rows and m columns with real coefficients. Of course, the instance is determined by c, b and A; nevertheless, we only highlight the dependency on c in order to be able to refer to a program LP(c′) obtained from LP(c) by replacing c with c′.
LP has x as a variable; to consider its inverse version, we set c as the vector of initial parameters and fix x* as the target solution. This is about establishing an objective vector c* that minimizes ||c* − c||1 such that x* is an optimal solution of LP(c*). To write the inverse problem more formally, we use c̃ to express the variable of this inverse problem, c* then being an optimal solution. Each instance of the inverse version of LP is written:

INVLPℝm(c, x*): min{||c̃ − c||1 | x* is an optimal solution of LP(c̃)}
COMMENT 17.2.– Before taking the analysis of this problem any further, let us note that, as for every inverse problem, an implicit hypothesis is that for every c̃ ∈ C, LP(c̃) allows x* as a feasible solution.

Note that 0 is obviously a feasible solution of the inverse problem since x* is an optimal solution of the problem LP(0):

min{〈0, x〉 | Ax ≥ b, x ≥ 0}

We will see in theorem 17.1 that a solution always exists for the inverse problem.

Note that LP(c̃) has an optimal solution if and only if its dual DL(c̃) allows feasible solutions. DL(c̃) is formulated using:

DL(c̃): max{〈b, y〉 | tAy ≤ c̃, y ≥ 0}

All we need to do therefore is to restrict the search for c̃ to the set SLP = {c̃ ∈ ℝm | DL(c̃) allows a feasible solution}. Of course 0 ∈ SLP.
The problem INVLPℝm(c, x*) can be formulated as a linear program. One possibility is to express the optimality of x* for the LP(c̃) problem by primal-dual constraints. In order to write the complementarity constraints explicitly, we define the following notations:

I = {i ∈ {1,…, n} | Aix* = bi}, J = {j ∈ {1,…, m} | x*j > 0}, K = {j ∈ {1,…, m} | x*j = 0}

In the same way, we will use AI to refer to the matrix extracted from A by selecting the rows in I. We express as π the vector of the dual variables associated with the constraints [17.3.a] (the constraints Ax ≥ b); given the complementarity constraints, the non-zero components of π have their indexes in I, so only the vector πI remains to be established.

x* is then an optimal solution of LP(c̃) if and only if vectors πI ≥ 0 and λK ≥ 0 exist such that:

(tAIπI)J = c̃J   [17.4.a]
(tAIπI)K + λK = c̃K   [17.4.b]

where λK allows us to express condition [17.4.b] using an equality.
The linearization of the objective of the problem INVLPℝm is classic: we introduce 2m variables αi, βi, i = 1,…, m, all non-negative, in such a way that c̃ = c + α − β. INVLPℝm is then expressed:

min{Σj=1,…,m (αj + βj) | (tAIπI)J − αJ + βJ = cJ [17.4.a′], (tAIπI)K + λK − αK + βK = cK [17.4.b′], πI ≥ 0, λK ≥ 0, α ≥ 0, β ≥ 0}
In absolute terms, this formulation is sufficient to solve the inverse problem INVLPm, at least in the case with rational coefficients. Nevertheless, study of this program allows us to obtain an analytical solution and deduce explicit solutions for various examples of combinatorial problems that can be formulated as linear programs. We will see two examples of this for a shortest path problem and a minimum capacity cut problem.
Since only the variables α and β affect the objective, and as each index j only appears in one of the constraints [17.4.a′] and [17.4.b′], at the optimum we obligatorily have βK = 0, which allows us to express the equivalent problem:

min{Σj∈J (αj + βj) + Σj∈K αj | (tAIπI)J − αJ + βJ = cJ, (tAIπI)K + λK − αK = cK, πI ≥ 0, λK ≥ 0, α ≥ 0, β ≥ 0}
In this problem, the different variables do not play the same role. In order to present an analytical solution, the idea is to consider the dual D″ of this program, then to establish each variable of INVLPℝm as a dual variable of a constraint of D″. By expressing as zj, j = 1,…, m, the variables of D″, we obtain (for each constraint, we indicate in brackets the associated variable of INVLPℝm):

D″: max{〈c, z〉 | AIz ≤ 0 (πI), −zJ ≤ 1J (αJ), zJ ≤ 1J (βJ), −zK ≤ 1K (αK), zK ≤ 0 (λK)}

This formulation is similar to the initial program. To get even closer to it, we make the change of variable y = x* − z. Note that such a change of variable does not affect the dual. We then obtain, using AIx* = bI and the definitions of the sets J, K:

DINVLP: min{〈c, y〉 | AIy ≥ bI, x*J − 1J ≤ yJ ≤ x*J + 1J, 0 ≤ yK ≤ 1K}
Note that DINVLP is similar to the initial problem; the difference is that only the constraints of LP saturated by x* appear in DINVLP and that the latter is a bounded variable problem (constraints associated with αJ and βJ). Now, it allows x* as a feasible solution and therefore systematically has an optimal solution; the same goes therefore for its dual.
We are now in a position to propose a solution to the problem INVLPℝm. Let y* be a solution of DINVLP and π* be the value of the dual variable associated with the constraint AIy ≥ bI in an optimal solution of the dual of DINVLP. Expressing by μ and ν the dual variables associated with the lower and upper bound constraints of DINVLP, we have:

(tAIπ)j + μj − νj = cj, j ∈ J   [17.5.a]
(tAIπ)j + μj − νj = cj, j ∈ K   [17.5.b]
π ≥ 0, μ ≥ 0, ν ≥ 0   [17.5]

By stating cπj = cj − (tAIπ*)j for every j, simply reading constraints [17.5.a] and [17.5.b], and the objective of the dual of DINVLP, allows us to establish the optimal values of αj and βj according to the sign of cπj.

Table 17.1. Discussion of equation [17.5] according to the sign of cπj

           j ∈ J                      j ∈ K
cπj > 0    αj = 0, βj = cπj           αj = 0, βj = 0
cπj = 0    αj = 0, βj = 0             αj = 0, βj = 0
cπj < 0    αj = −cπj, βj = 0          αj = −cπj, βj = 0

We therefore deduce the following values for the optimal solution of INVLPℝm under the L1 norm:

c*j = cj − cπj if j ∈ J, or if j ∈ K and cπj < 0
c*j = cj otherwise
We summarize this discussion in theorem 17.1.
THEOREM 17.1.– [AHU 01] Let LP be a problem for which each instance is written

min{〈c, x〉 | Ax ≥ b, x ≥ 0}

Let x* be a feasible solution; we express I = {i | Aix* = bi}, J = {j | x*j > 0} and K = {j | x*j = 0}. So the problem INVLPℝm(c, x*) allows a unique solution. To determine the solution, we consider the problem:

DINVLP: min{〈c, y〉 | AIy ≥ bI, x*J − 1J ≤ yJ ≤ x*J + 1J, 0 ≤ yK ≤ 1K}

Let y* be an optimal solution of DINVLP and π* be the value of the dual variable associated with the constraint AIy ≥ bI in the dual of DINVLP, and let cπ = c − tAIπ*. The unique solution of INVLPℝm(c, x*) is given by:

c*j = cj − cπj if j ∈ J, or if j ∈ K and cπj < 0
c*j = cj otherwise
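Once the reduced-cost vector cπ is known, applying the case analysis of theorem 17.1 is immediate arithmetic. A sketch with names of our own (J and K are given as 0-indexed sets):

```python
def inverse_objective(c, c_pi, J, K):
    """c*_j = c_j - c^pi_j if j in J, or if j in K with c^pi_j < 0;
    c*_j = c_j otherwise (theorem 17.1, L1 norm)."""
    c_star = list(c)
    for j in range(len(c)):
        if j in J or (j in K and c_pi[j] < 0):
            c_star[j] = c[j] - c_pi[j]
    return c_star
```

For example, with c = (4, 3, 5, 7), cπ = (1, −2, −1, 2), J = {0, 1} and K = {2, 3}, the two indexes of J and the index 2 of K (negative reduced cost) are adjusted while index 3 is left unchanged.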
Let us make a few observations before using this result for various inverse problems.
COMMENT 17.3.– If we examine the constrained versions INVLPC, where C expresses range constraints on the coefficients of c̃, the inverse problem also has a linear formulation. The supplementary constraints for c̃ are transformed into constraints on α and β in the formulation of INVLP′.
If the linear problem for which we have studied an inverse version is not in the standard form [17.1], we also obtain a linear formulation of the inverse problem. Let us note nevertheless that as long as the equivalent version in standard form has the same variables and the same objective, the above result can be directly applied to this standard form. This is particularly the case for a problem in canonical form. However, if the initial problem does not have a sign constraint, the equivalent version in standard form has more variables (each variable of any sign is broken down into two positive variables), and therefore has an objective vector of greater dimension. The inverse version of this standard form is not equivalent to the initial inverse problem since the coefficients in the objective of the variables derived from the same variable of any sign can vary independently.
It is understood that if the coefficients of c̃ must be integer or of Boolean type then the inverse problem becomes an integer variable linear program. We will see that in certain cases such constraints can be taken into account in polynomial time. This is notably the case for a shortest path problem and a minimum capacity cut problem. However, for other problems, this restriction makes the inverse problem hard.
COMMENT 17.4.– Let us assume that LP is the relaxation of a 0–1 linear program and that x* is a 0–1 solution. In this case, LP is of the form:

min{〈c, x〉 | Ax ≥ b, −x ≥ −1, x ≥ 0}

By writing the constraints x ≤ 1 in the form −x ≥ −1, we apply the previous method. J corresponds to the indexes for which x*j = 1, that is, for which the constraint x ≤ 1 is saturated. The problem DINVLP is therefore:

min{〈c, y〉 | AIy ≥ bI, yJ ≤ 1J, 0 ≤ yJ ≤ 2J, 0 ≤ yK ≤ 1K}

where 1J and 2J correspond to the vectors of dimension |J| whose coordinates are respectively all equal to 1 and 2. The constraint yJ ≤ 2J is redundant and can therefore be removed. If all the constraints Ax ≥ b are saturated by x*, this problem corresponds exactly to LP.
Let G⃗ = (V, E⃗, c) be a directed arc-valued graph; we express by cij the value (or cost) of the arc (i, j) and m = |E⃗|. Let s and t be two specified vertices of V. We examine an inverse version of the problem of a path from s to t of minimum value. We therefore consider a path from s to t, Ps,t, seen as a set of arcs. We use x* to refer to the characteristic vector associated with this path (x*ij = 1 if and only if (i, j) ∈ Ps,t). The inverse problem associated with this path consists of modifying the vector of costs to make Ps,t optimal. More formally, in the case of the L1 norm, the problem is expressed:

INVMINPATHℝm(c, Ps,t): min{Σ(i,j)∈E⃗ |c̃ij − cij| | Ps,t is a path of minimum value from s to t in (G⃗, c̃)}
Note that the inverse context imposes a priori the existence of a path from s to t. We will further assume that no path from s to t crosses a directed cycle of negative value. In these conditions, the minimum cost path problem from s to t has solutions and can therefore be formulated using the following linear program:

min{〈c, x〉 | Σj:(i,j)∈E⃗ xij − Σj:(j,i)∈E⃗ xji = bi ∀i ∈ V, x ≥ 0}   [17.7]

where bs = 1, bt = −1 and bi = 0 for every other vertex i.
A classical result is that if there is no directed cycle of negative value, this problem has optimal solutions with 0, 1 components that correspond to an optimal path. This can be shown by considering the dual problem, that is to find the potential of optimal paths (assigning to each vertex x the value of an optimal path from s to x) from s that satisfy the usual optimality conditions for minimum value path problems. By assigning to the arcs of an optimal path (that exists under the hypotheses we have made) the value 1 and to the other arcs the value 0, the arcs of this path saturate the dual constraints: the difference of potential is exactly equal to the cost of the arc. All the constraints of the primal [17.7] being saturated, the complementarity constraints are satisfied, which proves the optimality of the proposed solution.
A direct corollary is that we can add bound constraints on the variables without changing the value of the problem:
COMMENT 17.5.– The problem with bounded variables obtained by adding the constraints xij ≤ dij, with dij ≥ 1, in the program [17.7] also allows an optimal path as a solution.
To apply theorem 17.1, each equality constraint is considered as the conjunction of two inequalities. Let us note that every feasible solution saturates all the constraints, therefore I = {1,…, n}. Moreover, the set J corresponds to the arcs of the path Ps,t. The problem DINVLP therefore corresponds to the problem [17.7] with bound constraints on the variables: xij ≤ 1 (for (i, j) ∉ Ps,t) or xij ≤ 2 (for (i, j) ∈ Ps,t). According to comment 17.5, this is precisely the problem of the path of minimum value from s to t. We express by π+ and π− the dual variables associated with the constraints Ax ≥ (1, −1, 0) and −Ax ≥ (−1, 1, 0), respectively. An optimal solution of the dual is such that π* = (π+)* − (π−)* corresponds to the potential of optimal paths from s. For an arc (i, j), we have cπij = cij − (π*j − π*i). For every arc, this value is positive or zero, so, from theorem 17.1, we deduce theorem 17.2:
THEOREM 17.2.– [AHU 01] Let G⃗ = (V, E⃗, c) be a directed arc-valued graph, s and t two vertices, and Ps,t a path from s to t. Let |V| = n and |E⃗| = m. So a solution of the inverse problem INVMINPATHℝm associated with c and Ps,t for the L1 norm can be established in the following way: we establish π*, the optimal potential of paths from s, and state:

c*ij = π*j − π*i if (i, j) ∈ Ps,t
c*ij = cij otherwise
To justify this result fully, note that the transformation of the costs does not create a directed cycle of negative value. Indeed, the potential π* satisfies, for every arc (i, j), c*ij ≥ π*j − π*i. By summing these inequalities along a directed cycle, we show that its cost after transformation is non-negative.
Finally, note that this solution of the inverse problem does not modify the value of the optimal potentials from s.
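Theorem 17.2 translates into a few lines of code: compute the optimal potentials from s (here with Bellman-Ford, since negative costs are allowed as long as no negative directed cycle exists), then lower the cost of each arc of Ps,t to its difference of potential. A minimal Python sketch, with names of our own:

```python
def inverse_shortest_path(n, costs, s, path_arcs):
    """Theorem 17.2: make path_arcs a minimum value path from s.
    costs: dict (i, j) -> cost; assumes no negative directed cycle."""
    # Bellman-Ford potentials pi[x] = value of an optimal path from s to x
    inf = float('inf')
    pi = [inf] * n
    pi[s] = 0
    for _ in range(n - 1):
        for (i, j), c in costs.items():
            if pi[i] + c < pi[j]:
                pi[j] = pi[i] + c
    # lower the cost of each arc of the path to its difference of potential
    c_star = dict(costs)
    for (i, j) in path_arcs:
        c_star[(i, j)] = pi[j] - pi[i]
    return pi, c_star

# example: the imposed path 0 -> 2 -> 3 must become optimal
example_arcs = {(0, 1): 1, (1, 3): 1, (0, 2): 2, (2, 3): 3}
```

On this example the potentials are (0, 1, 2, 2); the arc (2, 3) drops to cost 0 while (0, 2) keeps its cost, and both s-t paths then have value 2.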
Note, moreover, that if all the costs of the arcs are integers, then the same holds for the potential of optimal paths and for the solution established by theorem 17.2, which leads to corollary 17.1:

COROLLARY 17.1.– If the costs of the arcs are integers (of any sign) then the problem INVMINPATHℝm allows an integer value optimal solution and therefore the problems INVMINPATHℝm and INVMINPATHℤm are equivalent.
A second classical example of the use of theorem 17.1 is the solution of an inverse minimum capacity cut problem. For this example, we choose a non-polynomial formulation of the cut problem in a transport network. Given a directed graph G⃗ = (V, E⃗) and two vertices s and t, we assume that each arc (i, j) has a capacity cij > 0. A cut separating s and t is a set of vertices K ⊂ V that contains s and not t. We associate with K the outgoing cocycle (see [GON 84]), Ω+(K) = {(i, j) ∈ E⃗ | i ∈ K and j ∉ K}. The capacity of K is the sum of the capacities of the arcs of the associated cocycle (the lower capacities are zero in this example). The minimum capacity cut (separating s and t) problem, expressed as MINCUT, consists of establishing a cut that separates s and t of minimum capacity. For the inverse version, we take a cut K* that separates s and t and the idea is to modify the capacities in order to make this cut of minimum capacity. Formally, the inverse minimum capacity cut problem, expressed as INVMINCUT, consists of finding a function of capacity c* for which:

(i) the subset of vertices K* given in advance, with s ∈ K* and t ∉ K*, is a minimum capacity cut of the network (G⃗, c*);

(ii) the quantity Σ(i,j)∈E⃗ |c*ij − cij| is minimum.
To formulate INVMINCUTℝ+m using linear programming, we consider C(G⃗, s, t), the set of paths from s to t in G⃗. We then have the following formulation:

min{〈c, x〉 | Σ(i,j)∈P xij ≥ 1 ∀P ∈ C(G⃗, s, t), x ≥ 0}   [17.8]
For every feasible solution x with 0, 1 components, we express E⃗(x) = {(i, j) | xij = 1}. If x is minimal (that is, ∀y ≤ x, y ≠ x, y is not feasible), a cut K exists that separates s and t such that E⃗(x) = Ω+(K). Indeed, all we have to do is to take for K the set of vertices that are accessible from s by a path that does not include any arc from E⃗(x). In this case, the capacity of K satisfies c(K) = 〈c, x〉. Conversely, every cut that separates s and t corresponds to a minimal feasible solution with 0, 1 components of the program [17.8].
As in the case of the shortest path problem, the linear program [17.8] allows an optimal solution with 0, 1 components that corresponds to a minimum capacity cut.
To achieve this, we have only to consider the dual of the problem [17.8], which is expressed:

max{ΣP∈C(G⃗,s,t) yP | ΣP:(i,j)∈P yP ≤ cij ∀(i, j) ∈ E⃗, y ≥ 0}   [17.9]
From a maximum compatible flow ϕ* from s to t (non-negative and satisfying the capacities on each arc), it is possible to decompose ϕ* as the sum of positive flows ϕ* = ϕ1 +…+ ϕk, where each ϕi is zero apart from on a path Pi from s to t. To construct such a decomposition, we can, for example, observe that if ϕ* is non-zero, there necessarily exists a path P1 from s to t that carries, on each of its arcs, a positive flux. We take for ϕ1 the flow that carries along P1 the value v(P1), the smallest flux of the arcs of P1. We then just need to reiterate this process on ϕ* − ϕ1. Another way is to use Ford and Fulkerson's algorithm, by observing that for each increase along a chain, if the current flow decomposes into ϕ1 +…+ ϕl, the new flow decomposes into ϕ1 +…+ ϕl + ϕl+1. We then define the dual solution yPi = v(ϕi), i = 1,…, k, and yP = 0 for every other path. Since ϕ* is compatible, this solution is feasible for the linear program [17.9] and is of value v(ϕ*). Consequently, the value of the program [17.8] is v(ϕ*), that is, the minimum capacity of a cut. This proves that a minimum capacity cut corresponds to an optimal solution of the linear program [17.8] with 0, 1 components. We can therefore add constraints of the type xij ≤ lij, with lij ≥ 1, to the problem [17.8] without changing the optimal value; the problem also has an optimal solution with 0, 1 components associated with a minimum capacity cut.
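The first decomposition procedure described above (repeatedly extract a flow-carrying path and subtract its smallest flux) can be sketched as follows; we assume the given flow is conservative and carries no flow around a directed cycle, and the names are ours:

```python
def decompose_flow(flow, s, t, eps=1e-9):
    """Decompose an s-t flow (dict arc -> flux) into a list of (path, value)
    pairs, each path being a list of arcs from s to t."""
    f = dict(flow)
    paths = []
    while True:
        # search for an s-t path using only arcs with positive remaining flux
        parent, stack = {s: None}, [s]
        while stack and t not in parent:
            u = stack.pop()
            for (i, j), v in f.items():
                if i == u and v > eps and j not in parent:
                    parent[j] = (i, j)
                    stack.append(j)
        if t not in parent:
            break
        # recover the arcs of the path and its value v(P)
        path, u = [], t
        while parent[u] is not None:
            path.append(parent[u])
            u = parent[u][0]
        path.reverse()
        v = min(f[a] for a in path)
        paths.append((path, v))
        for a in path:
            f[a] -= v
    return paths

# example: a flow of value 3 on a 4-vertex network (vertices 0..3)
example_flow = {(0, 1): 2, (1, 3): 2, (0, 2): 1, (2, 3): 1}
```

On this example the procedure returns two path flows whose values sum to the flow value 3.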
Let us now consider the inverse problem INVMINCUTℝ+m associated with a cut K* that separates s and t. Let x* be the characteristic vector of Ω+(K*). Every path from s to t inevitably crosses Ω+(K*) at least once. The paths from s to t that cross Ω+(K*) exactly once correspond to the paths from s to t in the graph G⃗* obtained from G⃗ by removing the arcs coming into K* (of type (i, j), i ∈ V \ K*, j ∈ K*). Thus, the problem DINVLP associated with K* is:

min{〈c, y〉 | Σ(i,j)∈P yij ≥ 1 ∀P ∈ C(G⃗*, s, t), 0 ≤ yij ≤ 2 for (i, j) ∈ Ω+(K*), 0 ≤ yij ≤ 1 otherwise}
According to the previous discussion, there is a Boolean optimal solution of this problem that corresponds to a minimum capacity cut separating s and t in the graph G⃗*. An optimal solution of the dual is given by a decomposition of a flow ϕ* of maximum value from s to t in G⃗*. Let π denote the dual variables associated with the path constraints of DINVLP; the quantity cπij associated with π for the arc (i, j) is exactly cij − ϕ*ij, where ϕ*ij is the flow on the arc (i, j). The flow being compatible, cij − ϕ*ij ≥ 0. From theorem 17.1, we deduce theorem 17.3:
THEOREM 17.3.– [AHU 01] Let there be a transport network, s and t two vertices, and K* ⊂ V a cut that separates s from t. Then a solution of the inverse problem INVMINCUTℝ+m associated with c and K* for the L1 norm can be established in the following way:
1) We construct the graph obtained from the network by removing the arcs coming into K*.
2) We establish a flow of maximum value from s to t in this graph.
3)
Note that if all the capacities are integer, then Ford and Fulkerson’s algorithm establishes a maximum integral flow ϕ* and, in this case, the solution established by theorem 17.3 is integral, which leads to corollary 17.2:
COROLLARY 17.2.– If all the arcs have an integer cost, INVMINCUTℝ+m has an optimal solution with integral components established by theorem 17.3; there is therefore an equivalence between INVMINCUTℝ+m and INVMINCUTℤ+m.
The procedure proposed in section 17.3.1 is directly applicable to the minimum cost flow problem from s to t in a transport network. In its standard version, this problem can indeed be formulated by a linear program of the type described below, where b(s) ≥ 0, b(t) = −b(s) and b(i) = 0 at any other node; pij is the unit price of flow on the arc (i, j), and cij is the capacity of the arc (i, j). The associated inverse problem consists of adjusting the prices to make a fixed flow optimal. This problem comes fully within the scope of theorem 17.1; the solution is very close to that which we presented for the minimum value path problem (see [AHU 01] and [HEU 04] for detailed references), which is a specific case of minimum cost flow. Similarly, several generalizations can be introduced, notably taking lower capacities into account.
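For reference, the minimum cost flow problem referred to here is classically formulated, with the notation of the surrounding sentence (pij unit prices, cij capacities, b(i) node balances), as:

```latex
\begin{aligned}
\min \quad & \sum_{(i,j) \in E} p_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{j : (i,j) \in E} x_{ij} - \sum_{j : (j,i) \in E} x_{ji} = b(i) \qquad \forall i \in V, \\
& 0 \le x_{ij} \le c_{ij} \qquad \forall (i,j) \in E,
\end{aligned}
```

with b(s) ≥ 0, b(t) = −b(s) and b(i) = 0 elsewhere.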
In the case of the maximum flow problem MAXFLOW, the inverse problem is different inasmuch as it is not a weighted problem. This problem has been studied in [YAN 97]; here we propose a combinatorial analysis in the case of a transport network; the general case can be dealt with in an analogous way. Given a transport network with an origin vertex s and a destination vertex t, and a flow ϕ* from s to t, we need to modify the capacities of the arcs (that is, the constraints, and not the objective as in the usual inverse context) to make ϕ* optimal, that is, a compatible flow respecting the capacities and of maximum value. The objective consists of minimizing, in the L1 norm sense, the modification of the system of capacities. We denote by ϕ*ij the flow of the arc (i, j) and by cij > 0 its capacity, and we assume ϕ*ij ≥ 0 for every (i, j). For this type of inverse problem, relative to the constraints, there is no particular reason to assume that the fixed solution is feasible; we just assume that ϕ* is positive to remain within the scope of a transport network. For every feasible solution c′ of INVMAXFLOW, ϕ* must be a compatible flow, in such a way that for every arc (i, j) such that ϕ*ij > cij, we at least need to add ϕ*ij − cij to the capacity of (i, j). Thus, by expressing νij = max(0, ϕ*ij − cij), every feasible solution c′ of the inverse problem satisfies c′ij ≥ cij + νij. This is therefore about minimizing ||c′ − c||1 in such a way that ϕ* is optimal in the network that has capacities c′. The initial problem therefore comes down to the inverse problem for the network that has the capacities c + ν, a network for which ϕ* is feasible. Moreover, if K is a minimum capacity cut in the graph that has the capacities c* (optimal solution of the inverse problem), according to the maximum flow and minimum capacity cut theorem (see, for example, [GON 84]), the arcs (i, j) leaving K satisfy c*ij = ϕ*ij, in such a way that ||c* − (ν + c)||1 is at least the sum, over the arcs leaving K, of (c + ν)ij − ϕ*ij. By considering the graph in which every arc (i, j) carries the capacity (c + ν)ij − ϕ*ij, ||c* − (ν + c)||1 is at least equal to the capacity of K in this graph, therefore at least equal to the minimum capacity of a cut that separates s and t in it. Conversely, let us consider a minimum capacity cut K* in this graph: by stating c*ij = ϕ*ij for every arc (i, j) leaving from K* and c*ij = (c + ν)ij for every other arc, ||c* − (ν + c)||1 is exactly the capacity of K* in this graph; moreover, since ϕ* saturates all the arcs leaving from K*, ϕ* is a flow from s to t of maximum value in the network with capacities c*. The two bounds coincide, which leads to theorem 17.4:
THEOREM 17.4.– Let R be a transport network (c > 0 is the system of capacities, s the origin vertex, and t the destination vertex) and let ϕ* be a positive flow. Then a solution c* of the inverse problem for the L1 norm associated with R and ϕ* can be established in the following way:
– Construct the graph obtained by assigning to every arc (i, j) the capacity max(cij, ϕ*ij) − ϕ*ij.
– Establish K*, a minimum capacity cut that separates s and t in this graph.
– State c*ij = ϕ*ij for every arc (i, j) leaving from K*, and c*ij = max(cij, ϕ*ij) for every other arc.
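The three steps of the theorem can be sketched as follows; the function names and the dict-based arc representation are ours, and we use (c + ν)ij = max(cij, ϕ*ij) as in the discussion above:

```python
from collections import deque

def min_cut_side(cap, s, t):
    """Source side K of a minimum s-t cut, via Edmonds-Karp on capacities `cap`."""
    res = dict(cap)                                  # residual capacities
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                 # BFS for an augmenting path
            u = q.popleft()
            for (a, b), r in list(res.items()):
                if a == u and b not in parent and r > 1e-12:
                    parent[b] = u
                    q.append(b)
        if t not in parent:
            return set(parent)                       # vertices still reachable from s
        path, v = [], t
        while parent[v] is not None:                 # rebuild the path, then augment
            path.append((parent[v], v))
            v = parent[v]
        d = min(res[a] for a in path)
        for (a, b) in path:
            res[(a, b)] -= d
            res[(b, a)] = res.get((b, a), 0) + d

def inverse_max_flow(cap, phi, s, t):
    """Capacities c* making the fixed flow `phi` maximum (sketch of theorem 17.4)."""
    arcs = set(cap) | set(phi)
    lifted = {a: max(cap.get(a, 0), phi.get(a, 0)) for a in arcs}   # (c + v)_ij
    aux = {a: lifted[a] - phi.get(a, 0) for a in arcs}              # auxiliary capacities
    K = min_cut_side(aux, s, t)
    return {a: (phi.get(a, 0) if a[0] in K and a[1] not in K else lifted[a])
            for a in arcs}
```

On the arcs leaving K*, the new capacity equals the fixed flow, so ϕ* saturates a cut and is therefore maximum.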
A result of the same kind can be obtained on a general network [YAN 97]. The case of the l∞ norm is also investigated in [DEA 08], and the inverse minimum flow problem is studied in [CIU 07].
COMMENT 17.6.– Note that if c and ϕ* are of integral value, the established solution has integer components and is therefore a solution of INVMAXFLOWℤm. If ϕ* does not have integer components (but c does), the problem INVMAXFLOWℤm is still relevant. We start by stating c′ij = max(cij, ϕ*ij). We then construct the graph obtained by assigning to every arc (i, j) the capacity c′ij − ϕ*ij; we then need to establish in this graph a cut K* of minimum capacity among the cuts whose outgoing arcs all carry an integer flow. To do this, we replace every non-integer capacity with +∞. If the resulting graph has a minimum capacity cut of finite capacity, we obtain an optimal solution for INVMAXFLOWℤm by reducing to ϕ*ij the capacities c′ on the outgoing arcs of such a cut; otherwise, the problem INVMAXFLOWℤm does not have a solution.
Theorem 17.1 provides an existence result and a polynomial solution method for a large class of problems. In general, the problems of this class (linear problems) have an infinity of feasible solutions.
For most combinatorial problems, there is a finite number of feasible solutions. In the case of combinatorial problems that can be formulated as a linear program, such as the minimum value path problem or the minimum capacity cut problem, only certain feasible solutions of the linear problem are solutions of the combinatorial problem (Boolean solutions in the case of the minimum value path problem, for example, and the minimal solutions with 0, 1 components in the case of the cut). The equivalence of the two formulations consists of showing that these conditions can be imposed on the optimum.
Let us consider a combinatorial problem for which each instance has a finite number of feasible solutions and whose objective is linear. Denoting by 𝒞 the set of feasible solutions, such a problem is expressed as min{c·x : x ∈ 𝒞}, where |𝒞| is finite. Note that the majority of combinatorial problems fall within this scope. Given a feasible solution x* ∈ 𝒞, the inverse problem associated with x* and c for the L1 norm is formulated as: establish c′ minimizing ||c′ − c||1 such that x* is an optimal solution of min{c′·x : x ∈ 𝒞}.
This problem has the following linear formulation:

min Σi zi
s.t. c′·x* ≤ c′·x, ∀x ∈ 𝒞 [17.13.a]
zi ≥ c′i − ci, i = 1, …, m [17.13.b]
zi ≥ ci − c′i, i = 1, …, m [17.13.c]
[17.13]

This is a linear program since 𝒞 is finite. Note that by stating zi = |ci| and c′ = 0, we define a feasible solution of program [17.13]. Moreover, equations [17.13.b] and [17.13.c] mean that the problem is bounded below (z ≥ 0). Program [17.13] therefore always has an optimal solution.
To highlight cases where this problem is polynomial, we use the argument of equivalence between optimization and separation [GRÖ 81, KOR 02], which consists of observing that the ellipsoid method does not require the complete formulation of a linear program, but only the knowledge of a separation oracle. The ellipsoid method can be applied to linear programs with rational coefficients. This, in particular, is the case for program [17.13] if the initial objective vector is rational and if 𝒞 ⊂ ℚm, which can be assumed for almost all problems.
The associated separation problem consists, given a couple (z′, c′), of deciding whether it is feasible for the problem [17.13] and, in the negative case, of establishing (u, ν) defining a hyperplane that separates (z′, c′) from the feasible set. If constraint [17.13.b] (or [17.13.c] respectively) is not satisfied for an index i, we only need to state ui = νi = 1 and uj = νj = 0, j ≠ i. Lastly, to establish whether constraint [17.13.a] is satisfied, we only need to solve the problem PC(c′): denoting by x̂ an optimal solution, if c′·x* ≤ c′·x̂, the constraint is satisfied; otherwise, c′·x* > c′·x̂, and the constraint associated with x̂ defines a separating hyperplane. In this way, if PC(d) is polynomial for every d ∈ ℚm, the separation problem associated with problem [17.13] is polynomial, and so, by the ellipsoid method [GRÖ 81], the inverse problem [17.13] is polynomial.
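The separation oracle just described can be sketched as follows, assuming an optimization routine `solve_pc(d)` for the underlying combinatorial problem is available (all names are ours):

```python
def separate(z, c2, c, x_star, solve_pc):
    """Separation oracle for program [17.13] (a sketch; names are ours).
    `solve_pc(d)` is assumed to return an optimal solution of min over C of d.x."""
    m = len(c)
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    for i in range(m):                        # constraints [17.13.b] and [17.13.c]
        if z[i] < c2[i] - c[i] or z[i] < c[i] - c2[i]:
            return ("bound", i)               # a violated bound constraint, index i
    x_hat = solve_pc(c2)                      # one call to the optimization oracle
    if dot(c2, x_star) <= dot(c2, x_hat):
        return None                           # (z, c2) is feasible for [17.13]
    # c'.x* <= c'.x_hat is violated: x* - x_hat gives a separating hyperplane
    return ("opt", [xs - xh for xs, xh in zip(x_star, x_hat)])
```

One call to `solve_pc` per separation suffices, which is exactly why polynomiality of PC transfers to the inverse problem.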
Let us finally note that the separation problem is no more difficult if we integrate bound constraints on the coefficients of c′. The only problem that may arise is that the constrained inverse problem may have no solution, which will be detected in polynomial time.
THEOREM 17.5.– [AHU 01] Let PC be a combinatorial problem of the type min{c·x : x ∈ 𝒞}, such that |𝒞| is finite for each instance and {c} ∪ 𝒞 ⊂ ℚm. If PC is polynomial (the objective c being part of the instance and being able to take every value of ℚm), then the inverse problem associated with PC is polynomial.
COMMENT 17.7.– Conversely, if the inverse problem is polynomial, a polynomial optimality test is directly available: a feasible solution x is optimal if and only if the associated instance of the inverse problem has a value of 0. For the majority of known NP-hard combinatorial problems, it is very easy to show that such a test cannot exist if P≠NP. In section 17.4, we give two examples of the application of this argument.
COMMENT 17.8.– Let us also note that if the objective is constrained to ℤm, the resulting program is an integer variable linear program and the previous result no longer applies. We will show in section 17.4 examples for which the inverse problem becomes difficult in this case.
COMMENT 17.9.– Let us lastly note that even if a linear programming problem has an infinity of solutions, we only need to consider the problem that consists of establishing an optimal basic solution. Basic solutions are finite in number, and establishing an optimal basic solution is polynomial (for a rational program) according to Khachiyan’s theorem [KHA 79]. By considering the combinatorial problem whose feasible solutions are the fixed solution x* on the one hand and the basic solutions on the other hand, theorem 17.5 allows us to detect the polynomial character of INVLP, whatever the initial form of the linear program, inasmuch as extreme vertices exist. Note that x* may not be a basic solution.
The inverse problems in integer variables open up a vast field of combinatorial problems. We have already seen two examples of this, for a shortest path problem and a cut problem. Up until now, almost all the inverse problems studied have been problems in continuous variables: the admissible objective vectors most often range over ℝm or ℝ+m. If the usual inverse context only involves weighted problems (when we need to modify the objective), or at least uses real parameters in the expression of an instance (see the case of maximum flow), the inverse context with 0, 1 variables, for its part, is already natural when the original problem is non-weighted. It also allows us to express situations where we modify the structure of the instance to make a predetermined solution optimal. Finally, the case of objectives constrained to ℤm often appears as a restriction of an inverse problem when the coefficients of the objective correspond to discrete quantities. As seen previously, if the original problem has a linear objective and a finite number of feasible solutions, or even linear constraints, this case is formulated using an integer variable linear program.
Our last example of a polynomial case concerns the inverse Boolean maximum weight matching problem, expressed as INVMWM{0, 1}m. The continuous case INVMWMℝm is polynomial according to theorem 17.5. We show in section 17.4 (see [ALF 04]) that INVMWMℤm is NP-hard in a general graph (in fact, the INVMWM{0, 1}m problem is also NP-hard). We also show that an inverse bivalued version of another matching problem (the maximum weight perfect matching) remains hard in bipartite graphs. In this section, we show that INVMWM{0, 1}m becomes polynomial in bipartite graphs when the edges of the fixed matching M* are all valued 1. The proof is based on flow techniques and is not deduced directly from theorem 17.1, as in the case of the shortest path and cut problems.
In a graph G = (V, E) with n vertices, a matching is a set of pairwise non-adjacent edges M ⊆ E; the vertices incident to M are said to be saturated by M, while the others are said to be unsaturated. A matching is maximal when no matching strictly containing it exists. It is maximum if it is of maximum cardinality (of maximum value in the weighted case). Establishing a maximum matching in a graph can be done in O(√n · |E|) time [MIC 80] (in O(|V|³) for the weighted problem, see [GON 84]), and uses the principle of increasing alternating chains. Let us recall that an increasing alternating chain relative to a matching M is a chain that alternates edges of E \ M and of M, and whose two extremity vertices are unsaturated by M. A matching is maximum if and only if no increasing alternating chain exists [BER 73]. For the weighted case (MWM), the notion of increasing alternating chain generalizes and the problem remains polynomial; we come back to this in section 17.4.4.
Given a network (G = (V, E), w) and a matching M* of G, with |V| = n, |E| = m and w ∈ {0, 1}m, the inverse maximum matching problem in 0, 1 variables, INVMWM{0, 1}m, consists of finding a weight function w* such that:
i) the matching M* is of maximum weight in (G, w*);
ii) the quantity ||w* − w||1 is minimum;
iii) w* ∈ {0, 1}m.
Let us mention here an interesting specific case. Let G = (V, E) be a graph and M* a matching. We consider the complete graph with n = |V| vertices (or any graph that contains G as a partial graph), and we define a system of 0, 1 weights on its edges that represents the edges present in G (w(e) = 1, ∀e ∈ E). The associated inverse Boolean problem corresponds to the case where the edges of the fixed matching (here M*, a matching of G) are of weight 1. We now have to modify the instance (add or remove edges) to make M* of maximum cardinality in the modified graph. It is clear in this case that it is never worthwhile modifying an edge value from 0 to 1, in such a way that we can assume w* ≤ w. In the same way, w*(e) = 1, ∀e ∈ M*, because reducing the value of an edge of M* reduces by 1 the value of every matching that contains this edge (including M*). In these circumstances, the problem consists of establishing a set of edges E* ⊆ E \ M* of minimum cardinality such that M* is a maximum matching in the partial graph G′ = (V, E \ E*). For the rest of this section, we restrict ourselves to this specific case.
THEOREM 17.6.– [ALF 04] The problem INVMWM{0, 1}m (m is the number of edges) for which the fixed matching only contains edges of value 1 is polynomial in bipartite graphs.
Proof. Let there be an instance of INVMWM{0, 1}m made up of a bipartite graph G = (V, E), with V = L ∪ R and |L ∪ R| = n, and of a matching M* on G; we denote by V(M*) the set of vertices unsaturated by M*.
If L ∩ V(M*) = ∅ or R ∩ V(M*) = ∅, then M* is already a maximum matching and E* = ∅. Let us therefore assume L ∩ V(M*) ≠ ∅ and R ∩ V(M*) ≠ ∅; adding two new vertices s and t, we construct a directed graph in the following way: (x, y) is an arc if (x, y) ∈ M*, x ∈ R and y ∈ L; or (x, y) ∈ E \ M*, x ∈ L and y ∈ R; or x = s and y ∈ L ∩ V(M*); or lastly x ∈ R ∩ V(M*) and y = t. Figure 17.1 illustrates this construction (the edges of M* are drawn as continuous lines, and those of E \ M* as dotted lines).
Figure 17.1. Example of the transformation of G into the directed graph
We can easily verify that an increasing alternating chain of G is associated with every path from s to t in this directed graph, and conversely.
In this way, by constructing the network in which the arcs issued from E \ M* have capacity 1 and the other arcs (those of M* and those of the form (s, y) or (y, t)) have capacity +∞, Ford and Fulkerson’s algorithm allows us to establish a maximum flow with 0, 1 components and a minimum capacity cut that separates s from t. Such a cut consists of a set of arcs of minimum cardinality that cuts all the paths from s to t, that is, of establishing E*, a set of edges of minimum cardinality in the initial graph that cuts all the increasing alternating chains relative to the matching M*.
The objective of this section is to highlight certain hard inverse optimization problems. We start by illustrating, through two examples (the maximum weight stable set and the maximum traveling salesman problems), how to deduce a hardness result for an inverse problem from a hardness result for the initial problem. Even if a converse of theorem 17.5 does not seem easy to prove, it seems that such results can be established for a number of NP-hard combinatorial problems. Another question raised by theorem 17.5 is whether there are natural polynomial problems with a hard inverse version (with continuous, positive variables). The existence of such a problem remained open for a long time, and was settled in [CAI 99]: the inverse facility location problem with continuous variables is NP-hard, even though the facility location problem is polynomial (and even trivial). The hypothesis of linearity in theorem 17.5 is not satisfied in this case.
We will then examine inverse optimization problems under more restrictive hypotheses, notably partial inverse problems and inverse problems in integer or binary variables. Partial inverse problems have provided the first examples of polynomial problems that give rise to a hard inverse problem. More exactly, we describe (see [YAN 01]) a case for which the partial inverse minimum capacity cut problem is NP-hard, even though we have seen several (non-partial) polynomial versions in section 17.3. The area of inverse problems in integer variables has been relatively little studied until now and would without doubt provide many examples of hard cases. We show that the inverse maximum weight matching problem in integer variables is hard. Note that we proposed a polynomial case in section 17.3.4 and that, according to theorem 17.5, different versions in continuous variables are also polynomial.
We have noted, from theorem 17.5, that a problem with a polynomial inverse version has a polynomial optimality test. For the majority of NP-hard problems, we can show that such a test cannot exist, if P≠NP.
The maximum weight stable set problem, denoted by MWS, consists, given a simple graph G = (V, E) and a weight function w defined on the vertices of V, of finding a stable set (a set of pairwise non-adjacent vertices) S* of G whose total weight is maximum. The stability number of G, denoted α(G), is the maximum cardinality of a stable set (the case w(v) = 1 for every vertex v ∈ V). MWS is NP-hard [GAR 79]. We state n = |V| and m = |E|.
The inverse maximum weight stable set problem, denoted by INVMWS, consists, given an instance I = (G, S*, w), of modifying the weights of the vertices in such a way that:
i) the stable set S* is of maximum weight in (G, w*);
ii) the quantity ||w* − w||1 is minimum.
In the case of binary variables, we can easily show, by using arguments similar to those expounded in section 17.3.4, that the restriction of the problem INVMWS{0, 1}n for which the fixed stable set S* only contains vertices of value 1 is equivalent to the problem of removing a minimum number of vertices to make a fixed stable set S* optimal.
It is easy to show that a polynomial optimality test for the maximum cardinality stable set problem would allow us to establish the stability number in polynomial time. Indeed, given a graph G = (V, E), we construct a graph Gk by adding a stable set Sk of size k completely linked to the vertices of V. Sk is optimal in Gk if and only if the stability number of G is less than or equal to k. Therefore, to compute α(G), we only need to consider the instance Gk for each k ∈ {0, …, n} and to state S* = Sk: α(G) is the smallest k for which these instances of INVMWSA (A ∈ {ℝn, ℝ+n, {0, 1}n}) are of zero value.
It is also interesting to observe that the maximum stable set problem reduces directly to its inverse version by a reduction that preserves the optimal value. Let G = (V, E) be a simple graph; we construct an instance I = (G′, S*, w) of INVMWSA, A ∈ {ℝn, ℝ+n}, in the following way:
1) G′ = (V ′, E′) where V ′ = V ∪ {s} and E′ = E ∪ {[v, s]|v ∈ V};
2) The weight function w is given by w(v) = 1 for every v ∈ V and w(s) = 0;
3) S* = {s}.
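The construction 1)-3), together with a brute-force computation of α(G) (exponential, for small graphs only; function names are ours), can be sketched as:

```python
from itertools import combinations

def alpha(V, E):
    """Stability number by brute force (exponential; for illustration only)."""
    for k in range(len(V), 0, -1):
        for S in combinations(V, k):
            if all(not (u in S and v in S) for (u, v) in E):
                return k
    return 0

def inverse_stable_instance(V, E):
    """The reduction above: add a universal vertex s of weight 0."""
    s = object()                                  # a fresh vertex
    V2 = list(V) + [s]
    E2 = list(E) + [(v, s) for v in V]            # s is linked to every vertex of V
    w = {v: 1 for v in V}
    w[s] = 0
    return V2, E2, w, {s}
```

As argued next in the text, the optimal value of the inverse instance built this way is exactly α(G).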
Let w′ be a feasible solution for the inverse problem. If there exists v0 ∈ V with w′(v0) > w(v0), then the solution obtained by reducing the weight of v0 to w(v0) remains feasible and of better value for the L1 norm. We can therefore assume w′(v) ≤ w(v) for every vertex v ∈ V. Let us then assume that there exists v0 ∈ V with w′(v0) = w(v0) − ε, ε > 0. Then, by stating w″(s) = w′(s) + ε, w″(v0) = w(v0), and w″(v) = w′(v) otherwise, w″ remains feasible (that is, {s} remains a stable set of maximum weight) and ||w″ − w||1 = ||w′ − w||1. By reiterating this procedure, we show that for every feasible solution w′ of INVMWSA, we can construct, in polynomial time, a feasible solution w″ at least as good such that w″(v) = w(v) = 1 for v ∈ V and w″(s) ≥ 0. Such a solution is feasible for INVMWSA if and only if w″(s) ≥ α(G). An optimal solution w* of INVMWSA is therefore of value α(G). So we have proposition 17.1:
PROPOSITION 17.1.– [CHU 08a] For every A ∈ {ℝn, ℝ+n, {0, 1}n}, none of the INVMWSA problems is solvable in polynomial time unless P=NP.
In [CHU 08a], many other hardness results are stated for inverse stable set problems.
The maximum traveling salesman problem MAXTSP consists, in a complete edge-weighted graph, of establishing a Hamiltonian cycle (a cycle that goes through each vertex once and only once) of maximum value. This problem is NP-hard. It does not seem as easy as in the previous case to reduce it to its inverse version. Nevertheless, we show that, if P≠NP, MAXTSP does not admit a polynomial optimality test, and therefore the inverse problem is non-polynomial. The inverse maximum traveling salesman problem, denoted by INVMAXTSP, consists, given an instance (Kn, d) of MAXTSP and a Hamiltonian cycle C*, of modifying the distances as little as possible in such a way that C* becomes a Hamiltonian cycle of maximum weight in the new graph.
The following problem has been proved NP-complete in [PAP 94]: given a graph G = (V, E) and a Hamiltonian cycle C of G, does a second Hamiltonian cycle exist? Let us consider a graph G that is an instance of this problem, assign the value 1 to its edges, and complete it with edges of zero value into a complete graph. For a given edge e of the known Hamiltonian cycle C, we put the value of e to 0 and we consider the resulting graph Ge and the cycle C as an instance of INVMAXTSPA, for A ∈ {ℝm, ℝ+m, {0, 1}m}, where m is the number of edges. It is straightforward to verify that C is a cycle of maximum value in Ge if and only if all the Hamiltonian cycles of G contain e. Therefore, a second Hamiltonian cycle exists in G if and only if, for at least one edge e of C, the associated inverse problem is of non-zero value. We therefore have proposition 17.2:
PROPOSITION 17.2.– [ALF 04] If P≠NP, the problem INVMAXTSPA, for A ∈ {ℝm, ℝ+m, {0, 1}m}, does not admit a polynomial time algorithm.
Given a directed graph G = (V, E) and a distance function d on E with values in ℝ+, the facility location problem, expressed as 1-FL, consists of finding a vertex s ∈ V (called the network center) minimizing the quantity vald(s), the maximum over the vertices v ∈ V of the value of a shortest path from s to v (when such a path does not exist, we state this value equal to +∞). This problem is polynomial because we only need to try each vertex and calculate its value (let us remember that the shortest path problem is polynomial); it remains so when we seek to locate a fixed number of facilities (see, for example, [CHR 75]). However, the problem becomes NP-hard if the number of facilities is part of the instance [GAR 79].
The inverse facility location problem, expressed as INV1-FL, consists, given a network and a vertex s ∈ V, of finding a distance function d* on E with values in ℝ+ such that:
i) the vertex s becomes an optimal center of the network;
ii) the quantity ||d* − d||1 is minimum.
In [CAI 99], INV1-FLℝ+m′, where m′ = |E|, is shown to be NP-hard. The proof establishes a polynomial reduction from the satisfiability problem, denoted by SAT. An instance of SAT is defined by a set of n Boolean variables X = {x1, …, xn} and a set of m clauses C = {C1, …, Cm} that involve the variables of X. The negation of a variable xi is denoted by x̄i: it is valued true when xi is false, and false when xi is true; a Boolean variable or its negation is called a literal. A clause is a disjunction of literals; it is satisfied by a truth assignment if at least one of its literals is true.
The objective of SAT consists, given an instance (C, X), of deciding whether there exists a truth function f, that is, a function from X to {true, false}, that satisfies all the clauses of C. Note that this problem was the first to be shown NP-complete.
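As a small illustration of these definitions (encoding a literal as a signed integer is our choice), a clause set is satisfied when every clause contains at least one true literal:

```python
def satisfies(assignment, clauses):
    """A clause is a list of literals: +i stands for x_i, -i for its negation.
    `assignment` maps each variable index i to True or False."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)
```

SAT asks whether some assignment makes this function return True.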
THEOREM 17.7.– [CAI 99] The problem INV1-FLℝ+m′, in continuous variables, is NP-hard.
Proof. Let I = (C, X), with X = {x1, …, xn} and C = {C1, …, Cm}, be an instance of SAT. We construct an instance I′ of INV1-FLℝ+m′ in the following way: the graph contains the vertices v1, …, vn+3, the literal vertices xi and x̄i for i = 1, …, n, and the clause vertices C1, …, Cm, and it decomposes into three subgraphs, described below.
The first of these subgraphs, illustrated in Figure 17.3, contains the links between the vertices vi and the vertices xi and x̄i. The arcs drawn as dotted lines are of zero distance, while those drawn as continuous lines are of distance 1.
Figure 17.3. The first subgraph
The second subgraph is the bipartite graph characteristic of the instance I = (C, X): its vertices represent the literals xi and x̄i for i = 1, …, n, as well as the clauses Cj for j = 1, …, m, and there exists an arc (li, Cj) if and only if the literal li belongs to the clause Cj. An example is illustrated in Figure 17.4.
Figure 17.4. Example of a subgraph containing the clauses
The third subgraph contains the arcs whose initial extremity is vn+2 or vn+3: these two vertices are linked to all the other vertices of V by arcs leaving from vn+2 or vn+3 (in particular, the arcs (vn+2, vn+3) and (vn+3, vn+2) exist).
The vertex v1 is a center (that is s = v1).
The distance function d gives distance 1 to the arcs drawn as continuous lines, in particular the arcs (vi, xi) and (vi, x̄i) for i = 1, …, n, and distance 0 to the arcs drawn as dotted lines.
Every distance function d′ must satisfy d′(i, j) ≥ 0 for every arc (i, j).
This transformation is carried out in polynomial time, and we assert that I is satisfiable if and only if the instance I′ admits a distance function d*, with values in ℝ+, such that v1 is a center of the network equipped with d* and such that the sum of the modifications of the distances is at most n. First of all, let us observe that the vertices vn+2 and vn+3 are optimal centers of the network equipped with the initial function d, with vald(vn+2) = vald(vn+3) = 1 (see the third subgraph).
Let us assume that I is satisfiable, and let f be a truth function that satisfies all the clauses of C; we modify the weight function d in the following way:
1) ∀i ∈ {1, …, n}, d*(vi, zi) = 0, where zi = xi if f(xi) = true and zi = x̄i if f(xi) = false;
2) d*(i, j) = d(i, j) for all the other arcs (i, j).
The constraints imposed on the function d* are satisfied, and we have vald*(v1) = 1. Let us consider the paths from v1 to vi (from v1 to zi respectively) in the modified network for i = 2, …, n + 1, described by the sequence (v1, z1, v2, z2, …): these paths are of distance 0 for d*. Furthermore, by construction, for every clause Cj there exists a literal of this clause made true by f, since f satisfies all the clauses; in the modified network, this provides a path of distance 0 from v1 to Cj. Finally, the distance from v1 to y is 1 for y = vn+2 or y = vn+3. In this way, since ∀v ∈ V, vald*(v) ≥ 1, v1 is an optimal center of the modified network, and the sum of the modifications is n.
Conversely, let d* be a modification of the distance function, with values in ℝ+, such that the vertex v1 is an optimal center of the network equipped with d* and such that the sum of the modifications is at most n. Let us then show that we can construct a truth function f that satisfies all the clauses of C. Let us first observe that all the elementary paths from v1 to vn+2 (or vn+3) use the first subgraph, and that these are all shortest paths of value n + 1 for d; in this way, since the sum of the modifications is at most n, we deduce that vald*(v1) ≥ 1. Furthermore, we must reduce the value of at least one of these paths by a quantity p1. By construction of the graph, this amounts to reducing the distance of certain arcs (vi, xi) or (vi, x̄i) (the other arcs have a distance of 0), or of the arc (vn+1, vn+2) (we show later that the distance of this arc must not be modified). The sum p1 of these reductions satisfies p1 ≥ n + 1 − vald*(v1).
In the same way, since vald(vn+2) = 1 in the initial network (see the third subgraph), the modifications allow us to add at most n to this value; in other words, vald*(vn+2) ≤ n + 1. The vertex v1 being an optimal center for d*, we conclude that vald*(v1) ≤ vald*(vn+2) ≤ n + 1. Thus, we must increase the value of at least one of the shortest paths leaving vn+2 by a quantity p2. Through the construction of the graph, each of these paths is reduced to an arc of the form (vn+2, v) for v ∈ V \ {vn+2}. Since vald(vn+2) = 1, the quantity p2 added to the distance of these arcs satisfies p2 ≥ vald*(v1) − 1.
The vertex vn+3 itself also being an optimal center of the initial network, and the shortest paths leaving vn+3 being disjoint from the optimal paths leaving vn+2, the same reasoning applies as previously: we must add to one of the arcs (vn+3, v) a quantity p3 with p3 ≥ vald*(v1) − 1.
Naturally, p1 + p2 + p3 ≤ n; the sum of the last three inequalities provides vald*(v1) ≤ 1. We deduce from this that vald*(v1) = 1 and, furthermore, that p1 = n, p2 = 0 and p3 = 0.
Let us now show that d*(vn+1, vn+2) = d(vn+1, vn+2) and d*(vn+1, vn+3) = d(vn+1, vn+3). Let us assume the contrary, and let εi ≥ 0, for i = 1, 2, be the reduction of the distance of the arc (vn+1, vn+1+i), that is, d*(vn+1, vn+1+i) = d(vn+1, vn+1+i) − εi. Without loss of generality, let us assume ε1 > 0. Since the sum p1 of the modifications made in the first subgraph is n, this indicates that we can reduce by at most n − ε, with ε = ε1 + ε2, the distance of a shortest path from v1 to vn+1. The repercussion of this decrease on the paths from v1 to vn+2 and vn+3 would give vald*(v1) > 1, which would contradict vald*(v1) = 1. Thus, only the distances of the arcs of the form (vi, xi) or (vi, x̄i) are modified. Since their distance is 1, since there are n such arcs on a path from v1 to vn+1, and since p1 = n, we have, for every i = 1, …, n, either d*(vi, xi) = 0 or d*(vi, x̄i) = 0. Let us construct the following truth assignment f: f(xi) = true if d*(vi, xi) = 0 and f(xi) = false if d*(vi, x̄i) = 0. By construction of the graph, and by using the equality vald*(v1) = 1, we easily verify that f satisfies all the clauses of C.
We have seen in section 17.3.1.2 that the inverse minimum capacity cut problem in continuous or integer (even non-negative) variables is polynomial. However, the partial problem PINVMINCUT in bounded variables is NP-hard.
THEOREM 17.8.– [YAN 01] The partial problem PINVMINCUT[ai;bi]m in bounded variables is NP-hard.
Proof. The polynomial reduction is conceived from a restriction of the bipartition problem, called even bipartition and denoted EVENBIPART, in which there is an even number of elements ai. This is about deciding whether there exists S ⊂ {1, …, 2n} such that Σi∈S ai = Σi∉S ai. This problem is NP-complete. Indeed, the case where the number of elements is odd is polynomially reduced to the case where it is even. The principle of this reduction is to transform an instance I with 2n + 1 elements into 2n instances I1, …, I2n with 2n elements each, where Ij is obtained from I by merging the elements a1 and aj+1 into the single element a′j = a1 + aj+1. It is then easy to see that the answer for I is yes if and only if the answer for at least one of the instances Ij is yes, since we can assume without restriction that 2 ≤ |S| ≤ 2(n − 1).
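Both the decision problem and the odd-to-even transformation can be sketched as follows (brute force, for illustration only; function names are ours):

```python
from itertools import combinations

def has_bipartition(a):
    """Is there S with the sum over S equal to the sum over its complement?"""
    total = sum(a)
    if total % 2:
        return False
    return any(sum(c) == total // 2           # brute force over proper subsets
               for k in range(1, len(a))
               for c in combinations(a, k))

def odd_to_even_instances(a):
    """The reduction sketched above: an instance with an odd number of elements
    is a yes-instance iff at least one of these merged instances is."""
    return [[a[0] + a[j]] + [x for i, x in enumerate(a) if i not in (0, j)]
            for j in range(1, len(a))]
```

Merging a1 with aj+1 simply forces these two elements onto the same side of the bipartition.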
Therefore let {a1,…, a2n} with be an instance of EVENBIPART. We construct the instance I = (
, U, c) of PINVMINCUT[ai;bi] in the following way:
1) The graph has 3n + 2 vertices with V = {s, t} ∪ U ∪ L ∪ R, where U = {u1,…, un}, L = {l1,…, ln}, R = {r1,…, rn} and
= {(s, ui), (ui, li), (li, t), (ui, ri), (ri, t)|i = 1,…, n}.
2) The capacity function c is given by and for i = 1,…, n.
3) Following the logic of the partial inverse problem, we only examine the cuts that separate s and t and that contain the set U.
4) Every capacity function c′ must satisfy (the interval is therefore ) and .
Figure 17.5 illustrates this transformation. We show that there exists a set S* ⊂ {1,…, 2n} with Σi∈S* ai = B if and only if there exists a cut V* with {s} ∪ U ⊆ V*, t ∉ V*, and an optimal capacity function c* satisfying the integrity constraints such that V* is a minimum capacity cut in the network (G, c*) and the sum of the modifications is at most B.
Figure 17.5. Example of the transformation of {a1,…, a2n} into the instance I.
Let c* be an optimal capacity function; let us observe that every cut V′ that contains s and U can be expressed as V′ = {s} ∪ U ∪ L0 ∪ R0 with L0 ⊆ L and R0 ⊆ R. If li ∈ L0 (respectively li ∉ V′), then (li, t) ∈ Ω(V′) (respectively (ui, li) ∈ Ω(V′)); we recall that Ω(V′) refers to the outgoing cocycle of V′, that is, the set of arcs with initial extremity in V′ and terminal extremity in V \ V′. In the same way, if ri ∈ R0 (respectively ri ∉ V′), then (ri, t) ∈ Ω(V′) (respectively (ui, ri) ∈ Ω(V′)). In fact, we have “(ui, li) ∈ Ω(V′) ⇔ (li, t) ∉ Ω(V′)” and “(ui, ri) ∈ Ω(V′) ⇔ (ri, t) ∉ Ω(V′)”, and thus |Ω(V′)| = 2n. Furthermore, by stating S = {2i − 1 | li ∈ L0} ∪ {2i | ri ∈ R0}, p2i−1 = c(li, t) − c*(li, t) and p2i = c(ri, t) − c*(ri, t) for i = 1,…, n, the capacity of the cut V′ is c*(V′) =
Let V* be an optimal cut of the network (G, c*) that contains s and U; we have and we deduce:
The constraints on the capacities impose pi ≤ ai, and therefore . From the last two inequalities, we deduce . Moreover, since , and according to the first inequality, we have ; S is therefore an even bipartition.
Conversely, let S* be a bipartition of {a1,…, a2n}. Let us modify the capacities by setting c*(li, t) = if 2i − 1 ∈ S*, c*(ri, t) = if 2i ∈ S*, and c* = c for all the other arcs. The capacity function c* satisfies the bound constraints and, furthermore, . Let us state V* = {s} ∪ U ∪ {li | 2i − 1 ∈ S*} ∪ {ri | 2i ∈ S*}. We easily verify that V* is an optimal cut in the network (G, c*).
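The counting argument used in the proof — every cut containing {s} ∪ U has exactly 2n outgoing arcs — can be checked exhaustively on the constructed digraph. The sketch below (illustrative Python, with our own naming) enumerates all choices of L0 and R0.

```python
from itertools import product

def build_arcs(n):
    # Arc set of the digraph used in the reduction of theorem 17.8.
    arcs = []
    for i in range(n):
        u, l, r = f"u{i}", f"l{i}", f"r{i}"
        arcs += [("s", u), (u, l), (l, "t"), (u, r), (r, "t")]
    return arcs

def out_cocycle(arcs, side):
    # Omega(V'): arcs leaving the vertex set `side`.
    return [(x, y) for (x, y) in arcs if x in side and y not in side]

n = 3
arcs = build_arcs(n)
U = {f"u{i}" for i in range(n)}
for bits_l, bits_r in product(product([0, 1], repeat=n), repeat=2):
    side = {"s"} | U
    side |= {f"l{i}" for i in range(n) if bits_l[i]}
    side |= {f"r{i}" for i in range(n) if bits_r[i]}
    # li in the cut contributes (li,t), li outside contributes (ui,li); same for ri.
    assert len(out_cocycle(arcs, side)) == 2 * n
print("every cut containing {s} ∪ U has exactly", 2 * n, "outgoing arcs")
```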
We go back to the matching problem introduced in section 17.3.4. In a graph G = (V, E) with n vertices, a matching M* is said to be perfect if it is of size n/2; note that a perfect matching is necessarily maximum. When the graph G has a weight function w on its edges (in this case we talk about a network (G, w)), the weight of a matching M is given by w(M) = Σe∈M w(e). The problem we then consider is to establish a maximum weight matching, expressed as MWM. When all the weights are equal to 1, this is the maximum matching problem. We are sometimes interested in seeking a maximum weight perfect matching (which does not necessarily coincide with a maximum weight matching) among the perfect matchings of the graph; this restriction will be expressed as MWPM. Let us note that when the graph is bipartite, MWPM is equivalent to the assignment problem. Establishing a matching (perfect or not) of maximum weight is of polynomial complexity and can be done in O(n³) time (see [GON 84, PAP 98]). As in the non-weighted case (see section 17.3.4), there are several characterizations of maximum weight matchings. The most famous, due to Berge, generalizes the concept of alternating chains or cycles2 to the weighted case and is set out as follows: a matching M* is of maximum weight if and only if, for every alternating chain or cycle E′, the reduced cost w(E′ ∩ M*) − w(E′ \ M*) is non-negative (see [BER 73]). An alternating chain (respectively cycle) E′ whose reduced cost is negative is called an increasing chain (respectively cycle): the matching M′ = (M* \ E′) ∪ (E′ \ M*) then has a weight larger than that of M*.
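Berge's weighted characterization can be illustrated by brute force on a tiny network. In the sketch below (illustrative Python; names are ours), the alternating chain covering a whole 3-edge path has negative reduced cost, and swapping along it produces the maximum weight matching.

```python
from itertools import combinations

def weight(M, w):
    return sum(w[e] for e in M)

def is_matching(edges):
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen |= {u, v}
    return True

def max_weight_matching_bf(E, w):
    # Exhaustive search over all edge subsets that form a matching.
    best = ()
    for r in range(len(E) + 1):
        for M in combinations(E, r):
            if is_matching(M) and weight(M, w) > weight(best, w):
                best = M
    return set(best)

# A path a-b-c-d with the middle edge matched.
E = [("a", "b"), ("b", "c"), ("c", "d")]
w = {("a", "b"): 2, ("b", "c"): 3, ("c", "d"): 2}
M_star = {("b", "c")}

# The whole path is an alternating chain E' with reduced cost
# w(E' ∩ M*) - w(E' \ M*) = 3 - 4 = -1 < 0: an increasing chain.
E_prime = set(E)
M_prime = (M_star - E_prime) | (E_prime - M_star)
print(weight(M_prime, w), weight(M_star, w))     # 4 3
print(max_weight_matching_bf(E, w) == M_prime)   # True: the swap reaches the optimum
```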
The inverse maximum weight matching problem (respectively inverse maximum weight perfect matching problem), denoted by INVMWM (respectively INVMWPM), is the following: given a network (G, w) and a matching (respectively a perfect matching) M*, find a weight function w* such that:
i) The matching M* is of maximum weight in (G, w*) (respectively of maximum weight among the perfect matchings).
ii) The quantity Σe∈E |w(e) − w*(e)| is minimum (relative to the set of weight functions for which M* is of maximum weight, respectively of maximum weight among the perfect matchings).
Lemma 17.1 is proved in [ALF 04]:
LEMMA 17.1.– [ALF 04] For the problems INVMWMA and INVMWPMA, where A ∈ {m; +m} with m = |E|, we can reduce in polynomial time every instance to an instance that satisfies the following properties:
i) The matching M* is maximal in G.
ii) Each edge e ∈ E belongs to a chain or cycle that is increasing relative to M* in (G, w).
iii) Every optimal weight function w* satisfies: ∀e ∈ M*,w*(e) ≥ w(e) and ∀e ∈ E \ M*, w*(e) ≤ w(e). Furthermore, an optimal weight function w* exists such that ∀e ∈ E \ M*,w*(e) = w(e).
Proof. We only show these results for INVMWM.
For i): let us assume the contrary and let ei be an edge such that M*∪{ei} remains a matching. In this case, we necessarily have w*(ei) = 0, and ei can be removed from G.
For ii): let e = [vi, vj] ∈ E be an edge that does not belong to any increasing chain or cycle. We can easily prove that for every optimal weight function w* we have w*(e) = w(e). So, if e ∈ M* then we can replace G with the subgraph induced by V \ {vi, vj} and M* with M* \ {e}, while if e ∉ M*, we can remove e from G.
For iii): the first part of the result is obtained without difficulty. Let us assume that property i) is satisfied, and let w* be an optimal weight function for which there exists e′ ∉ M* such that w*(e′) = w(e′) − ε; since M* is maximal, there exists e* ∈ M* adjacent to e′. By stating w′(e) = w*(e) for e ∉ {e′, e*}, w′(e′) = w(e′) and w′(e*) = w*(e*) + ε, we obtain a new optimal weight function. Indeed, let M be any maximal matching: if e* ∈ M (then e′ ∉ M), w′(M) = w*(M) + ε ≤ w*(M*) + ε = w′(M*); otherwise, w′(M) ≤ w*(M) + ε ≤ w*(M*) + ε = w′(M*).
In each of the cases i), ii) and iii), by repeating these operations while it is still possible, we obtain the desired result.
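The weight-shifting argument of case iii) can be replayed numerically. The sketch below (illustrative Python, on a hypothetical 2-edge instance of our own) checks that the shifted function w′ keeps M* of maximum weight and has the same total deviation from w as w*.

```python
from itertools import combinations

def is_matching(edges):
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen |= {u, v}
    return True

def is_max_weight(M_star, E, w):
    # M* is of maximum weight iff no matching of (E, w) is strictly heavier.
    target = sum(w[e] for e in M_star)
    return all(sum(w[e] for e in M) <= target
               for r in range(len(E) + 1)
               for M in combinations(E, r) if is_matching(M))

E = [("a", "b"), ("b", "c")]
M_star = {("b", "c")}
w      = {("a", "b"): 2, ("b", "c"): 1}   # original weights
w_star = {("a", "b"): 1, ("b", "c"): 1}   # optimal function lowering e' = (a,b) by eps = 1
assert is_max_weight(M_star, E, w_star)

# Shift of the lemma: restore w on e' and add eps to the adjacent matched edge e*.
eps = w[("a", "b")] - w_star[("a", "b")]
w_prime = {("a", "b"): w[("a", "b")], ("b", "c"): w_star[("b", "c")] + eps}
print(is_max_weight(M_star, E, w_prime),
      sum(abs(w[e] - w_star[e]) for e in E) == sum(abs(w[e] - w_prime[e]) for e in E))
```

Both printed values are True: the shift preserves optimality of M* and the total modification.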
COMMENT 17.10.– Note that solving INVMWPMA, where A ∈ {m; ±m}, allows us to solve the problem INVMWMA when the matching M* is perfect. In other words, a polynomial reduction exists from INVMWMA to INVMWPMA. To show this, let us start from an instance I = (G, M*, w) of INVMWMA with M* a perfect matching of G = (V, E); let us then construct the instance I′ = (G′, M*, w′) of INVMWPMA, where G′ = (V, E′) is the complete graph on the vertex set of G and the function w′ is defined by w′(e) = w(e) if e ∈ E, and w′(e) = 0 otherwise. We observe that a weight function w′* that satisfies lemma 17.1 is optimal for INVMWPMA if and only if its restriction w* to G is optimal for INVMWMA. In other words, M* is a matching of maximum weight in (G, w*) if and only if M* is a matching of maximum weight among the perfect matchings in (G′, w′*). So, since Σe∈E |w(e) − w*(e)| = Σe∈E′ |w′(e) − w′*(e)|, the result is proven.
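The completion argument of this comment is easy to test by brute force. In the sketch below (illustrative Python, on an instance of our own), a weighted 4-cycle is completed into K4 with zero weights on the added edges; M* is then maximum in (G, w) exactly when it is maximum among the perfect matchings of (G′, w′).

```python
from itertools import combinations

def is_matching(M):
    seen = set()
    for u, v in M:
        if u in seen or v in seen:
            return False
        seen |= {u, v}
    return True

def matchings(E):
    for r in range(len(E) + 1):
        for M in combinations(E, r):
            if is_matching(M):
                yield set(M)

def wt(M, w):
    return sum(w[frozenset(e)] for e in M)

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
w = {frozenset(e): x for e, x in zip(E, [3, 1, 3, 1])}
M_star = {("a", "b"), ("c", "d")}          # a perfect matching of G

# G': complete graph on V, with w' = w on E and 0 on the added edges.
E2 = list(combinations(V, 2))
w2 = {frozenset(e): w.get(frozenset(e), 0) for e in E2}

best = max(wt(M, w) for M in matchings(E))
best_perfect = max(wt(M, w2) for M in matchings(E2) if len(M) == len(V) // 2)
print(wt(M_star, w) == best, wt(M_star, w2) == best_perfect)   # True True
```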
We show that INVMWPM with bivalent variables (where each variable i takes one of two possible values ai and bi) is NP-hard in bipartite graphs, while, for these graphs, the most general version INVMWPM+m (see [AHU 01, HUA 99, ZHA 96a]) and a restriction to {0, 1} variables (see section 17.3.4) are polynomial.
THEOREM 17.9.– [ALF 04] The problem INVMWPM{ai,bi}m is NP-hard, even in bipartite graphs.
Proof. The polynomial reduction is carried out from the bipartition problem, known to be NP-complete [GAR 79]; this problem, expressed as BIPART, consists of deciding whether, given n integers a1,…, an whose sum is worth 2B, there exists S ⊂ {1,…, n} such that Σi∈S ai = B. Therefore let {a1,…, an}, with Σi ai = 2B, be an instance of BIPART; we construct an instance I = (G, M*, w) of 2-balanced INVMWPM in the following way:
1) The graph G has 2n vertices and 2n edges, and is reduced to a cycle that alternates the edges ei and e′i for i = 1,…, n.
2) The weight function w is given by and .
3) The matching is M* = {e′1,…, e′n}.
4) Every admissible weight function w′ must satisfy and .
Figure 17.6. The instance I = (G, M*, w)
We can easily verify that G is bipartite and that the weight constraints are satisfied. An example of this transformation is illustrated in Figure 17.6 (the edges in dotted lines are those of M*). We assert that {a1,…, an} contains a bipartition if and only if a weight function w* exists that satisfies the integrity constraints with Σe∈E |w*(e) − w(e)| ≤ B.
By the construction of the graph G, there are only two perfect matchings: M* and M = {e1,…, en}, with w(M*) = 0 and w(M) = B. Let S ⊂ {1,…, n} be a bipartition of {a1,…, an}; we state w*(e′i) = ai if i ∈ S, and w*(e) = w(e) otherwise. We have w*(M) = w(M) = B and w*(M*) = Σi∈S ai = B. In this way, M* is a matching of maximum weight in (G, w*) and Σe∈E |w*(e) − w(e)| = B.
Conversely, let w* be an optimal weight function that satisfies Σe∈E |w*(e) − w(e)| ≤ B. We know that w*(ei) = w(ei) for i = 1,…, n, by the integrity constraints. So, by stating S = {i | w*(e′i) ≠ w(e′i)}, we have Σi∈S ai = Σe∈E |w*(e) − w(e)| ≤ B. On the other hand, M* is a perfect matching of maximum weight, and therefore Σi∈S ai = w*(M*) ≥ w*(M) = w(M) = B. By bringing together these two inequalities, we deduce that S is a bipartition.
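The two directions of this equivalence can be checked by brute force on the 2n-cycle. The sketch below is illustrative only: the initial weights and admissible values are assumptions of ours (w(ei) = ai/2 with all ai even, w(e′i) = 0, admissible values {0, ai} on each e′i), chosen to be consistent with the equalities w(M) = B and w*(M*) = Σi∈S ai = B used in the proof; the text's exact values are not reproduced here.

```python
from itertools import combinations

def min_inverse_cost(a):
    """Brute-force inverse cost on the 2n-cycle: the only perfect matchings
    are M = {e_i} and M* = {e'_i}, so M* becomes of maximum weight as soon
    as the raised e'-weights reach w(M) = sum(a)/2 = B. Raising e'_i costs
    a_i under the assumed admissible values {0, a_i}."""
    B = sum(a) // 2
    best = None
    for r in range(len(a) + 1):
        for T in combinations(range(len(a)), r):
            if sum(a[i] for i in T) >= B:        # w'(M*) >= w'(M)
                cost = sum(a[i] for i in T)      # total modification
                best = cost if best is None else min(best, cost)
    return best, B

print(min_inverse_cost([6, 2, 4, 4]))   # (8, 8): cost B reached, bipartition {6, 2}
print(min_inverse_cost([2, 2, 2, 8]))   # (8, 7): cost exceeds B, no bipartition
```

Under these assumptions, the minimum inverse cost equals B exactly when a subset sums to B, mirroring the equivalence established in the proof.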
As for the problem INVMWM, we show below that it is hard for integer variables, while, as we have seen in section 17.3.4, it is polynomial in bipartite graphs when the variables are binary.
THEOREM 17.10.– [ALF 04] The problem INVMWMm is NP-hard, even if M* is a perfect matching and if Δ(G) = 4.
Proof. We establish a polynomial reduction from the problem of covering edges with vertices, more simply called vertex covering and expressed as VC. Given a simple connected graph G = (V, E), this consists of finding a subset V* ⊆ V of vertices of minimum size such that each edge of G has at least one of its extremities in V*. The associated decision problem, given a graph G and an integer k, consists of establishing whether a covering exists that satisfies |V*| ≤ k. This problem is NP-complete, even in the class of graphs of maximum degree 3 [GAR 79].
Let G = (V, E) be a connected graph of maximum degree 3, where V = {v1,…, vn} and E = {e1,…, em}, which together with the integer k constitutes an instance of VC; we polynomially construct an instance I = (H, M*, w) of INVMWM in the following way:
1) The graph H = (V′, E′) has 2n vertices, with V′ = V ∪ {v′1,…, v′n} and E′ = E ∪ {e′1,…, e′n}. So H contains G and has n new vertices, as well as n new edges defined by e′i = [vi, v′i] for i = 1,…, n.
2) The weight function w is given by w(ej) = 3 for j = 1,…, m and w(e′i) = 1 for i = 1,…, n.
3) The matching is M* = {e′1,…, e′n}.
4) Every weight function w′ must satisfy w′(e) ∈ ℤ for e ∈ E′.
An example of this transformation is given in Figure 17.7. Let us observe three facts: the graph H has maximum degree 4 (that is, Δ(H) = Δ(G) + 1 = 4); M* is a perfect matching of H; and the alternating chains and cycles all increase M* and are the chains Ee of the form (e′i, e, e′j) for an arbitrary edge e = [vi, vj] ∈ E, of reduced cost w(e′i) + w(e′j) − w(e) = −1.
We assert that a vertex covering of G of size at most k exists if and only if a weight function w* with integer values exists such that M* is a matching of maximum weight in the network (H, w*) and for which the sum of the modifications Σe∈E′ |w*(e) − w(e)| does not exceed k.
Figure 17.7. Example of the transformation of G into I = (H, M*, w)
Let V* be a vertex covering with |V*| ≤ k. We state w*(e′p) = 2 if vp ∈ V*, and w*(e) = w(e) otherwise. We easily verify that no increasing chain remains, since if e = [vi, vj], then at least one of the two edges e′i or e′j has a new weight of 2; so, we deduce that M* is a matching of maximum weight in (H, w*). Furthermore, we have Σe∈E′ |w*(e) − w(e)| = |V*| ≤ k.
Conversely, let w* be an optimal weight function that makes the matching M* maximum in (H, w*) and that satisfies Σe∈E′ |w*(e) − w(e)| ≤ k. Furthermore, let us assume that w* satisfies lemma 17.1; without loss of generality, we can assume that w*(e′i) ≤ 2 for i = 1,…, n, since we will never increase the weight of an edge by more than the maximum of the reduced costs of the increasing chains containing this edge; similarly, we have w*(ej) = w(ej) for j = 1,…, m. So let V* = {vi | w*(e′i) = 2}; we easily verify that V* is a covering of the edges by at most k vertices of G. Indeed, if V* is not a covering, then there must exist e = [vi, vj] ∈ E with vi ∉ V* and vj ∉ V*; but then the chain Ee remains increasing, and the optimality of M* is contradicted. Lastly, |V*| = Σe∈E′ |w*(e) − w(e)| ≤ k.
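The forward direction of this reduction can be replayed on a tiny instance. The sketch below is illustrative Python of our own: the weights assumed are w(e) = 3 on the edges of G (as stated) and 1 on the pendant edges, a value consistent with the modification count |V*| used in the proof; raising the pendant edge of each covered vertex to 2 then makes M* maximum, while an uncovered edge leaves an improving swap.

```python
from itertools import combinations

def is_matching(M):
    seen = set()
    for u, v in M:
        if u in seen or v in seen:
            return False
        seen |= {u, v}
    return True

def is_max_weight(M_star, E, w):
    # Brute force: no matching of (E, w) is strictly heavier than M*.
    target = sum(w[e] for e in M_star)
    return all(sum(w[e] for e in M) <= target
               for r in range(len(E) + 1)
               for M in combinations(E, r) if is_matching(M))

G_edges = [("a", "b"), ("b", "c"), ("c", "a")]     # a triangle, Delta(G) <= 3
V = ["a", "b", "c"]
pendants = [(v, v + "'") for v in V]               # the new edges e'_i = [v_i, v'_i]
E = G_edges + pendants
M_star = pendants                                  # perfect matching of H
w = {**{e: 3 for e in G_edges}, **{e: 1 for e in pendants}}  # assumed w(e'_i) = 1

def w_star(cover):
    # Raise the pendant edge of each covered vertex to 2 (cost 1 per vertex).
    return {e: (2 if e in pendants and e[0] in cover else w[e]) for e in E}

print(is_max_weight(M_star, E, w_star({"a", "b"})))   # {a,b} covers the triangle: True
print(is_max_weight(M_star, E, w_star({"c"})))        # edge [a,b] uncovered: False
```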
In [ALF 04], we show, using a reduction of the same type, that the inverse problem INVMWM{0,1}m is NP-hard.
COMMENT 17.11.– Note that the previous results reinforce the result from section 17.4.1.1 concerning the difficulty of the inverse maximum stable set problem INVMWSA, where A ∈ {n, {0, 1}n}. The restrictions of these two problems to the class of line graphs (see [BER 73]) remain hard.
COMMENT 17.12.– Let us finish the study of inverse matchings by observing that the hardness result also holds for INVMWPMm, since a polynomial reduction from INVMWMm exists when M* is a perfect matching (see comment 17.10). This constitutes a gap between the difficulty of the integer and continuous versions of INVMWPM, since the continuous version is polynomial according to [LIU 03].
This chapter aims to be an introduction to inverse combinatorial problems. Rather than listing the currently known results, our objective has been to select a range that is representative of the different types of problems and results for the case of the L1 norm. This presentation allows us to highlight the stakes and the possibilities of this domain, but it does not, however, convey its scale, in terms of the quantity of work dedicated to each problem in the existing literature. A very large majority of studies have so far concentrated on the case of continuous variables. Similarly, we have preferred to restrict ourselves to the L1 norm. The case of the L∞ norm gives rise to fairly similar results and techniques; works on the other norms are still marginal in the literature and represent a vast unexplored field. The interested reader will find in [HEU 04] a complete panorama of the inverse combinatorial problems in continuous variables already studied, and a summary of the principal results.
Among the possibilities that are still very little explored, we have highlighted a few examples of inverse problems in discrete variables that allow us to model situations where we need to modify the structure of the instance rather than its parameters (see [CHU 10] for more details on this subject).
Besides the study of new problems from this point of view, the difficult cases raise the question of approximation. The advantage of the inverse context is that it induces, in a natural way, two approximation problems: the approximation of hard inverse problems, and the inverse versions of approximation problems. The first is fairly standard: it consists of considering these new problems from the approximation point of view. A few early results have been presented in [ALF 04] and [CHU 08b]. The second problem, however, is radically different: it concerns modifying the instance (structure or parameters) so that a fixed solution becomes a good solution in the approximation sense. The very first results of this type have been presented in [CHU 10]; at first glance, such problems seem to be fairly hard.
Another question is that of distinguishing between two objectives: do we make the fixed solution x* optimal, or do we make it the unique optimal solution? Forcing x* to become the unique optimal solution is not always meaningful (or in any case does not always admit a solution) in the continuous context when we modify the objective. However, it is more natural for inverse problems in discrete variables.
The last field of investigation that we wish to mention is the notion of inverse problems relative to a fixed algorithm or a class of algorithms. The idea is to relax the optimality condition for the fixed solution in the modified instance. A way of describing an inverse combinatorial problem is by saying that we wish to modify the instance minimally so that any optimal algorithm may return x* as a solution in the modified instance, or even is forced to choose x* when we wish to make x* the unique optimal solution. A natural question is to pose the same problem, not against every optimal algorithm, but against a specified algorithm (or a class of algorithms), optimal or not. In [ALF 04, CHU 08a, CHU 08b, CHU 10], this problem is considered for the matching problem, the TSP, and the maximum stable set problem.
Let us also mention the problem k-INVMWM{0,1}m, which uses the notion of k-optimality. A matching M is said to be k-optimal if every matching obtained from M by removing p edges and adding p + 1 of them, for p ≤ k − 1, is not better than M. Given a graph and a matching M*, k-INVMWM{0,1}m consists of modifying the graph (adding or removing edges) to make M* k-optimal. In the transformed instance, M* can be chosen by every algorithm that establishes a k-optimal solution. We show that this problem is polynomial for k ≤ 2 and that, for every k ≥ 3, it is APX-complete (for further details on this notion, refer to the chapter on approximation and the one on reductions). A consequence is that it admits a polynomial algorithm that guarantees a constant ratio, but does not admit a polynomial approximation scheme if P ≠ NP. In the same vein, we show that the problem that consists of removing a minimum number of vertices from a graph to make a fixed stable set 2-optimal is NP-hard. Lastly, we consider the inverse maximum stable set problem against the greedy algorithm for the maximum stable set: removing a minimum number of vertices so that a fixed stable set is chosen by the greedy algorithm in the modified graph is also an NP-hard problem.
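The notion of k-optimality can be made concrete with a small checker. The sketch below (illustrative Python, cardinality version; names are ours) tests all remove-p/add-(p + 1) exchanges for p ≤ k − 1: on a 3-edge path, the middle edge alone is 1-optimal (it is maximal) but not 2-optimal.

```python
from itertools import combinations

def is_matching(M):
    seen = set()
    for u, v in M:
        if u in seen or v in seen:
            return False
        seen |= {u, v}
    return True

def is_k_optimal(E, M, k):
    """M is k-optimal if no matching obtained from M by removing p edges and
    adding p + 1 of them, for p <= k - 1, is larger than M."""
    M, rest = set(M), [e for e in E if e not in M]
    for p in range(k):
        for rem in combinations(M, p):
            for add in combinations(rest, p + 1):
                if is_matching((M - set(rem)) | set(add)):
                    return False                 # a larger matching was found
    return True

E = [("a", "b"), ("b", "c"), ("c", "d")]
M = [("b", "c")]
print(is_k_optimal(E, M, 1), is_k_optimal(E, M, 2))   # True False
```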
[AHU 01] AHUJA R. K., ORLIN J. B., “Inverse optimization”, Op. Res., vol. 49, p. 771–783, 2001.
[ALF 04] ALFANDARI L., DEMANGE M., MONNOT J., A general framework for inverse optimization problems, unpublished communication.
[BER 73] BERGE C., Graphs and Hypergraphs, North Holland, Amsterdam, 1973.
[BUR 92] BURTON D., TOINT P. L., “On an instance of the inverse shortest paths problem”, Math. Programming, vol. 53, p. 45–61, 1992.
[CAI 97] CAI M., YANG X., ZHANG J., Inverse problems with partial given solution, Working paper, Department of Mathematics, City University of Hong Kong, 1997.
[CAI 99] CAI M., YANG X., ZHANG J., “The complexity analysis of the inverse center location problem”, J. Global Optim., vol. 15, p. 213–218, 1999.
[CHR 75] CHRISTOFIDES N., Graph Theory: An Algorithmic Approach, Academic Press, New York, 1975.
[CHU 08a] CHUNG Y., DEMANGE M., “The 0-1 inverse maximum stable set problem”, Discrete Applied Mathematics, vol. 156, num. 13, p. 2501–2516, 2008.
[CHU 08b] CHUNG Y., DEMANGE M., “Some inverse traveling salesman problems”, Electronic Notes in Discrete Mathematics, vol. 30, p. 9–14, 2008.
[CHU 10] CHUNG Y., Inverse combinatorial optimization problems and applications, Ph.D. thesis, Department of Computer Science, Paris 1 University, 2009.
[CIU 07] CIUREA E., DEACONU A., “Inverse minimum flow problem”, Journal of Applied Mathematics and Computing, vol. 23, p. 193–203, 2007.
[DEA 08] DEACONU A., “The inverse maximum flow problem considering l∞-norm”, RAIRO-Op. Res., vol. 42, num. 3, p. 401–414, 2008.
[GAR 79] GAREY M. R., JOHNSON D. S., Computers and Intractability. A Guide to the Theory of NP-completeness, W. H. Freeman, San Francisco, 1979.
[GON 84] GONDRAN M., MINOUX M., VAJDA S., Graphs and Algorithms, Wiley, New York, 1984.
[GRÖ 81] GRÖTSCHEL M., LOVÁSZ L., SCHRIJVER A., “The ellipsoid method and its consequences in combinatorial optimization”, Combinatorica, vol. 1, p. 169–197, 1981.
[HEU 04] HEUBERGER C., “Inverse optimization: a survey on problems, methods, and results”, J. Comb. Optim., vol. 8, p. 329–361, 2004.
[HU 98] HU Z., LIU Z., “A strongly polynomial algorithm for the inverse shortest arborescence problem”, Discrete Applied Mathematics, vol. 82, p. 135–154, 1998.
[HUA 99] HUANG S., LIU Z., “On the inverse problem of linear programming and its application to minimum weight perfect k-matching”, European Journal of Operational Research, vol. 112, p. 421–426, 1999.
[KHA 79] KHACHIYAN L., “A polynomial algorithm in linear programming”, Soviet Mathematics Doklady, vol. 20, p. 191–194, 1979.
[KOR 02] KORTE B., VYGEN J., Combinatorial Optimization: Theory and Algorithms, Springer, Berlin, 2002.
[LIU 03] LIU Z., ZHANG J., “On inverse problems of optimum perfect matching”, J. Comb. Optim., vol. 7, p. 215–228, 2003.
[MIC 80] MICALI S., VAZIRANI V. V., “An algorithm for finding maximum matching in general graphs”, FOCS, p. 17–27, 1980.
[MOS 91] MOSER T. J., “Shortest paths calculation of seismic rays”, Geophysics, vol. 56, p. 59–67, 1991.
[PAP 94] PAPADIMITRIOU C. H., Computational Complexity, Addison-Wesley, Reading, 1994.
[PAP 98] PAPADIMITRIOU C. H., STEIGLITZ K., Combinatorial Optimization: Algorithms and Complexity, Dover, New York, 1998.
[TAR 87] TARANTOLA A., Inverse Problem Theory: Methods for Data Fitting and Model Parameter Estimation, Elsevier, Amsterdam, 1987.
[YAN 97] YANG C., ZHANG J., MA Z., “Inverse maximum flow and minimum cut problems”, Optimization, vol. 40, p. 147–170, 1997.
[YAN 99] YANG C., ZHANG J., “Two general methods for inverse optimization problems”, Applied Mathematics Letters, vol. 12, p. 69–72, 1999.
[YAN 01] YANG X., “Complexity of partial inverse assignment problem and partial inverse cut problem”, RAIRO Oper. Res., vol. 35, p. 117–126, 2001.
[ZHA 96a] ZHANG J., LIU Z., “Calculating some inverse linear programming problems”, J. Comp. and App. Math., vol. 72, p. 261–273, 1996.
[ZHA 96b] ZHANG J., LIU Z., MA Z., “On inverse problem of minimum spanning tree with partition constraints”, Mathematical Methods of Op. Res., vol. 44, p. 347–358, 1996.
[ZHA 97] ZHANG J., XU S., MA Z., “An algorithm for inverse minimum spanning tree problem”, Optimization Methods and Software, vol. 8, p. 69–84, 1997.
[ZHA 99] ZHANG J., LIU Z., “A further study on inverse linear programming problems”, J. Comp. and App. Math., vol. 106, p. 345–359, 1999.
[ZHA 00] ZHANG J., LIU Z., MA Z., “Some reverse location problems”, European J. Oper. Res., vol. 124, p. 77–88, 2000.
1 Chapter written by Marc DEMANGE and Jérôme MONNOT.
1. Sometimes called reverse problems.
2. An alternating chain (respectively cycle) relative to a matching M is a chain (respectively cycle) that alternates edges of M and of E \ M; in the case of a chain, each of its extremities must be either an edge of M, or an edge of E \ M incident to a vertex unsaturated by M.