Chapter 9

Interaction Systems and Quantum Models

This final chapter investigates game-theoretic systems that make use of the algebra of complex numbers. Not only are cooperation models generalized, but the interaction of pairs of elements of a set X also finds an appropriate setting. The states are naturally represented as hermitian matrices with complex coefficients. This representation allows one to carry out standard spectral analysis for interaction systems and provides a link to the corresponding mathematical model of quantum systems in physics.

While the analysis could be extended to general HILBERT spaces, X is assumed to be finite to keep the discussion straightforward.a

It is historically perhaps surprising that JOHN VON NEUMANN, who laid out the mathematical foundations of quantum theory,b did not build game theory on the same mathematics in his work with OSKAR MORGENSTERN.

_________________

a See also FAIGLE and GRABISCH [12].

b VON NEUMANN [33].

1. Algebraic preliminaries

Since matrix algebra is the main tool in our analysis, we review some of the fundamental notions from linear algebra (recalling Chapter 2). Further details and proofs can be found in any decent book on linear algebra.1

Where X = {x_1, . . . , x_m} and Y = {y_1, . . . , y_n} are two finite index sets, recall that ℝ^{X×Y} denotes the real vector space of all matrices A with rows indexed by X, columns indexed by Y, and coefficients A_xy ∈ ℝ.

The transpose of A ∈ ℝ^{X×Y} is the matrix A^T ∈ ℝ^{Y×X} with the coefficients (A^T)_yx = A_xy. The map A ↦ A^T establishes an isomorphism between the vector spaces ℝ^{X×Y} and ℝ^{Y×X}.

Viewing A, B ∈ ℝ^{X×Y} as mn-dimensional parameter vectors, we have the usual euclidian inner product

⟨A|B⟩ = ∑_{x∈X} ∑_{y∈Y} A_xy B_xy = tr(B^T A),

where tr C denotes the trace of a matrix C. In the case ⟨A|B⟩ = 0, A and B are said to be orthogonal. The associated euclidian norm is

‖A‖ = √⟨A|A⟩.

We think of a vector v ∈ ℝ^X typically as a column vector. v^T is the row vector with the same coordinates. Be aware of the difference between the two matrix products of v, w ∈ ℝ^X:

v^T w = ⟨v|w⟩ ∈ ℝ   and   vw^T ∈ ℝ^{X×X}.
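
As a quick numerical illustration, here is a minimal numpy sketch (our addition, not part of the text) contrasting the scalar v^T w with the matrix vw^T:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])    # v ∈ R^X as a column vector
w = np.array([4.0, 5.0, 6.0])

inner = v @ w                    # v^T w = <v|w>: a scalar
outer = np.outer(v, w)           # v w^T: a matrix in R^{X×X}

print(inner)          # 32.0
print(outer.shape)    # (3, 3)
```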

1.1. Symmetry decomposition

Assuming identical index sets X = Y, a matrix A ∈ ℝ^{X×X} is symmetric if A^T = A. In the case A^T = −A, the matrix A is skew-symmetric. With an arbitrary matrix A ∈ ℝ^{X×X}, we associate the matrices

A^+ = (A + A^T)/2   and   A^− = (A − A^T)/2.

Notice that A^+ is symmetric and A^− is skew-symmetric. The symmetry decomposition of A is the representation

A = A^+ + A^−.

The matrix A allows exactly one decomposition into a symmetric and a skew-symmetric matrix (see Ex. 9.1). So the symmetry decomposition is unique.

EX. 9.1. Let A, B, C ∈ ℝ^{X×X} be such that A = B + C. Show that the two statements are equivalent:

(1)  B is symmetric and C is skew-symmetric.

(2)  B = A^+ and C = A^−.

Notice that symmetric and skew-symmetric matrices are necessarily pairwise orthogonal (see Ex. 9.2).

EX. 9.2. Let A be a symmetric and B a skew-symmetric matrix. Show:

⟨A|B⟩ = 0.
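
The following numpy sketch (an illustration of ours, not part of the original development) verifies the claims of Ex. 9.1 and Ex. 9.2 numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # an arbitrary matrix A ∈ R^{X×X}

A_plus = (A + A.T) / 2                   # A^+: the symmetric part
A_minus = (A - A.T) / 2                  # A^-: the skew-symmetric part

assert np.allclose(A, A_plus + A_minus)  # A = A^+ + A^- (Ex. 9.1)
assert np.allclose(A_plus, A_plus.T)     # A^+ is symmetric
assert np.allclose(A_minus, -A_minus.T)  # A^- is skew-symmetric

print(np.trace(A_minus.T @ A_plus))      # <A^+|A^-> = 0 (Ex. 9.2), up to rounding
```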

2. Complex matrices

In physics and engineering, complex numbers offer a convenient means to represent orthogonal structures. Applying this idea to the symmetry decomposition, one arrives at so-called hermitian matrices.

Inner products. Recall that a complex number z ∈ ℂ is an expression of the form z = a + ib, where a and b are real numbers and i a special “new” number, the imaginary unit, with the property i² = −1. The squared absolute value of the complex number z = a + ib is

|z|² = a² + b² = z̄z,

with z̄ = a − ib being the conjugate of z. More generally, we define the hermitian product of two complex numbers u, v ∈ ℂ as the complex number

⟨u|v⟩ = v̄u.

The (hermitian) inner product of two vectors u, v ∈ ℂ^X with components u_x and v_x is the complex number

⟨u|v⟩ = ∑_{x∈X} v̄_x u_x.

The length (or norm) of a vector u = a + ib ∈ ℂ^X (with a, b ∈ ℝ^X) is

‖u‖ = √⟨u|u⟩ = √(‖a‖² + ‖b‖²).

Conjugates and adjoints. The conjugate of a vector v ∈ ℂ^X is the vector v̄ ∈ ℂ^X with the conjugated components v̄_x. The vector v* = v̄^T is the adjoint of v. With this notation, the inner product of the column vectors u, v ∈ ℂ^X is

⟨u|v⟩ = v*u,

where we think of the 1 × 1 matrix v*u just as a complex number. Accordingly, the adjoint of the matrix C ∈ ℂ^{X×Y} is the matrix

C* = C̄^T ∈ ℂ^{Y×X} with the coefficients (C*)_yx = C̄_xy.

EX. 9.3 (Trace). The inner product of the matrices U, V ∈ ℂ^{X×Y} is

⟨U|V⟩ = tr(V*U) = ∑_{x∈X} ∑_{y∈Y} V̄_xy U_xy.

The matrix C ∈ ℂ^{X×X} is selfadjoint if it equals its adjoint, i.e., if

C* = C.

EX. 9.4. Let v ∈ ℂ^X be a column vector. Then vv* ∈ ℂ^{X×X} is a selfadjoint matrix of norm ‖vv*‖ = ‖v‖².

2.1. Spectral decomposition

If (and only if) the matrix C ∈ ℂ^{X×X} has real coefficients,

C* = C^T

holds, and the notion “selfadjoint” boils down to “symmetric”. It is well-known that real symmetric matrices can be diagonalized. With the same technique, one can extend the diagonalization to general selfadjoint matrices:

THEOREM 9.1 (Spectral Theorem). For a matrix C ∈ ℂ^{X×X}, the two statements are equivalent:

(1)  C = C*.

(2)  ℂ^X admits a unitary basis U = {U_x | x ∈ X} of eigenvectors U_x of C with real eigenvalues λ_x.

Unitary means for the basis U that the vectors U_x have unit norm and are pairwise orthogonal in the sense

⟨U_x|U_y⟩ = 0   (x ≠ y).

The scalar λ_x is the eigenvalue of the eigenvector U_x of C if

CU_x = λ_x U_x.

It follows from Theorem 9.1 (see Ex. 9.5) that a selfadjoint matrix C admits a spectral2 decomposition, i.e., a representation in the form

C = ∑_{x∈X} λ_x U_x U_x*,

where the U_x are pairwise orthogonal eigenvectors of C with eigenvalues λ_x ∈ ℝ.

EX. 9.5. Let U = {U_x | x ∈ X} be a unitary basis of ℂ^X together with a set Λ = {λ_x | x ∈ X} of arbitrary complex scalars. Show:

(1)  The U_x are eigenvectors with eigenvalues λ_x of the matrix

C = ∑_{x∈X} λ_x U_x U_x*.

(2)  C is selfadjoint if and only if all the λ_x are real numbers.
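
A minimal numpy sketch (our illustration; numpy's eigh plays the role of Theorem 9.1) of the spectral decomposition of a selfadjoint matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
C = (B + B.conj().T) / 2                 # force a selfadjoint matrix: C = C*

lam, U = np.linalg.eigh(C)               # real eigenvalues, unitary eigenvector basis
assert np.allclose(U.conj().T @ U, np.eye(3))   # unitary: <U_x|U_y> = 0 for x != y

# spectral decomposition C = sum_x lam_x U_x U_x*
C_rebuilt = sum(lam[x] * np.outer(U[:, x], U[:, x].conj()) for x in range(3))
assert np.allclose(C, C_rebuilt)
```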

The spectral decomposition shows:

The selfadjoint matrices C in ℂ^{X×X} are precisely the linear combinations of matrices of type

C = ∑_{x∈X} λ_x U_x U_x*,

where the U_x are (column) vectors in ℂ^X and the λ_x are real numbers.

Spectral unity decomposition. As an illustration, consider a unitary matrix U ∈ ℂ^{X×X}, i.e., a matrix with pairwise orthogonal column vectors U_x of norm ‖U_x‖ = 1, which means that the identity matrix I has the representation

I = UU* = ∑_{x∈X} U_x U_x*.

The eigenvalues of I all have the value λ_x = 1. Relative to U, the matrix I has the spectral decomposition

I = ∑_{x∈X} 1·U_x U_x*.

For any vector v ∈ ℂ^X with norm ‖v‖ = 1, we therefore find

1 = ⟨v|v⟩ = v*Iv = ∑_{x∈X} (v*U_x)(U_x*v) = ∑_{x∈X} |⟨U_x|v⟩|².

It follows that the (squared) absolute values

p_x = |⟨U_x|v⟩|²   (x ∈ X)

yield the components of a probability distribution p_v on the set X. More generally, if the selfadjoint matrix C with eigenvalues ρ_x has the form

C = ∑_{x∈X} ρ_x U_x U_x*,

then we have for any v ∈ ℂ^X with ‖v‖ = 1,

⟨v|Cv⟩ = ∑_{x∈X} ρ_x |⟨U_x|v⟩|² = ∑_{x∈X} ρ_x p_x.

In other words:

The inner product ⟨v|Cv⟩ of the vectors v and Cv is the expected value of the eigenvalues ρ_x of C with respect to the probability distribution p_v on X.

EX. 9.6 (Standard unity decomposition). The unit vectors e_x ∈ ℂ^X yield the standard unity decomposition

I = ∑_{x∈X} e_x e_x*.

Accordingly, a vector v ∈ ℂ^X of length ‖v‖ = 1 with the components v_x implies the standard probability distribution on X with the components

p_x = |v_x|².
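
The following sketch (ours, under no further assumptions) checks numerically that ⟨v|Cv⟩ is indeed the expected eigenvalue under the probability distribution p_v:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
C = (B + B.conj().T) / 2                     # selfadjoint C with eigenvalues rho_x

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = v / np.linalg.norm(v)                    # a state vector: ||v|| = 1

rho, U = np.linalg.eigh(C)
p = np.abs(U.conj().T @ v) ** 2              # p_x = |<U_x|v>|^2
assert np.isclose(p.sum(), 1.0)              # p_v is a probability distribution on X

assert np.isclose((rho * p).sum(), (v.conj() @ C @ v).real)   # <v|Cv> = sum rho_x p_x
```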

2.2. Hermitian representation

Coming back to real matrices in the context of symmetry decompositions, let us associate with a real matrix A ∈ ℝ^{X×X} the complex matrix

Â = A^+ + iA^−.

Â is a hermitian3 matrix. The hermitian map A ↦ Â establishes an isomorphism between the vector space ℝ^{X×X} and the vector space

ℍ_X = {Â | A ∈ ℝ^{X×X}}

with the set ℝ as field of scalars.4 The import in our context is the fundamental observation that the selfadjoint matrices are precisely the hermitian matrices:

LEMMA 9.1. Let C ∈ ℂ^{X×X} be an arbitrary complex matrix. Then

C = C*   ⟺   C = Â for some matrix A ∈ ℝ^{X×X}.

Proof. Assume C = A + iB with A, B ∈ ℝ^{X×X} and hence

C* = A^T − iB^T.

So C = C* means symmetry A = A^T and skew-symmetry B = −B^T. Consequently, the real matrix D = A + B satisfies D^+ = A and D^− = B, which yields

C = D̂.

The converse is seen as easily. □

The remarkable property of the hermitian representation is:

While a real matrix A ∈ ℝ^{X×X} does not necessarily admit a spectral decomposition with real eigenvalues, its hermitian representation Â is always guaranteed to have one.

EX. 9.7 (HILBERT space). Let A, B ∈ ℝ^{X×X} be arbitrary real matrices. Show:

⟨Â|B̂⟩ = ⟨A|B⟩,

i.e., inner products (and hence norms) are preserved under the hermitian representation. This means that ℝ^{X×X} and ℍ_X are not only isomorphic as real vector spaces but also as (real) HILBERT spaces.
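
A small numpy sketch (our illustration) of the hermitian map and the claims of Lemma 9.1 and Ex. 9.7:

```python
import numpy as np

def hermitian(A):
    # hermitian representation A -> A^+ + i A^- of a real matrix A
    return (A + A.T) / 2 + 1j * (A - A.T) / 2

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
A_h, B_h = hermitian(A), hermitian(B)

assert np.allclose(A_h, A_h.conj().T)        # the image is selfadjoint (Lemma 9.1)

inner_real = np.trace(B.T @ A)               # <A|B> in R^{X×X}
inner_herm = np.trace(B_h.conj().T @ A_h)    # <Â|B̂> in H_X
assert np.isclose(inner_herm.real, inner_real) and np.isclose(inner_herm.imag, 0.0)
```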

3. Interaction systems

Let us assume that elements x, y ∈ X can interact with a certain interaction strength, measured by a real number a_xy. We denote this interaction symbolically as a_xy ε_xy. Graphically, one may equally well think of a weighted (directed) edge in an interaction graph with X as its set of nodes:

x —a_xy→ y.

An interaction instance is a weighted superposition of interactions:

ε = ∑_{x,y∈X} a_xy ε_xy.

We record the interaction instance ε in the interaction matrix A ∈ ℝ^{X×X} with the interaction coefficients A_xy = a_xy. The interaction is symmetric if A^T = A and skew-symmetric if A^T = −A.

Conversely, each matrix A ∈ ℝ^{X×X} corresponds to some interaction instance

ε_A = ∑_{x,y∈X} A_xy ε_xy.

So we may think of ℝ^{X×X} as the interaction space relative to the set X. Moreover, the symmetry decomposition

A = A^+ + A^−

shows:

Every interaction instance ε on X is the superposition

ε = ε^+ + ε^−

of a symmetric interaction instance ε^+ and a skew-symmetric interaction instance ε^−. Moreover, ε^+ and ε^− are uniquely determined.

Binary interaction. In the binary case |X| = 2, interaction matrices are (2 × 2)-matrices. Consider, for example, the matrices

I = [1 0; 0 1]   and   I_m = [0 −1; 1 0].

I is symmetric and I_m is skew-symmetric. Superposition of these two matrices with real scalars a, b ∈ ℝ yields the interaction matrix

aI + bI_m = [a −b; b a],

which is the matrix representation of the complex number z = a + ib.

Note that the binary interaction space is 4-dimensional, however. So the complex numbers only describe a 2-dimensional subclass of binary interactions.

3.1. Interaction states

The norm of an interaction instance ε with interaction matrix A is the norm of the associated interaction matrix:

‖ε‖ = ‖A‖ = √⟨A|A⟩.

So ‖ε‖ ≠ 0 means that at least two members s, t ∈ X interact with strength A_st ≠ 0 and that the numbers

p_xy = A_xy²/‖A‖²

yield a probability distribution on the set of all possibly interacting pairs and offer a probabilistic perspective on ε:

A pair (x, y) of members of X is interacting nontrivially with probability p_xy.

Clearly, scaling ε to λε with a scalar λ ≠ 0 would result in the same probability distribution on X × X. From the probabilistic point of view, it therefore suffices to consider interaction instances ε with norm ‖ε‖ = 1.

We thus define:

The interaction system on X is the system 𝔖(X) with the set of states

𝔖_X = {ε | ε an interaction instance on X with ‖ε‖ = 1}.

In terms of the matrix representation of states, we have

𝔖_X = {A ∈ ℝ^{X×X} | ‖A‖ = 1}.
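
A minimal sketch (ours) of an interaction state and its induced probability distribution; the normalization realizes membership in 𝔖_X:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
A = A / np.linalg.norm(A)         # Frobenius normalization: ||A|| = 1, so A lies in S_X

p = A ** 2                        # p_xy = A_xy^2
assert np.isclose(p.sum(), 1.0)   # a probability distribution on X × X
```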

3.2. Interaction potentials

A potential F : X × X → ℝ defines a matrix with coefficients F_xy = F(x, y) and thus a scalar-valued linear functional

A ↦ ⟨A|F⟩ = ∑_{x,y∈X} F_xy A_xy

on the vector space ℝ^{X×X}. Conversely, every linear functional f on ℝ^{X×X} is of the form

f(A) = ⟨A|F⟩

with uniquely determined coefficients F_xy ∈ ℝ. So potentials and linear functionals correspond to each other.

On the other hand, the potential F defines a linear operator A ↦ F∘A on the space ℝ^{X×X}, where the matrix F∘A is the HADAMARD product of F and A with the coefficients

(F∘A)_xy = F_xy A_xy.

With this understanding, one has

⟨A|F∘A⟩ = ∑_{x,y∈X} F_xy A_xy².

Moreover, one computes, for A of norm ‖A‖ = 1,

∑_{x,y∈X} F_xy p_xy = ⟨A|F∘A⟩   with   p_xy = A_xy².

Summarizing, one finds:

If A ∈ 𝔖_X (i.e., if A represents an interaction state ε ∈ 𝔖_X), the parameters p_xy = A_xy² define a probability distribution on X × X. The expected value of the potential F in this state ε is

⟨A|F∘A⟩ = ∑_{x,y∈X} F_xy p_xy.
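
The following numpy sketch (our illustration) computes the expected potential value via the HADAMARD product:

```python
import numpy as np

rng = np.random.default_rng(5)
F = rng.standard_normal((3, 3))          # a potential F on X × X
A = rng.standard_normal((3, 3))
A = A / np.linalg.norm(A)                # an interaction state: ||A|| = 1

FA = F * A                               # Hadamard product: (F∘A)_xy = F_xy A_xy
expected = np.sum(A * FA)                # <A|F∘A> = sum_xy F_xy A_xy^2
assert np.isclose(expected, np.sum(F * A**2))
print(expected)
```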

3.3. Interaction in cooperative games

The interaction model offers a considerably wider context for the analysis of cooperation.

For an illustration, consider a cooperative TU-game Γ = (N, v) with collection 𝒩 of coalitions. v is a potential on 𝒩 but not on the set 𝒩 × 𝒩 of possibly pairwise interacting coalitions. However, there is a straightforward extension of v to 𝒩 × 𝒩, for example

v(S, T) = v(S ∩ T)   (S, T ∈ 𝒩).

Relative to a state with interaction matrix A, the expected value of v is

⟨A|v∘A⟩ = ∑_{S,T∈𝒩} v(S, T) A_ST².

In the special case of a state σ_S in which the coalition S interacts with itself with certainty (and hence no proper interaction among other coalitions takes place), we have

⟨σ_S|v∘σ_S⟩ = v(S, S) = v(S),

which is exactly the potential value of the coalition S in the classical interpretation of Γ.

Generalized cooperative games. A more comprehensive model for the study of cooperation among players would be structures of the type

Γ = (N, v),

where v is a potential on 𝒩 × 𝒩 (rather than just 𝒩). This point of view suggests studying game-theoretic cooperation within the context of interaction.

3.4. Interaction in infinite sets

Much of the current interaction analysis remains valid, with some modifications, for infinite sets X.

For example, we admit as descriptions of interaction states only those matrices A ∈ ℝ^{X×X} with the properties

(H1)  supp(A) = {(x, y) ∈ X × X | A_xy ≠ 0} is finite or countably infinite;
(H2)  ∑_{(x,y)∈supp(A)} A_xy² < ∞.

If the conditions (H1) and (H2) are met, we in fact represent interaction states in HILBERT spaces. To keep things simple, however, we retain the finiteness property of the agent set X in the current text and refer the interested reader to the literature5 for further details.

4. Quantum systems

Without going into the physics of quantum mechanics, let us quickly sketch the basic mathematical model and then look at the relationship with the interaction model. In this context, we think of an observable as a mechanism α that can be applied to a system 𝔔,

σ ↦ α(σ),

with the interpretation:

If 𝔔 is in the state σ, then α is expected to produce a measurement result α(σ).

4.1. The quantum model

There are two views on a quantum system 𝔔 relative to a set X. They are dual to each other (reversing the roles of states and observables) but mathematically equivalent.

The SCHRÖDINGER picture. In the so-called SCHRÖDINGER6 picture, the states of 𝔔 are presented as the elements of the set

𝕊_X = {v ∈ ℂ^X | ‖v‖ = 1}

of complex vectors of norm 1. An observable α corresponds to a (selfadjoint) matrix A ∈ ℍ_X and produces the real number

α(v) = ⟨v|Av⟩ = v*Av

when 𝔔 is in the state v. Recall from the discussion of the spectral decomposition in Section 2.1 that α(v) is the expected value of the eigenvalues ρ_x of A relative to the probabilities

p_{A,v}(x) = |⟨U_x|v⟩|²,

where the vectors U_x constitute a vector space basis of corresponding unitary eigenvectors of A.

An interpretation of the probabilities p_{A,v} could be as follows:

𝔔 is a stochastic system that shows the element x ∈ X with probability p_{A,v}(x) if it is observed under A in the state v:

The elements of X are weighted with the eigenvalues of A. The measurement value is the expected weight of the x produced under A.

EX. 9.8. The identity matrix I ∈ ℂ^{X×X} yields the standard probabilities7

p_{I,v}(x) = |v_x|².

The HEISENBERG picture. In the HEISENBERG8 picture of 𝔔, the selfadjoint matrices A ∈ ℍ_X take over the role of states while the vectors v induce measurement results. The HEISENBERG picture is dual9 to the SCHRÖDINGER picture. In both pictures, the expected values

α(v) = ⟨v|Av⟩

are thought to be the numbers resulting from measurements on 𝔔.

The HEISENBERG picture sees an element x ∈ X according to the scheme

x ↦ ρ_x

with probability p_{A,v}(x).

Densities and wave functions. The difference between the two pictures lies in the interpretation of the probability distribution p_{A,v} on the index set X relative to A ∈ ℍ_X and v ∈ 𝕊_X.

In the HEISENBERG picture, p_{A,v} is imagined to be implied by a possibly varying A relative to a fixed state vector v. Therefore, the elements A ∈ ℍ_X are also known as density matrices.

In the SCHRÖDINGER picture, the matrix A is considered to be fixed while the state vector v = v(t) may vary in time t. v(t) is called a wave function.

4.2. Evolutions of quantum systems

A quantum evolution

Φ : t ↦ φ_t

in (discrete) time t depends on a matrix-valued function t ↦ M_t, a state vector v, and a density A ∈ ℍ_X. The evolution Φ produces the real observation values

φ_t = ⟨M_t v|A M_t v⟩ = ⟨v|(M_t*AM_t)v⟩.

Notice that the matrices M_t*AM_t are selfadjoint. So the evolution Φ can be seen as an evolution of density matrices, which is in accordance with the HEISENBERG picture.

If ‖M_t v‖ = 1 holds for all t, the evolution Φ can also be interpreted in the SCHRÖDINGER picture as an evolution of state vectors:

t ↦ v_t = M_t v.

REMARK 9.1. The standard model of quantum mechanics assumes that evolutions satisfy the condition ‖M_t v‖ = 1 at any time t, so that the HEISENBERG and the SCHRÖDINGER pictures are equivalent.
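
The following sketch (ours) illustrates Remark 9.1 with the standard unitary choice M_t = exp(−itH) for a selfadjoint generator H; the generator and its construction are our assumptions, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (B + B.conj().T) / 2                  # selfadjoint generator (our assumption)
lam, V = np.linalg.eigh(H)

def M(t):
    # unitary M_t = exp(-i t H), built from the spectral decomposition of H
    return (V * np.exp(-1j * t * lam)) @ V.conj().T

D = rng.standard_normal((n, n))
A = (D + D.T) / 2                         # a density A ∈ H_X (here: real symmetric)

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = v / np.linalg.norm(v)                 # initial state vector

for t in range(4):
    Mv = M(t) @ v
    assert np.isclose(np.linalg.norm(Mv), 1.0)       # ||M_t v|| = 1 for all t
    phi_s = (Mv.conj() @ A @ Mv).real                # SCHRÖDINGER: <M_t v|A M_t v>
    A_t = M(t).conj().T @ A @ M(t)                   # HEISENBERG: density M_t* A M_t
    assert np.isclose(phi_s, (v.conj() @ A_t @ v).real)
```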

Markov coalition formation. Let 𝒩 be the collection of coalitions of the set N. The classical view on coalition formation sees the probability distributions p on 𝒩 as the possible states of the formation process and the process itself as a MARKOV10 chain. The METROPOLIS process11, for example, is such a MARKOV chain.

To formalize this model, let 𝒫 be the set of all probability distributions on 𝒩. A MARKOV operator is a linear map

μ : 𝒫 → 𝒫.

μ defines for every initial state p(0) a so-called MARKOV chain of probability distributions

p(t) = μ^t(p(0))   (t = 0, 1, 2, . . .).

Define now P_t ∈ ℝ^{𝒩×𝒩} as the diagonal matrix with p(t) = μ^t(p(0)) as its diagonal coefficient vector. P_t is a real symmetric matrix and therefore, in particular, a density.

Any state vector v gives rise to a quantum evolution with the observed values

⟨v|P_t v⟩ = ∑_{S∈𝒩} p_S(t)|v_S|².

For example, if e_S ∈ ℝ^𝒩 is the unit vector that corresponds to the coalition S, one has

⟨e_S|P_t e_S⟩ = p_S(t),

with the usual interpretation:

If the coalition formation proceeds according to the MARKOV chain p(t) = μ^t(p(0)), then an inspection at time t will find the coalition S to be active with probability p_S(t).
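
A minimal simulation sketch (ours; the MARKOV operator μ is a randomly generated column-stochastic matrix, and the coalition indexing is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
k = 4                                  # hypothetical indexing of the coalitions in N
M = rng.random((k, k))
M = M / M.sum(axis=0)                  # column-stochastic: a MARKOV operator mu

p = np.zeros(k); p[0] = 1.0            # initial distribution p(0)
e_S = np.eye(k)[3]                     # unit vector e_S of a fixed coalition S

for t in range(5):
    P_t = np.diag(p)                   # density P_t: p(t) on the diagonal
    print(t, e_S @ P_t @ e_S)          # <e_S|P_t e_S> = p_S(t)
    p = M @ p                          # p(t+1) = mu(p(t))
```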

4.3. The quantum perspective on interaction

Recalling the vector space isomorphism ℝ^{X×X} ≅ ℍ_X via the hermitian representation

A ↦ Â = A^+ + iA^−,

we may think of interaction states as manifestations of SCHRÖDINGER states of the quantum system with the state set

{C ∈ ℂ^{X×X} | ‖C‖ = 1},

or as normed representatives of HEISENBERG densities Â ∈ ℍ_X relative to the quantum system on X.

Principal components. An interaction instance A on X has a hermitian spectral decomposition

Â = ∑_{x∈X} λ_x U_x U_x*,

where the matrices U_x U_x* are the principal components of Â. The corresponding interaction instances A_x with Â_x = U_x U_x* are the principal components of A:

A = ∑_{x∈X} λ_x A_x.

Principal components V of interaction instances arise from SCHRÖDINGER states v = a + ib ∈ 𝕊_X with a, b ∈ ℝ^X in the following way. Setting

V̂ = vv*,

one has V̂ = (aa^T + bb^T) + i(ba^T − ab^T) and thus

V^+ = aa^T + bb^T   and   V^− = ba^T − ab^T.

The principal interaction instance V = V^+ + V^− hence has the underlying structure (a numerical sketch follows the list below):

(0)  Each x ∈ X has a pair (a_x, b_x) of weights a_x, b_x ∈ ℝ.
(1)  The symmetric interaction between two arbitrary elements x, y ∈ X is V^+_xy = a_x a_y + b_x b_y.
(2)  The skew-symmetric interaction between two arbitrary elements x, y ∈ X is V^−_xy = b_x a_y − a_x b_y.
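
The structure (0)–(2) can be checked numerically; the following sketch (ours) rebuilds vv* from the parts V^+ and V^−:

```python
import numpy as np

rng = np.random.default_rng(8)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = v / np.linalg.norm(3 * [1.0] @ np.eye(3)) if False else v / np.linalg.norm(v)  # SCHRÖDINGER state
a, b = v.real, v.imag

V_hat = np.outer(v, v.conj())                  # the principal component vv*
V_plus = np.outer(a, a) + np.outer(b, b)       # V^+_xy = a_x a_y + b_x b_y
V_minus = np.outer(b, a) - np.outer(a, b)      # V^-_xy = b_x a_y - a_x b_y

assert np.allclose(V_hat, V_plus + 1j * V_minus)   # vv* = V^+ + i V^-
V = V_plus + V_minus                               # the principal interaction instance
assert np.allclose((V + V.T) / 2, V_plus)
assert np.allclose((V - V.T) / 2, V_minus)
```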

4.4. The quantum perspective on cooperation

Let N be a (finite) set of players and 𝒩 a family of coalitions. From the quantum point of view, a (SCHRÖDINGER) state of N is a complex vector v ∈ ℂ^N of norm ‖v‖ = 1, which implies the probability distribution p_v with the probabilities

p_v(i) = |v_i|²

on N. In the terminology of fuzzy cooperation, p_v describes a fuzzy coalition:

Player i ∈ N is active in state v with probability p_v(i).

Conversely, if w ∈ ℝ^N is a non-zero fuzzy coalition with component probabilities 0 ≤ w_i ≤ 1, the vector

√w = (√w_i | i ∈ N)

may be normalized to a SCHRÖDINGER state

v = √w/‖√w‖.

In the same way, a vector v ∈ ℂ^𝒩 of norm ‖v‖ = 1 describes a SCHRÖDINGER state of interaction among the coalitions of N. It is particularly instructive to look at the interactions V of principal component type.

As we have seen in Section 4.3 above, V arises as follows:

(0)  The interaction V on 𝒩 is implied by two cooperative games Γ_a = (N, a) and Γ_b = (N, b).
(1)  Two coalitions S, T interact symmetrically via V^+_ST = a(S)a(T) + b(S)b(T).
(2)  Two coalitions S, T interact skew-symmetrically via V^−_ST = b(S)a(T) − a(S)b(T).

5. Quantum games

A large part of the mathematical analysis of game-theoretic systems follows the guideline:

Represent the system in a mathematical structure, analyze the representation mathematically and re-interpret the result in the original game-theoretic setting.

When one chooses a representation of the system in the same spaces as those usually employed for the representation of quantum systems, one automatically arrives at a “quantum game”, i.e., at a quantum-theoretic interpretation of a game-theoretic environment.

So we understand by a quantum game any game on a system whose states are represented as quantum states and leave it to the reader to review game theory in this more comprehensive context.

6. Final remarks

Why should one pass to complex numbers and the hermitian space ℍ_X rather than the euclidian space ℝ^{X×X} if both spaces are isomorphic real HILBERT spaces?

The advantage lies in the algebraic structure of the field ℂ of complex numbers, which yields the spectral decomposition (70), for example. It would not be impossible, but somewhat “unnatural”, to translate this structural insight back into the environment ℝ^{X×X} without appeal to complex algebra.

Another advantage becomes apparent when one studies evolutions of systems over time. In the classical situation of real vector spaces, MARKOV chains are important models for system evolutions. It turns out that this model generalizes considerably when one passes to the context of HILBERT spaces.12

The game-theoretic ramifications of this approach are to a large extent unexplored at this point.


_____________________________

1 e.g., NERING [31].

2 The spectrum of a matrix is, by definition, its set of eigenvalues.

3 C. HERMITE (1822–1901).

4 ℍ_X is not a complex vector space: The product zC of a hermitian matrix C with a complex scalar z is not necessarily hermitian.

5 e.g., HALMOS [23] or WEIDMANN [46].

6 E. SCHRÖDINGER (1887–1961).

7 cf. Ex. 9.6.

8 W. HEISENBERG (1901–1976).

9 In the sense of Section 2.1.

10 A. A. MARKOV (1856–1922).

11 cf. Chapter 2.

12 FAIGLE and GIERZ [11].