Optimum Gain Analysis Using the Principle of Game Theory
 
Musa Danjuma SHEHU1 and Sunday A. REJU2

 

1Department of Mathematics and Computer Science, Federal University of Technology

Minna, Nigeria

2 National Open University of Nigeria, Lagos, Victoria Island, Lagos, Nigeria

 

shehumusa_23@yahoo.com, sunnyareju@gmail.com

 

Abstract

The formulation of game problems is an important tool for solving many practical problems. For example, organisations need to decide how to locate their branches across different localities for optimum profit. This paper employs the principle of game theory to derive the strategies required for the optimal location of branches by two competing organisations.

Keywords

Pay-off; Discrete; Continuous; Matrix Games; Maximizer; Minimizer.

 

 

Introduction

 

Game theory can be traced back to John von Neumann, who developed the original theory in 1928 [6]. Whenever there is a situation of conflict and competition between two opponents, we refer to the situation as a "game". The opponents of a game may be individuals, groups of individuals or organizations [3], and are usually called "players". Each player has a number of choices called "strategies", which may be finite or infinite. A player is assumed to choose his strategies without any knowledge of the strategies selected by the other player. The net outcome of all the strategies chosen by the players may represent a gain, a loss or a draw for any particular player.

Game theory is concerned with optimization problems involving two (or more) players with conflicting interests. Game problems can be classified as discrete or continuous. Discrete game problems are often represented in matrix form, as an n × n or m × n matrix, as illustrated in Table 1 [1], [6]:

 

Table 1. A Typical Game Matrix

Player R chooses          Player C chooses
              C1      C2      C3      ...     Cn
R1            a11     a12     a13     ...     a1n
R2            a21     a22     a23     ...     a2n
R3            a31     a32     a33     ...     a3n
...           ...     ...     ...     ...     ...
Rm            am1     am2     am3     ...     amn

 

In a continuous game the choices of R and C are continuous instead of discrete [2]. The game is therefore described by a continuous pay-off function G(R, C) instead of the pay-off matrix Gij used in discrete games.

We look for a pair of choices (R°, C°) such that

G(R°, C) ≤ G(R°, C°) ≤ G(R, C°) for all R, C                                                      (1)

The necessary and sufficient conditions for (R°, C°) are

∂G/∂R = 0,  ∂G/∂C = 0                                                                             (2)

∂²G/∂R² ≥ 0,  ∂²G/∂C² ≤ 0                                                                         (3)

Any (R°, C°) satisfying the sufficient conditions is called a game-theoretic saddle point [2], [5].
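To make conditions (2) and (3) concrete, the following minimal sketch checks them for a hypothetical continuous pay-off function G(R, C) = R² − C², chosen here only for illustration (it is not from this paper) because it is convex in R and concave in C, matching the sign convention above. The sketch assumes the SymPy library is available.

```python
import sympy as sp

R, C = sp.symbols('R C', real=True)

# Hypothetical pay-off function, chosen only for illustration: it is
# convex in R and concave in C, matching conditions (2)-(3) above.
G = R**2 - C**2

# Necessary conditions, equation (2): both partial derivatives vanish.
stationary = sp.solve([sp.diff(G, R), sp.diff(G, C)], [R, C], dict=True)
print(stationary)                      # [{R: 0, C: 0}]  ->  R° = C° = 0

# Sufficient conditions, equation (3), evaluated at (R°, C°) = (0, 0).
point = {R: 0, C: 0}
print(sp.diff(G, R, 2).subs(point))    # 2,  so d²G/dR² >= 0 holds
print(sp.diff(G, C, 2).subs(point))    # -2, so d²G/dC² <= 0 holds
```

For this example the pair (R°, C°) = (0, 0) satisfies both conditions, so it is a game-theoretic saddle point in the sense defined above.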

 

 

Minimax (Maximin) Principle

 

In any game problem, each player is interested in determining his own "optimal" strategy. However, because of the conflicting nature of the problem and the lack of information regarding the specific strategies selected by the other player(s), optimality has to be based on a conservative principle [6], [7].

Because the maximin (minimax) principle is central to this paper, we illustrate it with the following example.

We consider the two-person game shown in Table 2.

 

Table 2. A (3 × 4) Discrete Game Matrix

Player R                  Player C
            C1         C2         C3          C4
R1          G11 = 6    G12 = 1    G13 = 7     G14 = 3
R2          G21 = 4    G22 = 3    G23 = 5     G24 = 6
R3          G31 = 5    G32 = 1    G33 = -2    G34 = 5

 

If Player R (the maximizer) selects his first strategy (R1), he may gain 6, 1, 7 or 3 depending on the strategy selected by Player C.

Thus Player R is guaranteed a gain of at least 1 = min(6,1,7,3) if he selects strategy R1, irrespective of the strategy selected by Player C.

Similarly, R is guaranteed a gain of at least

3 = min(4,3,5,6) if he selects strategy R2, and

-2 = min(5,1,-2,5) if he selects strategy R3.

Thus, for Player R to maximize his gain irrespective of the strategies of C, he has to maximize his minimum gain, i.e.

3 = max(1,3,-2), which corresponds to strategy R2. This is called the maximin value of the game for Player R.

Similarly, if Player C chooses strategy C1 he loses 6, 4 or 5 depending on the strategy selected by Player R.

Thus Player C loses no more than

6 = max(6,4,5) for C1 strategy

3 = max(1,3,1) for C2 strategy

7 = max(7,5,-2) for C3 strategy

6 = max(3,6,5) for C4 strategy

Therefore, for Player C to minimize his loss irrespective of Player R, he has to minimize his maximum loss by selecting

3 = min(6,3,7,6), which corresponds to strategy C2.

This is called the minimax value of the game for Player C.

Therefore:

max_R min_C Gij  =  3  =  min_C max_R Gij

(R plays first)           (C plays first)

Since the maximin value equals the minimax value, the game has a saddle point of value 3 at (R2, C2).
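The maximin/minimax calculation above can be reproduced mechanically. The following minimal sketch, in plain Python, encodes the pay-off matrix of Table 2 and recovers the row minima, the column maxima and the common value 3; the variable names are ours, chosen only for illustration.

```python
# Pay-off matrix of Table 2: rows are Player R's strategies (R1-R3),
# columns are Player C's strategies (C1-C4).
G = [
    [6, 1,  7, 3],   # R1
    [4, 3,  5, 6],   # R2
    [5, 1, -2, 5],   # R3
]

# Player R (the maximizer) looks at the worst outcome of each row ...
row_minima = [min(row) for row in G]            # [1, 3, -2]
maximin = max(row_minima)                       # 3, attained with strategy R2

# ... while Player C (the minimizer) looks at the worst outcome of each column.
col_maxima = [max(col) for col in zip(*G)]      # [6, 3, 7, 6]
minimax = min(col_maxima)                       # 3, attained with strategy C2

# maximin == minimax, so the game has a saddle point of value 3 at (R2, C2).
print(maximin, minimax)                         # 3 3
```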

Methodology

 

Game theory can be used to solve problems involving conflict and competition between two or more opponents. The approach here is to treat the opponents as organizations guided by a particular feasibility study.

We consider two competing banks, one large and one small, planning to open their branches in a city with two business locations (locality 1 and locality 2) [4], [6].

According to a feasibility study, about 70% of the population of the city live near locality 1 and the remaining 30% live near locality 2. The two banks estimate that if both are located in the same locality, the larger one will get 60% of the business of the city. On the other hand, if the two banks are located in different localities, the larger one will get 80% of the business of the locality in which it is located and 40% of the business of the locality in which the smaller one is located.

We employ the principle of game theory in determining the optimal strategies for the two banks with the assumption that the banks have no other competitors in the city.

The pay-off matrix for the game can be set up as follows:

 

Table 3. (2×2 Matrix) Game Representation

 

 

                                      Smaller Bank (S)
                                      C1 = locality 1    C2 = locality 2
Larger Bank (L)   R1 = locality 1     a11                a12
                  R2 = locality 2     a21                a22

 

Let L = larger bank and S = smaller bank

The pay-off aij represents the total percentage of the city's business that the larger bank gets if it is located in locality i and the smaller one in locality j. The elements a11 and a22 correspond to the cases where both banks are located in the same locality; hence the larger bank will get 60% of the total business of the city, i.e.

a11 = a22 = 60

If L is located in locality 1 and S in locality 2, then:

L gets 80% of the business of locality 1 (70% population) and 40% of that of locality 2 (30% population) which gives a total of:

80(0.7) + 40(0.3) = 68%

That is,

a12 = 68

Similarly, if L is located in locality 2 and S in locality 1, then L gets 80% of the business of locality 2 (30% population) and 40% of that of locality 1 (70% population) which gives a total of:

80(0.3) + 40(0.7) = 52%

That is,

a21 = 52

The result can be represented in the pay-off matrix of Table 4 as follows:

 

Table 4. (2×2 Matrix) Game Representation with Pay-off Values

 

 

                                      Smaller Bank (S)
                                      C1 = locality 1    C2 = locality 2
Larger Bank (L)   R1 = locality 1     a11 = 60           a12 = 68
                  R2 = locality 2     a21 = 52           a22 = 60
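As a check on the methodology, the following minimal sketch rebuilds Table 4 from the stated population shares (70%/30%) and business splits (60%, 80%, 40%) and then applies the maximin/minimax principle of the previous section. The function and variable names (payoff, pop, and so on) are introduced here only for illustration.

```python
# Population shares of the two localities and the business splits stated above.
pop = {1: 70, 2: 30}           # percentage of the population near each locality
SAME, OWN, OTHER = 60, 80, 40  # larger bank's share: co-located / own / other locality

def payoff(i, j):
    """Total percentage of the city's business won by the larger bank when it
    is in locality i and the smaller bank is in locality j (the pay-off aij)."""
    if i == j:
        return SAME
    return (OWN * pop[i] + OTHER * pop[j]) / 100

A = [[payoff(i, j) for j in (1, 2)] for i in (1, 2)]
print(A)                                     # [[60, 68.0], [52.0, 60]] -- Table 4

# Maximin choice for the larger bank, minimax choice for the smaller bank.
maximin = max(min(row) for row in A)         # 60, attained with R1 = locality 1
minimax = min(max(col) for col in zip(*A))   # 60, attained with C1 = locality 1
print(maximin, minimax)                      # equal, so (locality 1, locality 1)
                                             # is the saddle point of the game
```

The saddle point found here, both banks in locality 1 with value 60, is the result discussed in the Conclusion.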

 

 

Conclusion

 

The larger bank's optimal strategy is to locate its branch in locality 1: its guaranteed share there (60%) is 8% more than the 52% it is guaranteed in locality 2. Similarly, the smaller bank's optimal strategy is also to locate its branch in locality 1, because doing so holds the larger bank to 60% rather than the 68% it would take if the smaller bank chose locality 2; the smaller bank thereby keeps 8% more of the business for itself.

Thus the optimal strategy of both banks is to locate their branches in locality 1. In this case, the larger bank gets 60% of the business of the city whereas the smaller one gets the remaining 40% of the business.

 

 

References

 

1. Emilio O. R., Modern Optimal Control, Brooks/Cole Publishing Company, California, 1989.

2. Hanson Y., Applied Optimal Control, Wiley-Interscience, New York, 1969.

3. Jack M., Introduction to Optimal Control, MIR Publishers, Moscow, 1977.

4. Raiffa L., Games and Decisions, Princeton University Press, N.J., 1957.

5. Straffin Philip D., Game Theory and Strategy, MIT Press, Cambridge, 1993.

6. Rao S. S., Optimization Theory and Application, Second Edition, Wiley Eastern Limited, 1984.

7. Shehu D. M., Optimal Analysis and Application of Discrete Games in Decision Making Environment, in Proceedings of the SSCE Conference, Minna, Nigeria, 2006.