Multi-Criteria Evaluation (MCE) and Multi-Objective Land Allocation (MOLA) using IDRISI:
The Kathmandu Agriculture and Carpet Industry Suitability Assessment Project




One of the most important applications in GIS is that of decision support. In fact, many
of the analyses performed with other IDRISI modules are intended to support decision making.
The modules within the Decision Support portion of IDRISI are designed to address multi-objective,
multi-criteria resource allocation decision problems, as well as problems of assessing and
incorporating uncertainty in the decision-making process.


Gaining Experience with MCE and MOLA

In the next exercise you will use IDRISI's Decision Support modules to resolve a conflict between competing objectives: assessing land suitability for both agriculture and the carpet industry in the Kathmandu Valley. These two main objectives conflict because both land uses cannot occupy the same geographic space. The problem can be outlined as follows:

The data for this exercise resides at:


Copy and save this file (approximately 1.2 Mb in size) to a local drive on a machine of your choice (C: drive or D: drive, wherever you can find space), placing it in a directory under your name. Unzip the data in that directory (this requires at least 3.8 Mb, plus space for the data you will create) and then delete the KATMANDU.ZIP file from the local drive. You will work with these data individually, following the exercise as detailed in the section provided (see Instructor) from the United Nations Institute for Training and Research (UNITAR) workbook Explorations in Geographic Information Systems Technology, Volume 4: GIS and Decision Making, compiled by the IDRISI Project at Clark University.

Since this exercise involves some new and advanced concepts in decision support systems, and because it is somewhat lengthy, we will conduct it over the next two lab periods: Part A, covering Multicriteria Weighting and the Single Objective Solution, in the first, and Part B, addressing Multiple Objective Decision Making, in the second.

Provide answers to the questions posed in Exercises 11a and 11b, as well as digital copies of the maps FINAL1 and FINAL2.


A Summary of Some of the Decision Support Modules and Related Programs

WEIGHT is used to develop a set of relative weights for a group of factors in a multi-criteria evaluation. The weights are developed by providing a series of pairwise comparisons of the relative importance of factors to the suitability of pixels for the activity being evaluated. These pairwise comparisons are then analyzed to produce a set of weights that sum to 1. The weights can then be used with the factors to produce a weighted linear combination using the MCE module (or the SCALAR and OVERLAY modules). The procedure by which the weights are produced follows the logic developed by T. Saaty under the Analytical Hierarchy Process (AHP).

Before WEIGHT can be run, use the EDIT module to create a pairwise comparison file. The comparison file has a ".pcf" extension. The pairwise comparison file contains the lower-left triangular half of a reciprocal matrix of pairwise comparisons. To create this file, first start with a piece of paper and draw a matrix of boxes to hold the comparisons. For example, if three factors were to be weighted, the matrix would be a 3 x 3 matrix. Label each column and row with the name of one of the variables (in the same order across the columns as down the rows). Only the lower-left triangular half actually needs to be evaluated, since each entry in the upper-right half is simply the reciprocal of its counterpart in the lower-left half.

To rate each pairwise comparison, move from column to column, left to right. For each column, consider the importance of the variable in each row relative to the variable in that column (considering only those rows that fall in the lower-left triangular half for that column). Fill in all the comparisons for a column before moving on to the next. Variables are rated according to the following 9-point continuous scale:

1/9  extremely less important
1/7  very strongly less important
1/5  strongly less important
1/3  moderately less important
1    equally important
3    moderately more important
5    strongly more important
7    very strongly more important
9    extremely more important

Note that since the diagonal of the matrix represents the comparison of each variable with itself, these cells should all contain a 1. Note also that you are rating the relative importance of variables in each case. Thus, if two variables were equally of great importance they would receive a rating of 1 just as would two variables that were equally of little importance! Note further that the top-right triangular half could be filled in (if one wished) by taking the reciprocal of the corresponding location in the lower-left triangular half. Thus, for example, if column 2 row 4 had a rating of 3, then row 2 column 4 would be assigned a value of 1/3.

Once the matrix has been completed on paper, it should be entered into the ".pcf" file using EDIT. To do so, the first line should contain a single number indicating the number of variables involved. The lines following this should then contain the names of the variables in the same order as they appear from left to right across the columns (and from top to bottom down the rows). The remaining lines should then contain the lower-left triangular half of the pairwise comparison matrix. Each line should end with a 1 (since this is the rating of the variable against itself along the diagonal). Here is an example for the development of weights for determining the suitability of land for industrial uses:

5
roadprox
townprox
slope
smalhold
parkdist
1
1/3 1
1 4 1
1/7 2 1/7 1
1/2 2 1/2 4 1

Thus in this example, the slope factor was considered to be somewhat more important (a rating of 4) than proximity to the town for the siting of industrial lands. Note that since the variable names can be up to 8 characters in length, it makes sense to use the same names as the file names of the factors themselves. Also note that reciprocal ratings are entered as direct fractions (thus you can enter 1/3 rather than 0.3333).
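
If you would like to inspect a comparison file outside IDRISI, the short Python sketch below (assuming numpy is available) reads a ".pcf" file laid out as above and rebuilds the full reciprocal matrix. The file name industry.pcf is hypothetical, and the parsing assumes that variable names contain no spaces, as in the example.

from fractions import Fraction
import numpy as np

def read_pcf(path):
    """Read a .pcf pairwise comparison file; return (names, full reciprocal matrix)."""
    with open(path) as f:
        tokens = f.read().split()
    n = int(tokens[0])                                       # first value: number of variables
    names = tokens[1:1 + n]                                   # variable names, one per line
    values = [float(Fraction(t)) for t in tokens[1 + n:]]     # '1/3' is parsed exactly
    A = np.ones((n, n))
    k = 0
    for i in range(n):                                        # fill the lower-left triangle row by row
        for j in range(i + 1):
            A[i, j] = values[k]
            A[j, i] = 1.0 / values[k]                         # upper-right half is the reciprocal
            k += 1
    return names, A

names, matrix = read_pcf("industry.pcf")                      # hypothetical file built from the example above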

After the pairwise comparison file has been created, WEIGHT can be run. Simply input the name of the pairwise comparison file, after which it will display the weights. In addition, it will indicate the Consistency Ratio of the matrix. This value indicates the probability that the ratings were randomly assigned. Values less than 0.10 indicate good consistency. When values exceed 0.10, the matrix of weightings should be re-evaluated, and a consistency index matrix will be presented.

This matrix shows how the individual ratings would have to be changed if they were to be perfectly consistent with the best fit weightings achieved. If the overall Consistency Ratio is greater than 0.1, examine this matrix to see the pairwise comparison with the largest deviation. This is the most inconsistent rating. (Remember that the matrix contains a variety of ways in which any pair can be compared. Thus, in addition to a direct rating of variable A to variable B, there are ratings such as A to C and C to B that allow the same kind of comparison. Thus the consistency of ratings can be evaluated.) The deviation noted for this more inconsistent rating indicates how it would need to be changed to be consistent with the best fit weightings. If, for example, it indicated a -2, this would mean that it would need to move 2 points down the scale. This would be equivalent, for example, to decreasing the rating from a 5 to a 3, or equally, from a 1/3 to a 1/5. Thus the deviations noted are in positions along the scale. Fractional positions are possible (thus, if the deviation was +1.8, and the original weighting was 1/5, this indicates that the new rating would need to be 1/(3.2), or 1.8 positions higher on the scale).

Perhaps the best way to re-evaluate a comparison is to generate a new rating without looking at the amount of deviation suggested by the consistency index, and only then compare your new rating against the suggested deviation before re-rating. If you do not accept the extent of deviation suggested for a particular comparison, this indicates that all of the other comparisons involving those variables will need to be re-evaluated. Once a new rating has been established, use EDIT to modify the ".pcf" file and run WEIGHT again. The complete weighting scheme will now be modified and a new set of consistency index values generated. Continue in this fashion, re-evaluating the most deviant rating, one rating at a time, until the consistency ratio drops below 0.10. Although it is possible to continue this re-evaluation process until perfect consistency is achieved, there is little appreciable change in the weights once the consistency ratio drops below 0.10, so it is usual to stop at this point.
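
WEIGHT performs the weight and consistency calculations for you, but it can help to see the standard Saaty procedure written out. The Python sketch below derives weights from the principal eigenvector of the full reciprocal matrix and computes a consistency ratio from Saaty's published random indices; IDRISI's exact numerical method may differ in detail, so treat this only as an illustration of the logic.

import numpy as np

# Saaty's random consistency index (RI) for matrix orders 1 through 10
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_weights(A):
    """Return (weights, consistency ratio) for a full reciprocal comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                   # weights sum to 1
    ci = (eigvals[k].real - n) / (n - 1)              # consistency index
    cr = ci / RI[n - 1] if RI[n - 1] > 0 else 0.0     # consistency ratio
    return w, cr

# Example, using the hypothetical matrix read in the previous sketch:
# weights, cr = ahp_weights(matrix)
# A cr below 0.10 indicates acceptable consistency.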

MCE is a decision support tool for Multi-Criteria Evaluation. A decision is a choice between alternatives (such as alternative actions, land allocations, etc.). The basis for a decision is known as a criterion. In a Multi-Criteria Evaluation, an attempt is made to combine a set of criteria to achieve a single composite basis for a decision according to a specific objective. For example, a decision may need to be made about what areas to allocate to industrial development. Criteria might include proximity to roads, slope gradient, exclusion of reserved lands, and so on. Through a Multi-Criteria Evaluation, these factors may be combined to form a single suitability map from which the final choice will be made.

Criteria may be of two types: factors and constraints. Factors are continuous in nature (such as the slope gradient or road proximity factors mentioned above) and are combined by means of a weighted linear combination. Constraints are Boolean in character (such as the reserved lands constraint in the example above) and serve to exclude certain areas from consideration. The MCE procedure starts by multiplying each factor by a weight and then adding the results. Then the constraints are applied by successive multiplication to "zero out" excluded areas.
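
Conceptually, the weighted linear combination with constraints amounts to the few lines of Python below. This is only a sketch, assuming the factors have already been standardized to a common 0-255 scale and the weights sum to 1; the function name mce is illustrative, not part of IDRISI.

import numpy as np

def mce(factors, weights, constraints):
    """Weighted linear combination of standardized factors, masked by Boolean constraints.

    factors:     arrays standardized to a common suitability scale (e.g. 0-255)
    weights:     factor weights summing to 1 (e.g. produced by WEIGHT)
    constraints: 0/1 arrays; a 0 excludes the cell from consideration
    """
    suitability = sum(w * np.asarray(f, dtype=float) for w, f in zip(weights, factors))
    for c in constraints:
        suitability = suitability * c        # successive multiplication zeroes out excluded areas
    return np.round(suitability).astype(np.uint8)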

MCE requires that you specify the number of constraints and the names of the files that will be used as constraints. Constraints should be Boolean maps with zeros in areas that are excluded from consideration and ones elsewhere.

Then specify the number of factors to be used and their respective file names and weights. Factors must be in byte binary format with a standard scaling (i.e., all factors must use the same scaling system). For example, they might all have values that range from 0-255, or 0-99. This scaling can be achieved with STRETCH using the simple linear stretch from minimum to maximum (with 256 levels, for example, to produce values from 0-255). It is important, however, that all factors be standardized to a uniform scale. In addition, the high values must always represent areas that are more suitable. Thus, for example, if low slopes are more important for industrial development than high slopes, the low slopes must have higher values in the standardized scale. This can be done by using STRETCH to create a scale of the correct range and subtracting the result (using OVERLAY) from a uniform image of the maximum possible value (created with INITIAL).
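
As a sketch of what that standardization looks like in Python (numpy assumed; the slope array and its 0-60 percent range are invented for illustration), a simple linear stretch followed by subtraction from 255 produces a byte factor in which low slopes score high:

import numpy as np

def stretch_0_255(factor, lo=None, hi=None):
    """Linearly rescale a raster to 0-255 (the simple linear stretch in STRETCH)."""
    arr = np.asarray(factor, dtype=float)
    lo = arr.min() if lo is None else lo
    hi = arr.max() if hi is None else hi
    out = (arr - lo) / (hi - lo) * 255.0
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# Hypothetical slope raster in percent; low slopes should score HIGH for suitability,
# so the stretched image is subtracted from a uniform 255 image (INITIAL + OVERLAY in IDRISI).
slope = np.random.uniform(0, 60, size=(100, 100))
slope_factor = 255 - stretch_0_255(slope)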

STRETCH rescales image values to fall within a range from the data minimum (or a user-defined lower bound) to a user-specified upper limit, typically in preparation for display with DISPLAY LAUNCHER. STRETCH first requires that you select a stretch type: linear, histogram equalization, or linear with saturation.

With a linear stretch, a new image is created by linearly scaling values between a specified minimum and maximum limit. All values greater than or equal to the maximum are given the highest output class value while all those equal to or less than the minimum are given the lowest output class value.

With histogram equalization, the output image is formed such that an equal number of input pixels fall into each ordered class. A histogram of the resulting image will thus appear flat -- hence the name. In theory (i.e., Information Theory), this leads to an image that carries the maximum amount of information for any given number of classes. However, this does not imply that the resulting image is more meaningful. In fact, since the nature of the histogram has been altered, you will have lost one of the more informative characteristics of the image. However, in cases where it is difficult to develop a good visual display, this will usually provide an excellent visual result.
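
A minimal sketch of the idea, assuming a byte image held in a numpy array: build the cumulative histogram and use it as a lookup table, so that each output class receives roughly the same number of pixels.

import numpy as np

def histogram_equalize(img, levels=256):
    """Reassign byte values so the output classes hold roughly equal pixel counts."""
    img = np.asarray(img, dtype=np.uint8)
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / img.size                    # cumulative proportion of pixels
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]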

Linear with saturation forces a range of extreme values to all have the same output class. Saturation can be very useful in the preparation of images for visual display, since it concentrates the output values on the less extreme (and more frequently occurring) values. With the linear with saturation option, you can force a specific percentage of the image pixels to take on the highest and lowest class values. Typically, saturating the image by 2.5 - 5% works quite well for visual displays.
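
The same operation can be sketched as a percentile clip followed by a linear rescale (again assuming numpy; the 2.5 percent default simply mirrors the suggestion above):

import numpy as np

def stretch_with_saturation(img, pct=2.5, levels=256):
    """Linear stretch that saturates the lowest and highest pct percent of pixels."""
    lo, hi = np.percentile(img, (pct, 100 - pct))
    out = (np.asarray(img, dtype=float) - lo) / (hi - lo) * (levels - 1)
    return np.clip(np.round(out), 0, levels - 1).astype(np.uint8)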

Specify the name of the input image and enter a name for the output image. If you are using a linear with saturation stretch, indicate the percent at each end of the scale to be saturated. The default is set to 5%.

Choose whether you want to leave out zero as the background value. If level zero represents a background value rather than a true data value, you can choose to leave it out of the stretch calculations.

If you selected the simple linear stretch, specify the parameters (bounds or scaling points) of the input image if they are other than the minimum and maximum.

Next, specify the value parameters of the output image. The defaults are 256 levels with 0 having the lowest value and 255 having the highest value. Finally, enter a title for the new file and specify the value units.

OVERLAY produces a new image from the data of two input images. New values result from applying one of the nine possible operations to the two input images, referred to as the first and second images during program operation. OVERLAY requires that you input the names of the first and second images as well as a name for the output image. You are then required to choose an overlay option from the list:

Pay careful attention to the relationship between these two images in the overlay option you choose. For example, image 1 / image 2 does not yield the same result as image 2 / image 1. Image order does not affect commutative operations such as addition and multiplication, but it does affect all of the other overlay functions. Finally, enter a title for the output image and the value units.
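
A two-line numpy illustration of why order matters for the ratio option (the arrays are invented for the example):

import numpy as np

first = np.array([[10.0, 20.0], [30.0, 40.0]])    # hypothetical first image
second = np.array([[2.0, 4.0], [5.0, 8.0]])       # hypothetical second image

ratio_1_2 = first / second        # image 1 / image 2
ratio_2_1 = second / first        # image 2 / image 1 -- a different result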

Once the standardized factor images have been created, you will next need a set of weights that indicate the relative importance of each factor to the activity under consideration. These weights must sum to 1. The module named WEIGHT can be used for this purpose.

Finally, enter a name and title for the output image to be created. After all the criteria (and factor weights) have been entered, the combination process will begin. The final result will be a suitability map in byte binary form with values in the same range as the original standardized factor maps.

RANK rank orders the cells in a byte binary image. Its primary application is in decision making where a specific area (or number of cells) is required that contains the best, or worst, cells according to some index. By ranking cells and then reclassifying the result, a specific number of the best or worst ranks can be determined.

MOLA provides a procedure for solving Multi-Objective Land Allocation problems for cases with conflicting objectives. Based on the information from a set of suitability maps (one for each objective), the relative weights to assign to the objectives, and the amount of area to be assigned to each, MOLA determines a compromise solution that attempts to maximize the suitability of lands for each objective given the weights assigned.
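
Before turning to MOLA's inputs, here is a sketch of the rank-and-reclassify idea in Python (numpy assumed; the random suitability values and the 1500-cell requirement are invented for illustration):

import numpy as np

def rank_cells(suitability, best_first=True):
    """Rank the cells of a suitability image (rank 1 = most suitable when best_first=True)."""
    flat = np.asarray(suitability, dtype=float).ravel()
    order = np.argsort(-flat if best_first else flat)    # cell indices from best to worst
    ranks = np.empty(flat.size, dtype=np.int64)
    ranks[order] = np.arange(1, flat.size + 1)
    return ranks.reshape(np.asarray(suitability).shape)

# Reclassifying the ranks keeps a fixed number of the best cells:
suitability = np.random.randint(0, 256, size=(100, 100))
best_cells = rank_cells(suitability) <= 1500             # Boolean map of the 1500 best cells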

For input, MOLA requires a set of ranked suitability maps for each objective. These will have been produced using the RANK module from suitability maps scaled to a byte (0-255) range. These suitability maps will most likely have been produced by means of the multi-criteria evaluation module named MCE.

MOLA first requires that you enter the number of objectives to be incorporated into the analysis. In this version, up to 15 may be entered (although typical problems have only 2-3). Then input a name for the output image to be produced.

MOLA next requires that you specify the total areal tolerance to be used. The default is 100 cells. The areal tolerance refers to the point at which MOLA will decide that it has come close enough to satisfying the area needs of the objectives for it to stop its iterations. The default tolerance of 100 cells indicates that MOLA can stop when all objectives are within 100 cells of their desired area needs. A value of zero may be set, in which case an exact solution is found.

You then need to enter a descriptive caption and a weight for each objective, as well as the rank map and areal requirement. The captions are used for constructing a legend for the output map and can be up to 12 characters in length. The weight determines the relative influence that objective will have in resolving conflicting claims for land. You may enter any value you wish, but the number must have meaning in a relative sense. For example, entering a value of 10 for one objective and 20 for another indicates that the latter should have twice the strength of the former.

Then input the name of the ranked (suitability) map to be used for each objective, along with its areal requirement (specified in cells). Click on the right arrow of the spin button to open additional blank input boxes for the remaining objectives. The number underneath indicates which objective you are currently entering settings for.

After all objectives have been entered with their respective weights, MOLA divides each weight by the sum of weights to convert all weights to a 0-1 range such that they add up to a value of 1. For example, with only two objectives given weights of 10 and 20, these will then be divided by the sum (30) to arrive at final weights of 0.33 and 0.67. Finally, enter a title for the output image to be created. Processing will then begin using an iterative approach.
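
The real MOLA module works iteratively, reallocating contested cells until every objective is within the areal tolerance. The Python sketch below is only a simplified, single-pass illustration of the two ideas described above, weight normalization and weighted conflict resolution; the function name, the score formula, and the random rank images are invented for the example and are not IDRISI's actual algorithm.

import numpy as np

def mola_single_pass(ranked, weights, areas):
    """Single-pass illustration of MOLA-style allocation (0 = unallocated in the output).

    ranked:  one rank image per objective (rank 1 = best cell)
    weights: relative objective weights (any positive numbers)
    areas:   areal requirement, in cells, for each objective
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                     # normalize so the weights sum to 1
    n_cells = ranked[0].size
    claims = [r <= a for r, a in zip(ranked, areas)]    # each objective claims its best-ranked cells
    scores = [wi * (1.0 - r / n_cells) for wi, r in zip(w, ranked)]   # weighted closeness to the best rank
    alloc = np.zeros(ranked[0].shape, dtype=int)
    for i, claim in enumerate(claims, start=1):
        alloc[claim & (alloc == 0)] = i                 # assign claimed cells (first claimant wins for now)
    conflict = sum(c.astype(int) for c in claims) > 1
    if conflict.any():                                  # contested cells go to the best weighted score
        alloc[conflict] = np.argmax(np.stack(scores)[:, conflict], axis=0) + 1
    return alloc

# Two hypothetical objectives competing for 3000 cells each, with weights 10 and 20:
r1 = (np.random.permutation(10000) + 1).reshape(100, 100)   # random rank images for illustration
r2 = (np.random.permutation(10000) + 1).reshape(100, 100)
allocation = mola_single_pass([r1, r2], weights=[10, 20], areas=[3000, 3000])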