Friday, May 16, 2014

Weights Do Influence, Big Time

I've been exploring the unexpected effects that can occur when evaluating and selecting projects; you can check the previous articles on this topic ("Dangerous mistakes when evaluating: election methods" and "Influencing results: just participate!") to start on the same page. This article will continue in that direction, and this time I'm going to show you how weights can influence the end result. That is, I'm going to provide an example where we evaluate 3 projects and get a different best project after a small change on the least important criterion (the criterion with the lowest weight) for the worst project evaluated. The funny thing is that the weights attributed to each criterion will make a different project the winner!
You don't believe it, do you? Good, just keep on reading then!

Scenario

So picture this scenario: there are 3 projects your organization can invest in, and you have to decide which is the best option for the organization. For simplicity's sake (and because the criteria are not what we'll be discussing in this article), you choose to support your decision on the net profit in 1 year (that is, how much money the project will bring to the organization in the 1st year, after covering the investment made) and the net profit in 2 years. And you attribute a weight of 75% to the net profit in the 1st year and just 25% to the second year. Again, these are not perfect criteria, but they do make sense: the sooner you have your money back, the better, right?

Getting data

In short, the net profit forecasted for the 1st and 2nd years is as follows:

Forecasted net profit:

Project     1 year net profit     2 years net profit
A           30.000€               90.000€
B           25.000€               150.000€
C           10.000€               30.000€
By looking at this table, it's pretty obvious that project C is out of the competition, as it is the worst project on both criteria. But it's difficult to say which is the best project just by looking, because although project A has the best net profit for the 1st year, it's pretty close to project B's net profit. And project B compensates this small difference for the 1st year (5.000€) with a huge difference (60.000€) for the 2nd year. So it will be a close call, I'd guess. But let's do the math, OK?

Computing the results, part 1

One thing we can do (in fact, much like what I've shown you in "Evaluating, ranking and deciding - The Analytic Hierarchy Process way") is the following:
  1. Select the 1st criterion (in this case, the 1 year net profit)
  2. Find the best value (in this case, 30.000€ for project A) and score it 100%
  3. Find the worst value (in this case, 10.000€ for project C) and score it 0%
  4. For all the other projects, compute their score by finding where their value falls on the straight line formed by the points you determined previously for the best and worst scores - in this case, the points (10.000, 0) and (30.000, 100) result in a score of 75% for project B
  5. Select the next criterion and repeat from step 2
Step 4 (details) 
If step 4 is clear to you, you can skip this part. If not, here are the details that will allow you to use this method whenever needed.
In order to compute the scores for all the projects (other than the best and worst projects), imagine the points (10.000, 0) and (30.000, 100). These points are constructed as the coordinates (worst value, worst score) and (best value, best score). These 2 points define a straight line that you can plot, as the following graph shows:

Straight line for the scores

And now you can plot all other projects' values on this line. You can use the equation for a straight line defined by 2 points:
Equation for a line defined by 2 points (x1, y1) and (x2, y2):

y = y1 + (x - x1) * (y2 - y1) / (x2 - x1)
Computing project B's score (that is, making x = 25.000 in the previous formula) results in a score of 75%, and you can even plot it on the previous score graph:

Project B plotted on the scores chart
It's pretty easy and it can be completely automated using a spreadsheet.
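If you prefer code to a spreadsheet, here is a minimal sketch of this scoring step in Python (the function and variable names are just illustrative; the values are the ones from this example):

```python
def score(value, worst, best):
    """Straight-line score: 0% for the worst value, 100% for the best value."""
    return 100 * (value - worst) / (best - worst)

# 1 year net profit forecasts for the 3 projects (in €)
one_year = {"A": 30_000, "B": 25_000, "C": 10_000}
worst, best = min(one_year.values()), max(one_year.values())

for project, value in one_year.items():
    print(project, round(score(value, worst, best)))  # A 100, B 75, C 0
```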
So when you finish these calculations for each criterion, you should end up with something like this:

Criteria scores computed:

Project     1 year net profit     2 years net profit
A           100%                  50%
B           75%                   100%
C           0%                    0%
Let's just check that this makes sense. Take project A's score for the 2 years criterion, for instance. What the scoring table says is that project A, at 50%, is halfway between the best option, project B, and the worst option, project C. And in fact, 90.000€ is halfway between 150.000€ and 30.000€, right? So everything looks pretty good so far.

Computing the results, part 2

By now we have a score for each project and for each criterion (the previous table). What is left to do is to compute the total score for each project so we can rank and compare the projects. As short term profit is valued much more than long term profit here, we'll give a 75% weight to the 1 year net profit and the remaining 25% to the 2 years net profit - that is, we split the total of 100% between the defined criteria (in this particular case we only have 2 criteria):

Total score computed for each project:

Project     Total score
A           88%
B           81%
C           0%
And we have a winner! Project A gets a total score of 88%, a comfortable win over the 2nd contestant, project B.
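For reference, here is a rough sketch of that weighted total in Python (the scores are the ones from the table above; the names are just illustrative):

```python
# Criteria scores from the table above (in %)
scores = {
    "A": {"1 year": 100, "2 years": 50},
    "B": {"1 year": 75, "2 years": 100},
    "C": {"1 year": 0, "2 years": 0},
}
weights = {"1 year": 0.75, "2 years": 0.25}

for project, project_scores in scores.items():
    total = sum(weights[c] * project_scores[c] for c in weights)
    print(project, round(total))  # A 88, B 81, C 0
```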
So now we can start working on documenting the decision, getting the approval, getting the team together and so on. The process is crystal clear and objective, and every step is supported by strong arguments that are fully documented. What can go wrong?

Famous last words...

Thinking "What can go wrong?" is reason enough for you to get worried, but lets continue with this tale.
When going through the documents related to each project to support the decision, you find out that somehow you made a mistake in the spreadsheet you used to compute the net profit, and the value for project C's "2 years net profit" is in fact 85.000€, not the initially computed 30.000€. So the correct table for the net profit is:

Corrected forecasted net profit:

Project     1 year net profit     2 years net profit
A           30.000€               90.000€
B           25.000€               150.000€
C           10.000€               85.000€

No problem here: your mistake was on the worst project (which remains the worst project anyway) and the error is in the least important criterion. But...
When you update the value in your spreadsheet, you notice that the end result (the table with the total score for each project) is now:

Corrected total score computed for each project:

Project     Total score
A           77%
B           81%
C           0%

The first thing you see is that project A is no longer the winner - project B is, and by a 4% difference, not that far from the previous 7% difference when project A was the winner. So you double check every formula in your spreadsheet, only to find that you did a perfect job.
What is going on here? How is this possible?!
You then start digging and, comparing the values, you notice that project A's score drops from 50% to just 8% for the "2 years net profit" criterion. And this makes sense, since project A is not that much better than project C anymore, not on this criterion. So this feels right. But then you remember reading the last article "Influencing results: just participate!" and its warning not to "compare contestants".
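A quick way to check that drop is to redo the "2 years net profit" scores with the corrected value for project C, using the same straight-line scoring as before (again, just a sketch in Python):

```python
# Corrected 2 years net profit forecasts (in €): project C is now 85.000
two_years = {"A": 90_000, "B": 150_000, "C": 85_000}
worst, best = min(two_years.values()), max(two_years.values())

for project, value in two_years.items():
    print(project, round(100 * (value - worst) / (best - worst)))  # A 8, B 100, C 0
```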
If it weren't for the weights (75% for the "1 year net profit" and 25% for the "2 years net profit"), the result would have been the same all along. That is, if there were no weights and you just summed up the scores for each criterion, the result would be the same in both scenarios, just as shown below, and project B would win in both scenarios with 175%:

Total scores with no weights:

Project     Original scenario     Corrected scenario
A           150%                  108%
B           175%                  175%
C           0%                    0%

In fact, this would be the same as giving the same weight of 50% to both criteria, and that would again make sense if you want the total scores expressed as percentages (75% instead of 150%, and so on).
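And here is a quick check of those unweighted totals, simply summing the two criteria scores per project in both scenarios:

```python
# Criteria scores (1 year, 2 years) per project, before and after the correction
original  = {"A": (100, 50), "B": (75, 100), "C": (0, 0)}
corrected = {"A": (100, 8),  "B": (75, 100), "C": (0, 0)}

for name, scenario in [("original", original), ("corrected", corrected)]:
    totals = {project: sum(s) for project, s in scenario.items()}
    print(name, totals)  # project B totals 175% in both scenarios
```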

The problem

We have 2 problems at stake here. One was covered in the previous article "Influencing results: just participate!" and it has to do with scoring your options by ranking them. Comparing options in order to score them is always a bad idea if you have more than 2 criteria. You should always score the options first and then compare them using their scores, not just their ranks.
The other problem has to do with weights, and I have a hard time explaining exactly what the problem is. But the example I provided already makes it pretty obvious what is going on: you can change the results of such a selection simply by introducing weights. In the last table (with no weights), the winner does not change because of a change in the worst project. But if there are different weights for each criterion (like in the initial example), the winner can change.
Now the real problem is this: weights do make sense a lot of the time - if not all the time. So what can you do about them?

The solution

The solution is, again, not to rank options. Period. If you don't rank your options, weights are OK.
But if you must rank your options (it may even be some kind of requirement - a bad one), the solution I have for you is not very neat. But it works, and it doesn't make your evaluation model so complex that no one understands it. All you have to do is perform some very basic sensitivity analysis. And it works like this:
First you have to get your evaluation model completely automated (like in a spreadsheet). Then put all your actual inputs there and see what the results are. Now, what happens to the results when you change your initial inputs for the worst options just a little bit? And what happens when you change them a lot? Does the winner change? If the results are consistent, that is, if they remain the same even when the changes to the worst options are significant, then you can have some degree of confidence in them. If not, you'd better use some other evaluation method.
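Here is a very basic version of that sensitivity analysis, sketched in Python for this article's example (the perturbation range and step are arbitrary, just to illustrate the idea):

```python
def score(value, worst, best):
    """Straight-line score: 0% for the worst value, 100% for the best value."""
    return 100 * (value - worst) / (best - worst) if best != worst else 0

def winner(profits, weights):
    """Return the project with the highest weighted total score."""
    totals = {}
    for project in profits:
        total = 0
        for criterion, weight in weights.items():
            values = [profits[p][criterion] for p in profits]
            total += weight * score(profits[project][criterion], min(values), max(values))
        totals[project] = total
    return max(totals, key=totals.get)

profits = {
    "A": {"1 year": 30_000, "2 years": 90_000},
    "B": {"1 year": 25_000, "2 years": 150_000},
    "C": {"1 year": 10_000, "2 years": 30_000},
}
weights = {"1 year": 0.75, "2 years": 0.25}

# Perturb the worst option (project C's 2 years net profit) and watch the winner
for delta in range(0, 80_001, 10_000):
    perturbed = {p: dict(criteria) for p, criteria in profits.items()}
    perturbed["C"]["2 years"] += delta
    print(delta, winner(perturbed, weights))  # the winner flips from A to B along the way
```

In this example the winner flips from project A to project B well inside that range, which is exactly the kind of inconsistency you want to catch before committing to a decision.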

Conclusion

This is the third example of an evaluation that looks neat and simple - but isn't.
So the rules of thumb compiled so far are:

  • No election methods when evaluating
  • No ranking with more than 2 criteria
  • Don't just accept the end results of any given evaluation method
  • Do some sensitivity analysis whenever you use weights on ranked options (not on the weights themselves but on your options' scores), even if it is a very basic sensitivity analysis

Image from http://img1.wikia.nocookie.net/
