Friday, December 5, 2014

Project evaluation wrap-up

Some 20 years ago, Carlos Bana e Costa introduced the MACBETH approach to multi-criteria evaluation as a means to support decision making. In a nutshell, he showed that the Analytic Hierarchy Process didn't work well and presented a solution: MACBETH.
In the past few articles, I've been covering the problems that can arise when evaluating options (and that Carlos Bana e Costa addressed with MACBETH), so you may want to start from there:

  • Dangerous mistakes when evaluating: election methods
  • Influencing results: just participate!
  • Weights do influence, big time
  • Beware of scales

These articles are not about MACBETH itself but about the problems it solves and how to work around them.


Context

The real problem is that you can have a transparent, objective process to evaluate projects and still get unexpected, incoherent and inconsistent results. Even worse: it is possible to manipulate the results while keeping a transparent, objective process that can even be audited!
But this article is not about how to con others. On the contrary, it is meant to alert you to mistakes that you can make - and that you can avoid - when evaluating different options.

Problem #1: The chosen election method can determine the winner

On "Dangerous mistakes when evaluating: election methods" I showed you how using election methods can undermine an evaluation and bring unexpected results - and, given an election result, setting a winner just by choosing a particular election method. So the solution is simple: don't use election methods when you want to rank preferences!
So avoid elections if you can. But if you must use them, make sure you use the simplest election methods possible (such as voting for a single winner instead of ranking your preferences).
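
To see how much the chosen method alone matters, here is a minimal Python sketch with made-up ballots: plurality voting and a Borda count, applied to exactly the same preferences, pick different winners. (This is only an illustration of the general phenomenon, not an example taken from the original article.)

```python
from collections import Counter

# Hypothetical ballots: each voter lists the options from most to least preferred.
ballots = (
    [("A", "B", "C")] * 4 +   # 4 voters prefer A > B > C
    [("B", "C", "A")] * 3 +   # 3 voters prefer B > C > A
    [("C", "B", "A")] * 2     # 2 voters prefer C > B > A
)

# Plurality: only the top choice of each ballot counts.
plurality = Counter(ballot[0] for ballot in ballots)

# Borda count: with n options, the option ranked i-th (0-based) earns n - 1 - i points.
borda = Counter()
for ballot in ballots:
    n = len(ballot)
    for i, option in enumerate(ballot):
        borda[option] += n - 1 - i

print("Plurality winner:", plurality.most_common(1)[0][0])  # -> A
print("Borda winner:    ", borda.most_common(1)[0][0])      # -> B
```

Same voters, same preferences - yet A wins under plurality and B wins under Borda. Whoever picks the method picks the winner.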

Problem #2: You can change the winner by adding or removing participants

When you rank your possible options, all you need is three or more criteria to get unexpected results. That's what I showed you in "Influencing results: just participate!", where adding or removing a bad option (meaning an option with a low score) from the competition changes the winner. Again, the solution is simple: do not rank options! You can still rank your options after scoring them, but don't derive their scores from a ranking.
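
Here is a small sketch of that effect - the options, weights, rankings and rank-to-points rule below are all invented for illustration, not taken from the original article. Options are scored by their rank on each weighted criterion, and dropping the clearly worst option C is enough to flip the winner from A to B, because every rank-derived score shifts:

```python
# Each criterion: (weight, ranking of the options from best to worst).
criteria = [
    (0.60, ["B", "A", "C"]),
    (0.25, ["A", "C", "B"]),
    (0.15, ["A", "C", "B"]),
]

def rank_based_scores(criteria, options):
    """Weighted sum of rank points: the best of n options gets n points, the worst gets 1."""
    scores = {opt: 0.0 for opt in options}
    for weight, ranking in criteria:
        ranked = [opt for opt in ranking if opt in options]
        n = len(ranked)
        for position, opt in enumerate(ranked):
            scores[opt] += weight * (n - position)
    return scores

print(rank_based_scores(criteria, {"A", "B", "C"}))  # A: 2.40, B: 2.20, C: 1.40 -> A wins
print(rank_based_scores(criteria, {"A", "B"}))       # A: 1.40, B: 1.60         -> B wins
```

Nothing about A or B changed - only the presence of the losing option C - and yet the winner changed. That is what happens when scores are derived from rankings.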

Problem #3: Weights can determine winners

The problem with weights arises in conjunction with ranking your options - and it makes the problem a lot bigger. That's what is shown in "Weights do influence, big time", where just a slight change in the score of the worst option in the competition (and on the lowest-weighted criterion) changes the winner. Again, this doesn't make much sense, right?
So what's the solution? Check the results' consistency by means of sensitivity analysis: do the results remain the same with a slight change in the weights? And with a slight change in the options' scores?
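
A minimal sketch of such a sensitivity check, with invented projects, scores and weights, could look like this: shift a few percentage points of weight between every pair of criteria and see whether the winner survives.

```python
import itertools

# Hypothetical 0-100 scores for two projects on three criteria, and base weights summing to 1.
scores = {
    "Project A": {"cost": 80, "risk": 60, "benefit": 70},
    "Project B": {"cost": 70, "risk": 85, "benefit": 68},
}
weights = {"cost": 0.5, "risk": 0.2, "benefit": 0.3}

def winner(weights):
    totals = {p: sum(weights[c] * v for c, v in crits.items())
              for p, crits in scores.items()}
    return max(totals, key=totals.get)

base_winner = winner(weights)
print("Winner with base weights:", base_winner)

# Sensitivity check: move 5 percentage points of weight between every pair of
# criteria and report any perturbation that changes the winner.
stable = True
for take, give in itertools.permutations(weights, 2):
    perturbed = dict(weights)
    perturbed[take] -= 0.05
    perturbed[give] += 0.05
    if winner(perturbed) != base_winner:
        stable = False
        print(f"Winner changes when moving weight from {take} to {give}")

print("The result is", "robust" if stable else "fragile", "to small weight changes")
```

If the winner flips under perturbations this small, the ranking is telling you more about the weights you happened to pick than about the projects themselves - and you should not trust it as is.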

Problem #4: Scales can also determine winners

Just remember that scales have to make sense for what you're measuring. Some examples are provided in "Beware of scales", namely:

  • Linear scales
  • Benchmarking
  • Categorization
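
As a rough illustration of why the scale matters (all numbers and scale anchors below are invented), here is how the very same measurements, with the very same weights, pick different winners depending on whether delivery time is scored on a linear scale or collapsed into broad categories:

```python
projects = {
    "Project A": {"savings": 80, "months": 12},  # savings already on a 0-100 scale
    "Project B": {"savings": 55, "months": 5},
}
weights = {"savings": 0.5, "speed": 0.5}

def linear_speed(months, worst=24):
    """Linear scale: 0 months -> 100 points, `worst` months -> 0 points."""
    return 100 * (worst - months) / worst

def banded_speed(months):
    """Categorical scale: within a year -> 100, within two years -> 50, longer -> 0."""
    if months <= 12:
        return 100
    return 50 if months <= 24 else 0

for label, speed_scale in [("linear", linear_speed), ("banded", banded_speed)]:
    totals = {
        name: weights["savings"] * p["savings"] + weights["speed"] * speed_scale(p["months"])
        for name, p in projects.items()
    }
    print(label, "scale ->", max(totals, key=totals.get), totals)
```

With the linear scale the faster Project B wins; with the banded scale both projects land in the "within a year" category, the speed advantage disappears, and Project A wins on savings. The measurements never changed - only the scale did.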

Conclusion

You can have an evaluation process that is transparent, objective and even auditable - and still manipulate its results. To avoid this, please remember these simple rules:

  • Avoid elections
  • Never rank your options
  • When using weights, test whether the results hold up when you make slight changes to the weights
  • Make sure you use appropriate scales for what you're measuring
And please, don't ever, ever pick an option just because it was the result of the process you used to help you decide. These methods are meant to help you decide, not to make the decision for you. You should always question the results: Do they make sense? Is the winner much better than your second-best option?
