Friday, May 9, 2014

Influencing results: just participate!

After seeing how the election method we use can determine who wins an election (and an election can be something as simple as selecting which project to start next), it's now time to see how we can determine who wins by introducing or eliminating options.
That is, given:

  • a set of candidate projects from which to select the best option, and
  • the criteria we're using to evaluate them

we can change which project comes out as the best just by introducing extra projects into the lot or removing options from it. And yes, you read that right. You don't think it's possible, do you? Then keep on reading!

Scenario

You have to select the best option from a set of 4 projects, let's say projects A, B, C and D. You have some experience with this sort of decision and figure that ranking your options from best to worst on scope, cost, schedule and how good the main supplier is will be enough to make the call. In fact, the actual criteria are completely irrelevant for what follows. So you evaluate your projects on each criterion and rank them per criterion as well. In the end, you get the following table:

Projects ranked on each criterion:

            Scope   Cost   Schedule   Supplier
Project A    3rd     2nd      1st        2nd
Project B    1st     1st      3rd        4th
Project C    2nd     3rd      4th        1st
Project D    4th     4th      2nd        3rd

Now you set a score for each position: 4 points for the best project, 3 points for the second best, 2 for the third best and 1 point for the worst project. So project A gets 3 points on the Cost criterion (2nd position), 4 points on the Schedule criterion (1st position), 2 points for Scope (3rd position) and 3 points for Supplier (2nd position). In the end, you get the following scores:

Projects with scores:

            Scope   Cost   Schedule   Supplier   Total
Project A     2       3        4          3        12
Project B     4       4        2          1        11
Project C     3       2        1          4        10
Project D     1       1        3          2         7

And you get a clear winner, project A, with 12 points. Not a very comfortable win, though, as project B came 2nd with 11 points, a difference of just 1 point. Still, you have a winner and start working on getting project A started. You schedule a meeting with the Board in 2 weeks and start working on the presentation supporting project A's go-ahead. So far this is all business as usual. Right?
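The whole scoring scheme fits in a few lines of code. Here's a minimal Python sketch of it, using the rankings from the table above (treat the C versus D order on the Cost criterion as an assumption; it doesn't change anything for projects A and B):

RANKS = {  # criterion -> projects ordered from best to worst
    "Scope":    ["B", "C", "A", "D"],
    "Cost":     ["B", "A", "C", "D"],
    "Schedule": ["A", "D", "B", "C"],
    "Supplier": ["C", "A", "D", "B"],
}

def scores(ranks):
    """Sum positional points per criterion: n points for 1st place, 1 for last."""
    totals = {}
    for ordering in ranks.values():
        n = len(ordering)
        for position, project in enumerate(ordering):
            totals[project] = totals.get(project, 0) + (n - position)
    return totals

print(scores(RANKS))  # {'B': 11, 'C': 10, 'A': 12, 'D': 7} -> A wins with 12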

But then...

Right! Right up until a key supplier involved in project D goes bankrupt. Naturally you decide to redo the tables supporting your decision. Not that it seems to matter much: project D was not even close to being picked as the best project - in fact, project D was the worst project you evaluated. But project D is not an option anymore, so it shouldn't be in the presentation anyhow. You then compute the scores again, this time without project D, and you get the following results:

Projects ranked on each criterion (without project D):

            Scope   Cost   Schedule   Supplier
Project A    3rd     2nd      1st        2nd
Project B    1st     1st      2nd        3rd
Project C    2nd     3rd      3rd        1st

You just adapt the scores (as there's one position fewer): 3 points for the best project, 2 for the second best and 1 for the worst. After computing the scores you get:

Projects with the new computed scores:

            Scope   Cost   Schedule   Supplier   Total
Project A     1       2        3          2         8
Project B     3       3        2          1         9
Project C     2       1        1          3         7

And then you notice that project B is now the one with the best score: 9 points.
Oops... You think you must have made some mistake, so you check and double-check everything, only to find that you were right all along. Oh boy, you think: what is going on here? You had a clear winner - project A - you removed from your model an option that wasn't really important anyhow - project D - and by removing that option you get a different best project? How is this possible?!
Even worse: project D was the worst project anyway, so how can removing it from the lot of projects being evaluated (and not changing anything else) end up producing a new winner?
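If you want to check this yourself, here's a short, self-contained Python sketch of the recomputation (same reconstructed rankings as before): drop project D from every ranking and rescore with 3/2/1 points.

RANKS = {  # criterion -> projects ordered from best to worst
    "Scope":    ["B", "C", "A", "D"],
    "Cost":     ["B", "A", "C", "D"],
    "Schedule": ["A", "D", "B", "C"],
    "Supplier": ["C", "A", "D", "B"],
}

def scores(ranks):
    """Sum positional points per criterion: n points for 1st place, 1 for last."""
    totals = {}
    for ordering in ranks.values():
        n = len(ordering)
        for position, project in enumerate(ordering):
            totals[project] = totals.get(project, 0) + (n - position)
    return totals

# Remove project D from every criterion's ranking and rescore.
without_d = {c: [p for p in order if p != "D"] for c, order in RANKS.items()}
print(scores(without_d))  # {'B': 9, 'C': 7, 'A': 8} -> B now wins with 9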

The problem

When you rank things you're comparing them on a linear scale. If an intermediate candidate sits between 2 other candidates, as in the scenario just described, the lower-ranked of the two gains one point relative to the higher-ranked one when the intermediate candidate (in this case, project D) is removed.

Ranks before removing project D:

Schedule:  A (4 pts)  >  D (3)  >  B (2)  >  C (1)
Supplier:  C (4 pts)  >  A (3)  >  D (2)  >  B (1)

Because of this, on each criterion where projects A and B are separated by project D, project B gets 1 point closer to project A when you remove project D. Let's see how this works. Before project D was removed, project A got 4 points on the Schedule criterion and 3 points on the Supplier criterion, while project B got 2 points on Schedule and 1 point on Supplier. The totals for these 2 criteria are 7 points for project A and 3 points for project B - a 4-point difference. But after removing project D:

Ranks after project D is removed:

Schedule:  A (3 pts)  >  B (2)  >  C (1)
Supplier:  C (3 pts)  >  A (2)  >  B (1)

Project A gets 3 points on the Schedule criterion and 2 points on the Supplier criterion. Project B gets 2 points on Schedule and 1 point on Supplier. The totals for these criteria are now 5 points for project A and 3 points for project B - a 2-point difference. And this is the problem: removing an option, even the one with the worst classification, can shorten the difference between the other options and, in extreme cases such as this one, it can produce a different winner!
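The shrinking gap is easy to check in code as well; here's a tiny sketch using the Schedule and Supplier rankings shown above:

# A's lead over B on the two criteria where D sits between them,
# before and after D is removed.
def points(ordering, project):
    """Positional points: len(ordering) for 1st place down to 1 for last."""
    return len(ordering) - ordering.index(project)

before = [["A", "D", "B", "C"], ["C", "A", "D", "B"]]  # Schedule, Supplier
after = [[p for p in order if p != "D"] for order in before]

for label, rankings in (("before", before), ("after", after)):
    gap = sum(points(r, "A") - points(r, "B") for r in rankings)
    print(label, gap)  # before 4, after 2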
But this can also work the other way around: suppose you have 3 projects that you're evaluating in order to select one of them. If you're dishonest enough, you can get a different project selected just by inserting a new project into the lot being evaluated. Suppose you have projects A, B and C, just like in the previous scenario. Project B wins this evaluation, but you don't want project B to win. So you set up a project D to join the others, evaluate them all and... now project A is the winner!
Yes, it works both ways. And yes, you can make mistakes just because you don't know better, but you can also "make mistakes" because you're dishonest...
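Here's a sketch of that manipulation too, using the same reconstructed rankings: with only A, B and C in the lot, project B wins; slot in the "spoiler" project D and project A takes the win.

RANKS_ABC = {  # criterion -> projects ordered from best to worst
    "Scope":    ["B", "C", "A"],
    "Cost":     ["B", "A", "C"],
    "Schedule": ["A", "B", "C"],
    "Supplier": ["C", "A", "B"],
}
RANKS_ABCD = {  # D slotted into 2nd place on Schedule and 3rd on Supplier
    "Scope":    ["B", "C", "A", "D"],
    "Cost":     ["B", "A", "C", "D"],
    "Schedule": ["A", "D", "B", "C"],
    "Supplier": ["C", "A", "D", "B"],
}

def winner(ranks):
    """Best total of positional points (n for 1st place down to 1 for last)."""
    totals = {}
    for ordering in ranks.values():
        n = len(ordering)
        for position, project in enumerate(ordering):
            totals[project] = totals.get(project, 0) + (n - position)
    return max(totals, key=totals.get)

print(winner(RANKS_ABC))   # B
print(winner(RANKS_ABCD))  # A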

The danger

But dishonesty is not the real danger here. The danger comes from your best intentions: you set up an objective, transparent way to evaluate options, when in fact your approach of ranking the project candidates from best to worst is itself the main problem. There are other problems with this scenario, but this is the big one: never, never, never, never rank to evaluate something if you have more than 2 criteria! Ever!
This is what is most dangerous. If you read this article again really carefully and compute the scores in a spreadsheet and all, you'll reach the same results: the real danger is that everything we're doing is completely objective and transparent. And still, we can get whatever result we want (or almost)!
Again, just like in "Dangerous mistakes when evaluating: election methods", having an objective and transparent way to evaluate options is not enough. You start off with an objective approach, but that alone doesn't make the process objective, fair and consistent. In the end, you still have to double-check that the results make sense.

Conclusion

Even when things look objective, fair and consistent, that doesn't mean that the results are also objective, fair and consistent. So the rules of thumb compiled so far are:
  • No election methods when evaluating
  • No ranking with more than 2 criteria
  • Don't just accept the end results of any given evaluation method

