
Evaluating bids and tenders is perhaps not the sexiest topic in the buying world, and maybe because of that it does not get the attention it deserves. I remember a few years back, the UK government issued a detailed 100-page guide to running public procurement competitions, but pretty much the entire section on evaluation read, “now evaluate the bids”!

And yet, if the evaluation process is not structured and executed properly, it can lead to problems – perhaps selection of the “wrong” supplier that will not best meet your needs, or unhappy suppliers and legal challenges in the public sector.

One seemingly minor but important point relates to how bids are scored. For major purchases, it is usual to have multiple people on the buy-side reading and scoring the suppliers’ proposals. So there might be three or four people all reading and scoring the same answers to questions like “explain how your quality processes will help to ensure you meet our needs….”

I was recently advising a firm on how they could compete better for public sector business. I looked at tender documentation from a bid they had lost, and whilst the feedback from the buyer to the firm was somewhat ambiguous, it looked like the individual scores of the bid evaluators had been averaged. That is, in my opinion, the wrong approach, and this is why.

Let’s imagine you have three people doing that work, and that the scoring system is a basic 0-5 scale where 5 is a brilliant response and 1 is pretty rubbish. Evaluator A scores 1 out of 5 against that question. Evaluator B scores 5 out of 5, and C scores 3 out of 5. The average is therefore 3.

But we know that there is a very good chance that 3 is not the appropriate score. We also know that A and B have seen the supplier response VERY differently. One of them might be right in their scoring; but we really need to know why there is such a difference. They can’t both be right!
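As a purely hypothetical sketch of that arithmetic (the scores, the spread threshold and the function name below are mine for illustration, not part of any real evaluation tool), this shows how the average of 1, 5 and 3 comes out as a reassuring-looking 3 while the spread between the scorers is screaming for a conversation:

```python
# Hypothetical sketch: a simple average can hide serious disagreement.
from statistics import mean

def needs_moderation(scores, spread_threshold=2):
    """Flag a question for moderation when evaluators' scores diverge widely."""
    spread = max(scores) - min(scores)
    return spread >= spread_threshold

scores = [1, 5, 3]  # Evaluators A, B and C scoring the same answer

print(f"Average score: {mean(scores):.1f}")           # 3.0 - looks 'middling'
print(f"Score spread:  {max(scores) - min(scores)}")  # 4 - a huge disagreement
print(f"Moderation needed: {needs_moderation(scores)}")
```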

So we need a process of moderation. Someone chairs a discussion to arrive at an agreed, moderated score – and I usually advise that the moderator should not score the bids themselves (although they do have to read them).

It may be that scorer A has identified a major flaw in the response that the other two missed. Or A has herself missed a key part of the answer (I have literally seen a marker not notice a key project plan attached to the document). Perhaps B just loves this bidder, and needs talking down from his over-enthusiastic marking.  And if you only had two scorers who marked it 1 and 5, then 3 would almost certainly be the wrong answer!

We need to arrive at a single agreed score, which in this case could feasibly be anything from 1 to 5. Maybe it will end up as 3; but not via an averaging process. I’d also strongly suggest that in the public sector you don’t document any initial individual marking; instead, record the key points of the discussion and the final agreed score – the record of that discussion is important if the end result is ever challenged.

So in our case, if the score ends up being 4, you might note that scorer A initially had some concerns but was reassured when she was pointed to the project plan in the appendix (or whatever). When I chair moderation meetings, I ask the participants to come along with their initial view of their scores, but I don’t want those in advance and I don’t want them formally recorded.

That’s not being devious; it is just recognising that we are going to do the scoring on a moderated, team basis. And yes, I admit, I don’t want a disgruntled supplier saying, “How come the CIO initially gave us a mark of 5 on that response, but we only ended up with a 3?”

Anyway, this might seem like a fairly technical aspect of potential Bad Buying, and indeed it is. But there have actually been some very expensive legal challenges that hinged to a significant extent on dodgy scoring and suspect averaging or moderation processes. There is a great example in my book, actually – one that cost the UK taxpayer over £100 million, believe it or not. (Pre-order the book now… out on October 8th).