7. Is it most effective to have 8–10 raters?
The key debate in this area centres on the question: 'do more raters lead to more useful and reliable information?'
Best practice states that individuals undergoing a 360° feedback process should select between 8 and 10 raters. But why? Why not select 4, or 40? The purpose of a 360° process is to obtain multisource feedback; the number of rater categories, and the number of raters within each category, is not definitive. There are some key preferences for which the benefits are clear: for example, raters should only provide feedback on those they know well in a work-related context, and those they have worked with in the past 12 months. There should also be a minimum of two raters per category to preserve anonymity and avoid the need to merge ratings together; merging ratings protects anonymity but removes important distinctions between rater groups, so is best avoided. This suggests a lower limit of two raters per category, but it does not help define an upper limit, or indeed the number of categories.
There is a broad range of benefits to having a large number of reviewers: more perspectives provide a more complete picture of personal strengths and development needs, and more individuals per group make it harder to attribute feedback to any one rater. It could, however, be argued that more information does not necessarily equate to better quality, as the feedback may not be detailed enough to be useful for development.
Additionally, the more raters, the greater the cost in man-hours to the organisation; the more reliance there is on an intelligent software platform to cope with the numbers; and the higher the risk of ratings showing a regression to the mean. Furthermore, research has shown no relationship between the number of raters and performance improvement in the focus of the feedback (the individual being rated).
It may therefore be advantageous to focus on the minimum number of raters needed to obtain useful feedback. Research by 3D Group showed that 65% of organisations required a minimum of 3 raters per rater category; the same research showed the most common rater categories to be: self (92%), boss (94%), peer (96%) and direct report (98%). This suggests a minimum of 8 raters in total (3 per category, except self and boss, which are single raters). 3D Group also showed that 46% of those questioned required all of an individual's direct reports to provide ratings.
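The arithmetic behind that minimum of 8 can be sketched as follows. This is an illustrative calculation only: the per-category minimums are assumptions drawn from the 3D Group figures quoted above (3 raters in each multi-rater category, with self and boss as single raters).

```python
# Assumed per-category minimums, based on the 3D Group research
# cited in the text: the four most common categories, with 3 raters
# per category except self and boss (one rater each).
category_minimums = {
    "self": 1,
    "boss": 1,
    "peer": 3,
    "direct report": 3,
}

# Total minimum number of raters across all categories.
minimum_total = sum(category_minimums.values())
print(minimum_total)  # 8
```

Changing the assumed categories or per-category minimums (for example, requiring all direct reports) would raise this total accordingly.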
Overall, although there is some logical indication of the minimum number of raters that should be selected, there is no logical maximum, only a consensus view. At Talent Innovations we advise between 6 and 15 raters, separated into a minimum of 4 categories (self, manager and two others, e.g. peers and direct reports). This is sufficient to ensure anonymity and the ability to distinguish between group differences, without so many raters per category that, when results are presented as an average, they show nothing more than the mean (a regression to the mean).
Next to come in this series is ‘who should see the feedback report?’
For the full paper, download it for free here.