1) Can you help me design the questionnaire?
Creating a really great 360 feedback tool is a mixture of politics, science and art. There are lots of factors to consider, and lots of decisions to make. These decisions have an impact on how quick and easy the questionnaire is to complete and how useful the end result is to the feedback recipient.
At its heart a 360 feedback questionnaire will consist of a set of behavioural descriptors that reviewers will need to rate on a numerical scale, as well as some open text questions. The behavioural descriptors will normally be organised into ‘competencies’ or themes to aid understanding in the final report, though this structure need not be visible in the questionnaire itself. There are other sorts of things you might also add, but this is the ‘classic’ core of a 360 questionnaire.
The benefits of a well-designed 360 feedback tool are that the people completing feedback find it quick and easy, and the individual gets clear and helpful insights from the results. A badly designed 360 will be slow, confusing and irritating to complete, and the individual will struggle to draw any meaningful or actionable insights from the report. Be aware, there are lots of bad 360 designs out there!
If you are not already an expert in creating 360 feedback tools (a claim you can only really make if you have created several different 360 tools and seen the full impact of different design decisions), then we have printed on the next few pages a complete guide to creating a great 360 feedback questionnaire.
Talent Innovations’ 12 principles of creating a great set of 360 feedback questions:
You will need to compile a set of behavioural descriptors that fit with the following principles:
Individual 360 items:
A. Simple, non-colloquial language
So the meaning is clear to all
B. Concise: as few words as possible to convey the meaning
So it is quick and easy to complete
C. Observable behaviours only (“what would you see the person actually doing?”)
Observations of behaviours are all that raters could possibly comment on!
D. Single concept per item (“could someone be good at one part and bad at the other?”)
If an item is not a single concept, it will be difficult to answer and the results will be confusing to interpret.
E. No need to reference WHY someone would do something (e.g. “Shows this behaviour IN ORDER TO get this outcome”). Stating the purpose would break principles B, C and D.
F. Avoid repetition of concepts (i.e. no two items should be too ‘close’ to each other)
No need to ask about the same thing more than once – it just annoys people completing the questionnaires, who will notice!
G. Avoid repetition of language
Even if two questions are about different concepts, if they re-use a distinctive turn of phrase then people tend to find it annoying, because it feels as if they are being asked the same question twice.
H. The items should align with the headings/competencies they are in
So that any competency averages are meaningful
I. The items in a heading/competency should collectively cover everything important in that heading
So the 360 content accurately represents what that competency means
J. The set of competencies (and their items) should cover everything that is critical for effectiveness/success for the people who will be taking part
So you can be sure that people will be able to see where they need to develop.
E.g. if a model for leaders is all about ‘leadership’ but doesn’t cover anything about communication, a leader who is terrible at basic communication skills may never see that poor communication is the root cause of their problems being an effective leader.
K. The set of competencies (and their items) should cover the areas that are a current focus for the organisation / that group.
So the 360 tool can help move people forward in these focus areas.
L. As few items as possible without breaking the other principles
So it doesn’t take long for people to complete the surveys.
Tensions to watch out for:
Many of the principles above can pull you in different directions. This means that the best design will need to strike the right balance according to the priorities of the project and organisation. It is very valuable to be fully aware of these tensions.
The main tension is questionnaire length. A, B and L (as well as F and G) are important for keeping the questionnaire quick and easy, but D, E, I, J and K will all pull you towards having more items.
In particular, an excessive focus on L (few items) combined with I and J (covering every important behaviour) can lead people to try to ‘cram’ lots of things into each item, which breaks principles B (be concise) and D (single concept per item). The solution can be to accept that you may have quite a lot of items (even as many as 100 in some cases): if they all align with the other principles, their simplicity, easy observability and non-repetitiveness mean that the questionnaire does not actually take long to complete. Well-designed questionnaires like this often take reviewers only around 5 seconds per rating – at that pace a 100-item questionnaire takes roughly 8–9 minutes – yet virtually guarantee that the resultant feedback report will be genuinely insightful and useful.
Other things to consider:
• 360 raters tend to be rather positive in their scores (it’s common to see average scores of 4 out of 5 across all 360s in a project), so the questions are more informative if they are made harder to rate highly, for example by adding a “very”. This is best checked by running the tool with an initial trial group, and then reviewing the statistics on mean scores and standard deviations.
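The trial-group check above can be sketched in a few lines of Python. The item texts, the cut-off values and the function name here are purely illustrative assumptions, not part of any particular 360 product; the idea is simply to flag items whose trial ratings have a high mean and a small spread, both signs that the item is too easy to rate highly.

```python
# Hedged sketch: spotting positivity bias in trial-group ratings.
# Items, scores and thresholds below are invented for illustration.
from statistics import mean, stdev

# Ratings on a 1-5 scale from a small trial group, keyed by item text.
trial_ratings = {
    "Listens attentively in meetings": [4, 5, 4, 5, 4, 5],
    "Gives clear direction under pressure": [3, 4, 2, 4, 3, 3],
}

def flag_easy_items(ratings, scale_max=5, mean_cutoff=0.8, sd_cutoff=0.6):
    """Flag items whose mean is high relative to the scale AND whose
    standard deviation is small -- candidates for toughening the wording."""
    flagged = []
    for item, scores in ratings.items():
        m, sd = mean(scores), stdev(scores)
        if m >= mean_cutoff * scale_max and sd <= sd_cutoff:
            flagged.append((item, round(m, 2), round(sd, 2)))
    return flagged

for item, m, sd in flag_easy_items(trial_ratings):
    print(f"Consider toughening: {item!r} (mean={m}, sd={sd})")
```

An item flagged this way can be made harder to endorse (for example by adding a “very”) and then re-trialled to see whether the scores spread out.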
• Behaviours need to have consistent grammar, appropriate to the “stem” question that introduced the behaviours.
…If all that sounds really complicated, then unfortunately it is. That’s why there are so many bad 360 feedback systems out there. How can your chosen provider help you avoid those mistakes?
To read the whole guide, download it for free here!!