How long is a piece of string?

A key decision in any 360° feedback project is how long to make your survey. Here I show why shorter isn’t always better.

The obvious argument for making a 360° survey shorter is to save people time, but this can come at the expense of the depth, breadth and clarity of the feedback. We often see customers try to ‘squeeze together’ behaviours to reduce the number of questions. Here’s a real example from the standard 360° questionnaire (a mere 24 questions long) used by one of the biggest retailers in the UK:

“Gains commitment to achieving results through simple communication, actively listening and by adapting leadership style to the needs of others”.

I spot at least four different behaviours in there! A manager could easily be good at some but poor at others. This complexity makes the question difficult for reviewers to understand and score, and makes the results hard for the recipient of the feedback to interpret. Both practically and psychometrically, it is virtually useless!

Our approach is to work with our clients to develop a set of items that are simple and easy to understand, measure observable behaviours, and have the breadth to cover all of the important competencies. This can make for a longer survey, so what’s the time impact of all those extra questions?

We’ve been tracking how long people take to complete their questionnaires, and have found some interesting results. The average time taken for each type of response is as follows:

  • Each rating on a 1-to-5 scale: 7 seconds
  • Each word of written feedback: 7 seconds

It’s the written feedback that makes the biggest difference. Here are two real example projects we recently implemented:

  Project               Rated items   Total written feedback (average)   Average time taken
  HR Consultancy        44            177 words                          19 mins
  Engineering Company   104           42 words                           14 mins

Despite having fewer than half as many rated items, the HR consultants took longer to complete their surveys than the engineers!
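The arithmetic behind this is easy to sketch. Using the per-response timings above (roughly 7 seconds per rating and 7 seconds per word of written feedback), a crude back-of-envelope estimator looks like this. The function and constant names are illustrative, not from any real tool, and the estimates come out somewhat higher than the observed averages in the table (raters skim, skip items, and overlap activities), but the ordering and the dominant role of written feedback both hold:

```python
# Rough estimate of 360° survey completion time, assuming the
# per-response timings quoted above. A sketch, not a model of
# actual rater behaviour.

SECONDS_PER_RATING = 7   # one 1-to-5 rating
SECONDS_PER_WORD = 7     # one word of written feedback

def estimated_minutes(rated_items: int, feedback_words: int) -> float:
    """Back-of-envelope completion time in minutes."""
    seconds = (rated_items * SECONDS_PER_RATING
               + feedback_words * SECONDS_PER_WORD)
    return seconds / 60

# The word count dominates: 177 words of comments dwarf 44 ratings,
# which is why the "shorter" survey took more time overall.
print(round(estimated_minutes(44, 177)))   # HR consultancy → ~26 mins estimated
print(round(estimated_minutes(104, 42)))   # engineering company → ~17 mins estimated
```

Both estimates overshoot the observed 19 and 14 minutes, but they reproduce the key finding: the survey with fewer rated items takes longer whenever it attracts substantially more written feedback.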

So fewer questions do not necessarily mean less time; and, as seen above, fewer questions can result in unclear, incomplete and hence less insightful feedback.

We are committed to enabling truly powerful feedback. Demands for “quick and dirty” solutions are best met with the question “what do you really want to achieve?”


6 thoughts on “How long is a piece of string?”

  1. Excellent post, Mark. This is a problem we also encounter when we’re reviewing question sets that have been used in some of the “cheap and dirty” solutions.

    Another point to consider when thinking about the time to complete a survey is the nature of the organisation. If a large proportion of staff are scientific or highly numerate, they may be less comfortable with text responses. In your example, the engineers may well have found the rating responses easier (and therefore quicker) than the HR consultancy staff, simply because they work in absolutes a lot of the time.

    Thanks for the great article.

  2. Mark,

    Thanks for this – the stats are really useful. We share your avoidance of “quick and dirty” solutions. I guess the really interesting thing for me is whether the 19 minutes spent answering the HR questions gave better feedback to work with than the 14 minutes people spent answering the engineering company questions. We generally find that the narrative feedback is more useful than rating scale scores and use that insight when trying to gauge “how many” questions are needed.


  3. Brendan,

    I completely agree – the really powerful feedback is often textual, and those 19 minutes were probably ‘richer’ thanks to all that text feedback.

    However, I think it’s a mistake to rely too much on text feedback. Teasing out good quality ratings can be really valuable when clients are interested in aggregate statistics. And it’s all too easy for raters to not bother writing much (such as when it’s not in the company’s culture, as Vandy alludes to), in which case it’s really important for the design of the rated part of the questionnaire to be good enough to deliver insights without comments.


  4. Mark – the data you provide here is very useful! I’ve personally opted for more information over less. Coaching staff towards answering the questions as fully as possible is the key! I’ve also preferred electronic 360 feedback vs paper-based because of the speed and convenience!

  5. I greatly appreciate your overall point. I’m a huge fan of using Computer-Adaptive Testing for 360 surveys, to reduce the time without many of the problems you note above. Similarly, I’m equally fond of “Mike” Linacre’s Facets method, which is ideal for 360 surveys and which I’ve used to adjust for severity/leniency bias. I have a white paper on this on my website if you’d like more –

  6. Mark,

    To echo those below – thanks for this.

    Providing the right balance of ‘free text’ or narrative comment fields throughout the survey is also important – too many boxes become time-consuming and laborious for the rater; too few make it difficult to provide summary comments against a multitude of questions spread over several competency areas. Our own research indicates that the right balance is three to five free-text comment boxes spread throughout the questionnaire. Thanks again, great blog.
