Rank the following options in order of their likely effectiveness or the extent to which they reflect ideal behaviour in a work situation.
- Make a list of the patients under your care on the acute assessment unit, detailing their outstanding issues, leaving this on the doctor’s office notice board, and then leave at the end of your shift.
- Quickly go around each of the patients on the acute assessment unit, leaving an entry in the notes highlighting the major outstanding issues relating to each patient and then leave at the end of your shift.
- Make a list of patients and outstanding investigations to give to your colleague as soon as she arrives.
- Ask your registrar if you can leave a list of your patients and their outstanding issues with him to give to your colleague when she arrives and then leave at the end of your shift.
- Leave a message for your partner explaining that you will be 30 minutes late.
How would your ranking change if you knew the following about the situation?
You are just finishing a busy shift on the Acute Assessment Unit (AAU). Your FY1 colleague who is due to replace you for the evening shift leaves a message with the nurse in charge that she will be 15 to 30 minutes late. There is only a 30-minute overlap between your timetables in which to hand over to your colleague. You need to leave on time as you have a social engagement to attend with your partner.
(Example from UKFPO SJT Practice Paper © MSC Assessment 2014, reproduced with permission.)
The use of situational judgement tests (SJTs) for selection into education, training and employment has proliferated in recent years, but there remains an absence of theory to explain why they may be predictive of subsequent performance. The name suggests that these tests assess a candidate’s ability to judge the most appropriate action in challenging work-related situations, which in turn implies that the tests must include descriptions of such situations. Yet your ranking of the possible actions listed above probably did not change much (if at all) once you knew the exact details of the situation, compared to when these had to be deduced from the possible actions themselves. A similar finding was recently reported in a fascinating experiment by Krumm and colleagues, in which volunteers were randomised to complete a teamwork SJT with or without situation descriptions. Those given the situation descriptions scored, on average, just 8.5% higher than those not given them. Of course, the question of whether a situation description is needed only arises for SJTs in a format where possible actions are presented to candidates (commonly known as multiple choice), but this format is generally used in practice as it facilitates marking and scoring.
Krumm et al.’s findings clearly raise doubts about the intended construct of the test (i.e. the candidate’s judgement of specific situations); yet SJTs are predictive of workplace performance, with correlations of around 0.30 reported in meta-analyses (see, for example, McDaniel et al.). So if an SJT does not actually require a “situation” to enable a useful assessment of a candidate’s likely future performance, then what exactly is the assessment of? Lievens and Motowidlo suggest that it is of general domain knowledge regarding the utility of expressing certain traits, such as agreeableness, based on the knowledge that such traits help to ensure effective workplace performance. The implication of this theory for practice is that SJTs may not need to be particularly specific and could therefore be shared across professions and geographical boundaries, making them a particularly cost-effective selection tool. The implication for research is that we need more evidence on the antecedents of general domain knowledge, such as family background, both as part of theoretical development and to evaluate the fairness of SJTs for selection.
And what if one does actually desire an assessment of situational judgement as opposed to general domain knowledge, given that both have independent predictive validity for job performance? Rockstuhl and colleagues suggest that candidates need to be asked for an explicit, open-ended judgement of the situation (e.g. “what are the thoughts, feelings and ideas of the people in the situation?”) rather than for what they think is the most appropriate response to it. The nub here is whether including open-ended assessments to measure situational judgement is cost-effective, weighing their incremental validity over general domain knowledge against the cost of marking free-text responses (at least two markers are required). For the moment we simply note that a rather large envelope would be required for even a rapid assessment of selection utility!
— Celia Taylor, Senior Lecturer
- Campion MC, Ployhart RE, MacKenzie Jr WI. The state of research on situational judgment tests: a content analysis and directions for future research. Hum Perform. 2014; 27(4): 283-310.
- Krumm S, Lievens F, Hüffmeier J, et al. How “situational” is judgment in situational judgment tests? J Appl Psychol. 2015; 100(2): 399-416.
- McDaniel MA, Hartman NS, Whetzel DL, Grubb III WL. Situational judgment tests, response instructions, and validity: a meta-analysis. Pers Psychol. 2007; 60(1): 63-91.
- Lievens F, Motowidlo SJ. Situational judgment tests: from measures of situational judgment to measures of general domain knowledge. Ind Organ Psychol. 2016; 9(1): 3-22.
- Rockstuhl T, Ang S, Ng KY, Lievens F, Van Dyne L. Putting judging situations into situational judgment tests: Evidence from intercultural multimedia SJTs. J Appl Psychol. 2015; 100(2): 464-80.