Desired OSCE competency levels for advanced clinical practice students

Advanced clinical practitioners are educated at master’s degree level, but at what standard are students assessed and is there a competency level that the student must meet?

Abstract

Advanced clinical practitioners are becoming a substantial part of the clinical workforce in the UK. Education provision to develop individuals into these roles is now standardised at master’s level and includes clinical elements. This article presents a discussion around setting the competence level desired for students undertaking Objective Structured Clinical Examination assessments for an advanced clinical practitioner master’s programme.

Citation: Pinson S (2023) Desired OSCE competency levels for advanced clinical practice students. Nursing Times [online]; 119: 11.

Author: Stuart Pinson is trust lead for advanced clinical practice, Lincolnshire Partnership Foundation Trust, and former senior lecturer, De Montfort University.

Introduction

The NHS’s (2023) long-term workforce plan emphasised the role of the advanced clinical practitioner (ACP) and built on the framework for advanced practice developed by Health Education England in 2017 and The NHS Long Term Plan, published in 2019. As the number of ACP roles has increased, so has the demand for academic courses; this education must include history taking and physical examination.

I have worked and examined at three higher education institutions (HEIs) across the UK, all of which offer numerous courses in physical examination and consultation skills at both master’s and undergraduate degree levels. Most of these courses included a practical test – an Objective Structured Clinical Examination (OSCE) – as part of their assessment. As a student, I underwent OSCE assessments at the medical school where I did my master’s degree.

The question is: to what standard are students assessed? Is there a ‘competency’ level (a term that is fraught enough, and one to which we shall return) that the student must meet? In essence, while the pass mark is one thing, how does a student gain a mark in the first place? The benchmark held in mind when designing these assessments was that of a newly qualified doctor, but this may be expecting too much and may be out of kilter with wider practice: the presence of the ‘curse of knowledge’ – a cognitive bias that arises when the expert forgets what it was like to not be an expert (Duvivier and Veysey, 2016) – is perhaps to be acknowledged here.

This article presents thoughts that stemmed from discussions about setting the standard students must achieve in an OSCE, which arose during the design of a curriculum for trainee ACPs. The discussions involved myself, faculty at medical and nursing HEIs, and practice partners. A new possible benchmark is proposed. Although this discussion concerns non-physician students undertaking what were previously medical roles, the conversation is equally applicable to medical students.

Lack of standardisation

There is much debate in the literature around OSCE standard setting and examinations. In guidance supplied by the University of Liverpool (nd) to students and examiners, it is suggested that, when performing history taking and examination, students who meet expectations should perform to the same standard as a safe and competent first-year doctor – there is, however, no description of what ‘competent’ means in this case. The student is not expected to be an expert, but to meet the standard that would generally be expected of a first-year doctor.

In contrast, a Dutch study by Haring et al (2014) suggested that, when a defined competency level is set, the achievement of this level is erratic; the authors stated that only 60% of the students they observed in the final year of their (medical school) training were ‘competent’ according to the set standard.

Malau-Aduli et al (2017) conducted an extensive review of practice in Australian medical schools but, again, this was focused on determining the threshold for the OSCE and, beyond a passing mention that the criteria were set by subject-matter experts, did not state what competence meant for each activity within a station. A worrying finding from that paper – and one that exemplifies the subjective nature of this process in general – was that a student could pass or fail the station with the same performance, depending on the standard-setting method used, which highlights the need for a consistent approach.

Loh et al (2016) supported borderline regression to set standards but, again, this was only to set the cut-off point; they also stated that the actual standard to be achieved to score in the first place was a judgmental and arbitrary process.
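
For readers unfamiliar with the mechanics, the sketch below shows how a borderline regression cut-off might be derived; the candidate grades, checklist scores and the numeric value assigned to the ‘borderline’ grade are entirely hypothetical and serve only to illustrate the technique Loh et al describe, not any institution’s actual scheme.

# Minimal sketch of the borderline regression method (hypothetical data).
# Each candidate at a station receives a checklist score and an examiner's
# global grade (1 = clear fail, 2 = borderline, 3 = clear pass, 4 = good,
# 5 = excellent). Checklist scores are regressed on global grades, and the
# station cut-off is the predicted checklist score at the borderline grade.

grades = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]           # examiner global grades
scores = [8, 11, 12, 14, 15, 16, 17, 18, 19, 20]  # checklist scores (out of 20)

n = len(grades)
mean_g = sum(grades) / n
mean_s = sum(scores) / n

# Ordinary least-squares slope and intercept of score on grade
slope = sum((g - mean_g) * (s - mean_s) for g, s in zip(grades, scores)) \
        / sum((g - mean_g) ** 2 for g in grades)
intercept = mean_s - slope * mean_g

BORDERLINE_GRADE = 2
cut_off = intercept + slope * BORDERLINE_GRADE
print(f"Station cut-off: {cut_off:.1f} out of 20")  # roughly 11.6 with these data

Under these illustrative data the cut-off works out at about 58%, but that figure depends entirely on how the checklist marks were awarded in the first place.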

There is a paucity of literature seeking to define competency via a set standard, and most of the attempts that do exist stop short of defining one. To help set the standard of acceptable performance required to gain a mark, the Angoff method – or a variation thereof – might be employed. In brief, a panel of experts decides at what level a borderline candidate would pass or fail the station and sets the pass mark accordingly (Dwyer et al, 2016). This still only considers the overall pass mark required for the station, and not what an individual must do to obtain a mark in the first place.
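
To make those mechanics concrete, the sketch below shows one common Angoff-style calculation with purely illustrative figures: each panel member estimates the probability that a borderline candidate would achieve each checklist item, the estimates are averaged per item, and the station pass mark is their sum.

# Illustrative Angoff-style calculation (hypothetical panel and figures).
# Rows are experts, columns are checklist items; each value is an expert's
# estimate of the probability that a borderline candidate achieves that item.
ratings = [
    [0.9, 0.6, 0.5, 0.8],  # expert 1
    [0.8, 0.7, 0.4, 0.9],  # expert 2
    [1.0, 0.5, 0.6, 0.7],  # expert 3
]

n_experts = len(ratings)
n_items = len(ratings[0])

# Average the panel's estimates for each item
item_means = [sum(expert[i] for expert in ratings) / n_experts
              for i in range(n_items)]

# The pass mark is the expected score of a borderline candidate
pass_mark = sum(item_means)
print(f"Pass mark: {pass_mark:.1f} of {n_items} marks "
      f"({100 * pass_mark / n_items:.0f}%)")  # 2.8 of 4 (70%)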

Considering nurse education, Ljungbeck et al’s (2021) comprehensive scoping review of nurse practitioner education clearly demonstrates that nurse practitioners feel it is important that they:

  • Are educationally prepared for their role;

  • Have time to develop the skills;

  • Feel confident in these roles.

This suggests a slightly different perspective from that of the medical literature, and may reflect the developing and unclarified role of the nurse practitioner compared with that of the medic. Student nurse practitioners value the OSCE and the face-to-face assessment it provides (Taylor and Quick, 2020), even though they find it stressful and experience variation in standard setting (Harden et al, 2015).

One aspect of medical school OSCEs that nurse practitioner students would almost certainly not embrace is the notion of norm referencing (Park et al, 2021) versus criterion referencing. Norm referencing in an OSCE sense adjusts the pass mark depending on the cohort’s scores as a whole (meaning that, if some members of the cohort are particularly high achieving, the pass mark is raised disproportionately). While not technically norm referencing, when using the Angoff method, if the current student group comprises many high achievers or the experts are particularly stringent, this could influence the resulting ‘cut score’ (Cohen et al, 2013).
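
To make the contrast concrete, the sketch below applies one simple norm-referenced rule – a cut score set at one standard deviation below the cohort mean, a purely illustrative choice – alongside a fixed criterion-referenced pass mark; the cohort scores are hypothetical.

import statistics

# Hypothetical cohort of percentage scores for one OSCE
cohort_scores = [62, 68, 71, 74, 75, 78, 80, 83, 85, 90]

# Norm-referenced: the cut score moves with the cohort's performance
norm_cut = statistics.mean(cohort_scores) - statistics.stdev(cohort_scores)

# Criterion-referenced: the cut score is fixed in advance against set criteria
criterion_cut = 70

print(f"Norm-referenced pass mark:      {norm_cut:.1f}%")  # about 68% for this cohort
print(f"Criterion-referenced pass mark: {criterion_cut}%")  # unchanged whoever sits the exam

With a higher-achieving cohort, the norm-referenced mark rises even though nothing about the required performance has changed – precisely the feature that discussion participants were keen to avoid.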

There is criticism of norm referencing: Onwudiegwu (2018) asserts that criterion referencing should be the norm, with students judged against a set standard – but offers no examples of this standard. By contrast, Dickter et al (2022) recommend using both, despite suggesting that criterion referencing led to higher standards.

The ultimate expression of criterion referencing might be seen in the Nursing and Midwifery Council’s introduction of the Test of Competence OSCE for international recruits (Bland, 2020), whereby the criteria are a didactic set of steps to be followed. Daniels and Pugh (2018) have suggested that checklists should be carefully constructed to avoid rewarding a rote approach, unless that is the desired outcome. If checklists are used, consideration should be given to making them polytomous (for example, each item rated as not done, attempted or done well), rather than dichotomous (that is, done or not done) (Pugh et al, 2016).
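
The sketch below illustrates that difference with a hypothetical three-item checklist: under a polytomous scheme an ‘attempted’ item earns partial credit, whereas a dichotomous scheme forces each item into done or not done.

# Hypothetical polytomous marking scheme: each item is rated on three levels
POLYTOMOUS = {"not done": 0, "attempted": 1, "done well": 2}

checklist = [
    ("Introduces self and confirms patient identity", "done well"),
    ("Palpates all four quadrants of the abdomen",    "attempted"),
    ("Auscultates for bowel sounds",                  "not done"),
]

poly_score = sum(POLYTOMOUS[rating] for _, rating in checklist)
poly_max = 2 * len(checklist)

# A dichotomous version must collapse the ratings; here only items judged
# 'done well' earn the single available mark
dichot_score = sum(1 for _, rating in checklist if rating == "done well")

print(f"Polytomous score:  {poly_score}/{poly_max}")          # 3/6
print(f"Dichotomous score: {dichot_score}/{len(checklist)}")  # 1/3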

This brief consideration of the literature tends to support the assertion that there is no common consensus as to exactly what standard the fledgling practitioner should achieve to pass a physical examination skills module, beyond a woolly notion of ‘competence’, and that further exploration of this topic is needed. Further, despite multiple references to a ‘standard’, judging whether it has been attained is, in all cases, a subjective decision. Criterion referencing may be fairer and more reliable when assessing student performance, with a checklist of tasks to be achieved and, perhaps, a weighting for certain key elements.

Pass marks

Module assessment using an OSCE seems to be reasonably standard practice across HEIs, although at my last HEI there was pressure to remove OSCEs (due to cost) and replace them with an in-practice element. Modules are either at level 6 (undergraduate degree level) or level 7 (master’s level). The mark needed to pass varies between institutions; some require 50% at level 6 and 70% at level 7, whereas others require 40% at level 6 and 50% at level 7.

There was consensus among the discussion participants that these pass marks were too low, and that the 50% required at master’s level was little better. They cited institutional policy as the reason the threshold was not higher, and expressed dissatisfaction with the way they were forced to score the OSCE, noting that they were not ‘allowed’ to use any subjective impression – this meant a student could perform poorly by most standards yet still pass if they met all the criteria.

There was general agreement among participants that a higher pass mark is needed, with some form of subjective assessment to remove the ‘box-ticking’ element. A desire to make the assessment more rigorous was expressed, but a mechanism to achieve this could not be identified. It was commonly felt that:

  • 70% as the pass mark was reasonable;

  • More elements that are considered ‘essential to achieve’ should be included – failure to demonstrate these would be an automatic fail.

Caution must be exercised at this point, however, given the ‘curse of knowledge’ mentioned above. It is nonetheless important to note that, under the system employed by some institutions, a level 6 student can omit or get wrong 60% of the OSCE and still pass; at master’s level, the requirement is only to get 50% ‘correct’.

A cursory glance at the definition of master’s-level attainment and its characteristics (for example, that given by The Quality Assurance Agency for Higher Education, 2020) suggests that this falls very short of the mark required for a master’s degree. Despite this, the evidence suggests that OSCEs are valuable (Patrício et al, 2013).

Defining competence

One element common to all of the courses discussed is that they focused on examination skills and not the role of the practitioner, meaning that some degree of further development in practice would be required. Completion of the course was explicitly not a ‘sign off’ of competence to practise. It was generally accepted that employers, and the students themselves, may not be fully cognisant of that viewpoint, as it was not made explicit in most courses.

No party to the discussions felt that completion of the course was a licence to practise. This viewpoint was perhaps best summed up by the expression that these skills were the ‘foundation’ on which future practice was built. I would agree with this point, with the caveat that foundations must be solid and of good quality if the structure is to rise and endure. Capability in the role is obtained through practice, but educational provision develops the competencies needed to achieve that capability. This can, of course, be interpreted in several ways, but I take it to mean that completion of educational training provides the building blocks on which practice can develop.

The ability to state with confidence that students who should not have passed had not done so was sadly lacking across the HEIs where I have worked; neither I nor the discussion participants could say with absolute confidence that no-one who should have failed had, in fact, passed.

Participants recalled numerous occasions when faculty had wished they could fail a student, but had been unable to do so because no mechanism was in place to manage this.

Roach (1992) made a widely used attempt at a definition of competence, alluding to an individual having (among other attributes) the skills and knowledge to fulfil their professional responsibilities. This is a reasonable attempt, but it does not pin down any specifics; this is a criticism that can be levelled at any attempt to define competence, and is representative of the difficulty of defining such a vague concept (Kirkengen et al, 2013).

In my experience across the sector, explicitly articulating what is meant by competence is extremely challenging – and the inability to do so is, considering the potential implications of incompetence in this area, more than a little disconcerting. Benner famously proposed the journey from novice to expert (Benner, 1982) as an adaptation of the Dreyfus model of skills acquisition. While not without its critics or alternative theories – such as Duchscher’s stages of transition, or the transition shock model (Murray et al, 2019) – this has formed the basis of much theory since. No-one is suggesting that students should be expert after a single course but, surely, they should have progressed at least to advanced beginner?

Perhaps Redfern et al’s (2002) definition of competence – the ability to do the job with desirable real-world outcomes – has the most to offer: can the student do the job that is expected of them? Although more than 20 years old, that definition still holds real value. Whatever benchmark is decided on, examiners must, as Daniels and Pugh (2018) explained, have a shared model of the desired performance.

If that desired performance reflects practice, then a subjective element could also be introduced. For example, there is some (weak) evidence that the amount of empathy demonstrated during an OSCE can be predictive of clinical skill (Casas et al, 2017); incorporating such an element would move the assessment away from being purely criterion based, without entering the realms of norm referencing that all discussion participants were keen to avoid.

This subjective element may be undesirable from a marking-schema perspective; any subjective element is open to questions of bias and inter-rater reliability (Fuller et al, 2017) and is a potential minefield of discrepancy. Hope and Cameron (2015), for example, stated that examiners are more lenient at the start of an OSCE session than at the end, and the ‘hawk–dove’ effect (inter-examiner stringency versus leniency) is also well known (Finn et al, 2014). However, if we accept the premise that there will always be a subjective element to whether a mark is awarded, it is better to recognise and use it than to try, in vain, to eliminate it. Its influence could be controlled and managed through the moderation process. Whichever method is used, it should be transparent and as rigorous as possible.

In a previous role as programme lead for an advanced practice MSc, if I was approached by a marking team with a borderline pass decision, I suggested they ask themselves whether they trusted the student to carry out the task set – namely, if, in clinical practice, the student went behind a curtain and examined a patient, would the examiner trust what the student reported to them? If the answer was “no”, the student failed; if “yes”, then a pass was awarded or the mark given. These discussions were recorded or documented and, in the few instances when the decision was challenged, they were considered robust enough – and, more importantly, focused sufficiently on patient safety – to resist that challenge.

Conclusion

When I first started examining OSCEs, I felt the student should perform to the same level as a newly qualified doctor. Based on the discussions above and many years examining students, I have modified this opinion, not because it is unreasonable but, rather, because it is unrealistic given the amount of contact time and students’ varied abilities.

An alternative standard or benchmark is offered instead: the examiner should be confident that they could send the student to examine a patient and trust that the findings from that examination were accurate. Of course, there is a subjective element in this, but it is at least marginally more concrete than just hoping for ‘competence’. One physical assessment course does not produce a competent practitioner, but it should produce a competent examiner of patients, with foundations they can use to move towards expert status.

The process by which institutions educating advanced practitioners arrive at their OSCE standards must be made more academically rigorous. To increase quality and maintain patient safety, consideration should be given to raising the pass mark or cut-off point, with some mandatory criteria also introduced, particularly around patient safety issues. Judicious weighting of criteria can be employed, but should not be solely relied on as the method by which a pass is granted. At the very least, the question we must ask ourselves when deciding whether to give the mark or not should be “Do I trust this person to do this?” If the answer is “yes”, they should be awarded the point.

Subjectivity, while generally considered undesirable, is inherently present in most marking practices. In the form of a judgement around entrustability, however, and with appropriate moderation mechanisms in place, it forms a valuable part of OSCE assessment.

Key points

  • Education provision to develop individuals into advanced clinical practice roles is now standardised at master’s level

  • Objective Structured Clinical Examination (OSCE) assessments are specific, practical tests that are increasingly used in advanced clinical practice education programmes

  • When designing marking criteria for Objective Structured Clinical Examinations, consideration must be given to what gains a mark

  • Objective Structured Clinical Examination marking schemes must be rigorous and defendable

  • Some ‘must-complete’ criteria should be introduced to increase quality and maintain patient safety

References

Benner P (1982) From novice to expert. American Journal of Nursing; 82: 3, 402-407.

Bland J (2020) Introduction to the new Nursing OSCE Skill Stations. Nursing and Midwifery Council.

Casas RS et al (2017) Associations of medical student empathy with clinical competence. Patient Education and Counseling; 100: 4, 742-747.

Cohen ER et al (2013) Raising the bar: reassessing standards for procedural competence. Teaching and Learning in Medicine; 25: 1, 6-9.

Daniels VJ, Pugh D (2018) Twelve tips for developing an OSCE that measures what you want. Medical Teacher; 40: 12, 1208-1213.

Dickter DN et al (2022) Collaboration readiness: developing standards for interprofessional formative assessment. Journal of Professional Nursing; 42: 8-14.

Duvivier R, Veysey M (2016) Is the long case dead? “Uh, I don’t think so”: the Uh/Um Index. Medical Education; 50: 12, 1245–1248.

Dwyer T et al (2016) How to set the bar in competency-based medical education: standard setting after an Objective Structured Clinical Examination (OSCE). BMC Medical Education; 16: 1.

Finn Y et al (2014) Exploration of a possible relationship between examiner stringency and personality factors in clinical assessments: a pilot study. BMC Medical Education; 14: 1052.

Fuller R et al (2017) Managing extremes of assessor judgment within the OSCE. Medical Teacher; 39: 1, 58-66.

Harden RM et al (2015) The Definitive Guide to the OSCE: The Objective Structured Clinical Examination as a Performance Assessment. Elsevier.

Haring CM et al (2014) Student performance of the general physical examination in internal medicine: an observational study. BMC Medical Education; 14: 73.

Hope D, Cameron H (2015) Examiners are most lenient at the start of a two-day OSCE. Medical Teacher; 37: 1, 81-85.

Kirkengen AL et al (2013) What constitutes competence? That depends on the task. Scandinavian Journal of Primary Health Care; 31: 2, 65-66.

Ljungbeck B et al (2021) Content in nurse practitioner education: a scoping review. Nurse Education Today; 98: 104650.

Loh K-Y et al (2016) OSCE standard setting by borderline regression method in Taylor’s Clinical School. In: Tang SF, Logonnathan L (eds) Assessment for Learning Within and Beyond the Classroom: Taylor’s 8th Teaching and Learning Conference 2015 Proceedings. Springer.

Malau-Aduli BS et al (2017) A collaborative comparison of objective structured clinical examination (OSCE) standard setting methods at Australian medical schools. Medical Teacher; 39: 12, 1261–1267.

Murray M et al (2019) Benner’s model and Duchscher’s theory: providing the framework for understanding new graduate nurses’ transition to practice. Nurse Education in Practice; 34: 199-203.

NHS (2023) NHS Long Term Workforce Plan. NHS.

Onwudiegwu U (2018) OSCE: design, development and deployment. Journal of the West African College of Surgeons; 8: 1, 1-22.

Park SY et al (2021) Comparing the cut score for the borderline group method and borderline regression method with norm-referenced standard setting in an objective structured clinical examination in medical school in Korea. Journal of Educational Evaluation for Health Professions; 18: 25.

Patrício MF et al (2013) Is the OSCE a feasible tool to assess competencies in undergraduate medical education? Medical Teacher; 35: 6, 503-514.

Pugh D et al (2016) Done or almost done? Improving OSCE checklists to better capture performance in progress tests. Teaching and Learning in Medicine; 28: 4, 406–414.

Redfern S et al (2002) Assessing competence to practise in nursing: a review of the literature. Research Papers in Education; 17: 1, 51-77.

Taylor D, Quick S (2020) Students’ perceptions of a near-peer Objective Structured Clinical Examination (OSCE) in medical imaging. Radiography; 26: 1, 42-48.

The Quality Assurance Agency for Higher Education (2020) Characteristics Statement: Master’s Degree. QAA.

University of Liverpool (nd) An introduction to clinical assessments (OSCEs): what is expected from an examiner? liverpool.ac.uk (accessed 10 October 2023).
