European Curriculum Recommendations

Objective structured clinical examination (OSCE)

An OSCE is basically an organisational framework consisting of multiple stations around which students/trainees rotate, and at which they perform and are assessed on specific tasks.

Objective
To assess competence at the "shows how" or simulation level (the OSCE is the assessment instrument at this level of Miller's pyramid).

PRM scenario
Specific communication challenges such as behavioural modification counselling (treatment adherence in a teenager with difficult asthma, smoking in a child with cystic fibrosis), clinical reasoning, diagnostic assessment skills such as lung function testing, or patient management skills.

Method
Many variants of OSCEs exist, e.g. 25–35 stations with a duration of 4.5 min each (Dundee, UK), 20 stations with a duration of 10 min each (Medical Council, Canada), and 16 stations with a duration of 9 min each (Harvard Medical School, Boston, MA, USA).

Establishing (content) validity in three steps
• Identify the problems or conditions with which the candidate needs to be competent in dealing.
• Define the tasks within those problems or conditions in which the candidate is expected to be competent.
• Construct a blueprint or grid (i.e. define the sample of items to be included in the test). In its simplest form, this will consist of a two-dimensional matrix, with one axis representing the competencies to be tested and the other the problems or conditions on which the competencies will be demonstrated; see the sketch after this entry.

Determining and establishing reliability
To achieve acceptable levels of reliability, OSCEs need to incorporate measures across a large number of cases or problems, and thus, if used alone, often have to be longer than is practicable. The use of checklist-based markings may enhance inter-rater consistency in some OSCE stations (e.g. practical and technical skills stations). For other stations, global ratings used by trained assessors may be more appropriate (e.g. communication skills stations and diagnostic task stations with alternative routes to the same outcome).

Suggested references
• Davis MH. OSCE: the Dundee experience. Med Teach 2003; 25: 255–261.
• Adamo G. Simulated and standardized patients in OSCEs: achievements and challenges 1992–2003. Med Teach 2003; 25: 262–270.
• Newble D. Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ 2004; 38: 199–203.
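To make the blueprint grid and the reliability/test-length trade-off concrete, here is a minimal Python sketch. It is illustrative only: the competencies, problems, per-station reliability of 0.15 and target reliability of 0.80 are assumed figures, not values from the curriculum; the length estimate uses the standard Spearman–Brown prophecy formula.

```python
# Illustrative OSCE blueprint and test-length estimate.
# All station content and reliability figures below are assumptions.

# Blueprint: one axis lists competencies, the other problems/conditions;
# each marked cell becomes a candidate station for the examination.
competencies = ["lung function testing", "adherence counselling",
                "clinical reasoning"]
problems = ["difficult asthma", "cystic fibrosis", "chronic cough"]

blueprint = {
    ("lung function testing", "difficult asthma"),
    ("adherence counselling", "cystic fibrosis"),
    ("clinical reasoning", "chronic cough"),
}

# Print the grid: "X" marks a sampled competency/problem pairing.
for competency in competencies:
    row = ["X" if (competency, problem) in blueprint else "."
           for problem in problems]
    print(f"{competency:25s} {' '.join(row)}")

def spearman_brown_stations(r_one: float, r_target: float) -> float:
    """Stations needed (as a multiple of one station) to reach
    r_target, given a single station's reliability r_one."""
    return (r_target * (1 - r_one)) / (r_one * (1 - r_target))

# Assumed figures: if one station yields reliability 0.15, reaching
# 0.80 takes ~23 stations -- with 10-min stations, nearly 4 h of
# testing, which is why a short OSCE used alone is rarely reliable.
print(spearman_brown_stations(r_one=0.15, r_target=0.80))  # ~22.7
```

The prophecy formula makes the point in the text quantitative: acceptable reliability forces a long sampling of cases, which is why OSCEs are usually combined with other assessment instruments rather than used alone.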
Multisource feedback (MSF)

Method
The MSF can be performed using commercially available packages or developed within the institution. The MSF form may use tick boxes and free-text fields; mostly the two are combined. Free-text answers have been validated in several studies as more useful than tick boxes and preformulated answers. The behaviours to be assessed should be described very clearly, and the whole tool should be kept simple, with few items, and fit for purpose. 6–10 raters have been recommended for the process; however, using more than 10 respondents has been shown to increase its reliability further. Raters may be selected by the candidate or randomly selected by the initiator of the process. Because ratings have been shown to vary between staff groups at different levels, this should be taken into account when selecting responders. Responders should be instructed to work in a constructive manner.
Together with the evaluation of the candidate's individual skills, responders should also suggest areas for possible improvement and ways to achieve it. The respondents, as well as the trainee, should be familiar with the purpose of the process to avoid any misunderstandings. Anonymity of the responders must be strictly guaranteed. The results should be evaluated during a properly scheduled interview between trainer and trainee, with enough time available, comparing the MSF results with the candidate's own views. The discussion should produce an action plan targeting the areas with room for improvement. Repeating the MSF process after a given period of time shows how the candidate's performance has improved following the previous round and the action plan.

Suggested references
• Wood L, Hassell A, Whitehouse A, et al. A literature review of multi-source feedback systems within and without health services, leading to 10 tips for their successful design. Med Teach 2006; 28: e185–e191.
• Overeem K, Lombarts MJ, Arah OA, et al. Three methods of multi-source feedback compared: a plea for narrative comments and coworkers' perspectives. Med Teach 2010; 32: 141–147.
• Bullock AD, Hassell A, Markham WA, et al. How ratings vary by staff group in multi-source feedback assessment of junior doctors. Med Educ 2009; 43: 516–520.
• Burford B, Illing J, Kergon C, et al. User perceptions of multi-source feedback tools for junior doctors. Med Educ 2010 [Epub ahead of print; DOI: 10.1111/j.1365-2923.2009.03565.x].
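The aggregation and anonymity rules described above can be summarised in a short sketch. This is not taken from any commercial MSF package: the item names, the rating scale and the minimum-rater threshold (set to 6, the lower bound of the 6–10 raters recommended above) are assumptions for illustration.

```python
from dataclasses import dataclass
from statistics import mean

MIN_RATERS = 6  # lower bound of the 6-10 raters recommended above

@dataclass
class MsfResponse:
    ratings: dict[str, int]   # item -> score; keep the item set small
    comments: list[str]       # free-text suggestions for improvement

def aggregate(responses: list[MsfResponse]) -> dict:
    """Pool MSF responses for the trainer-trainee interview.

    Scores are released only as per-item means across all raters, and
    only once enough responses are in, so that no individual rating or
    comment can be traced back to a responder.
    """
    if len(responses) < MIN_RATERS:
        raise ValueError("too few raters to preserve anonymity")
    items = responses[0].ratings
    return {
        "scores": {item: mean(r.ratings[item] for r in responses)
                   for item in items},
        "comments": [c for r in responses for c in r.comments],
    }

# Hypothetical usage: the pooled report feeds the scheduled feedback
# interview, where it is compared with the candidate's own views and
# turned into an action plan.
responses = [
    MsfResponse({"communication": 4, "teamwork": 5},
                ["Explains management plans clearly to parents."])
    for _ in range(6)
]
print(aggregate(responses))
```

Releasing only pooled scores and unattributed narrative comments is one simple way to honour the strict-anonymity requirement while still giving the trainee the concrete, constructive feedback the process is meant to produce.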
