
Assessing fitness-to-practice of overseas-trained health practitioners by Australian registration & accreditation bodies

Abstract

Background

Assessment of the fitness-to-practice of health professionals who trained overseas and wish to practice in Australia is undertaken by a range of organisations, using a variety of methods. However, very little has been published about how these organisations conduct their assessments. The purpose of the current paper is to investigate the methods of assessment used by these organisations and the issues associated with conducting these assessments.

Methods

A series of semi-structured interviews was undertaken with organisations that assess overseas-trained health professionals wishing to practice in Australia. Content analysis of the interviews was used to identify themes and patterns.

Results

Four themes were generated from the content analysis of the interviews: (1) Assessing; (2) Processing; (3) Examining; and (4) Cost-efficiency. The themes were interconnected, and each theme also had a number of sub-themes.

Conclusions

The organisations that participated in the present study used a range of assessment methods to assess overseas-trained health professionals. These organisations also highlighted a number of issues, particularly related to examiners and to pre- and post-assessment processes. Organisations demonstrated an appreciation for the ongoing review of their assessment processes and for incorporating evidence from the literature to inform their processes and assessment development.

Background

Assessment of fitness-to-practice in a jurisdiction is commonplace where a person trained in one country wishes to practice in another. These assessments take many forms and are designed to assess the competency or the capability of the practitioner. The overarching role of the assessment is to protect the public from practitioners who are not competent [1–3].

In Australia, responsibility for assessing overseas-trained health professionals who wish to practice (and in some cases in New Zealand) rests with the health professional accreditation bodies or professional associations. In the case of the professional accreditation bodies, this role is assigned by regulators under the Health Practitioner Regulation National Law Act (2009) [4]. Each accreditation body is often also charged with the responsibility of assessing the suitability of pre-registration university programs for its profession.

The assessments undertaken by these bodies vary depending on the competencies and capabilities set out for each profession, with the methods of assessment chosen to ensure that a range of these competencies and capabilities are assessed [3, 5], often in multiple ways. It may be, however, that the assessments measure those competencies or capabilities that are easily assessed, and omit those that are not [6]. The purpose of these assessments is to protect patients. Therefore, the methods chosen to assess candidates should be reliable, valid, feasible and acceptable [7, 8]. In this way patients are exposed only to practitioners deemed competent to practice in that profession. In addition, it is important that assessment methods are continuously reviewed as part of quality assurance processes [3].

Whilst very little has been published on the methods of assessment [8] and issues surrounding the assessment of overseas-trained health practitioners in Australia, there has been some discussion of the political and workforce issues (e.g., complex procedures, direct and indirect discrimination, poor provision of information) surrounding international medical graduates wishing to practice in Australia [9–13]. There are, however, examples throughout the literature of assessment methods in licensing examinations.

Pharmacists seeking to practice in Ontario, Canada undertake a Prior Learning Assessment (PLA) that assesses the candidate's learning through both formal and informal education [14]. The PLA used in this context comprises both an assessment of documents (transcripts, portfolios, etc.) and performance in an Objective Structured Clinical Examination (OSCE).

The OSCE format is used widely in fitness-to-practice assessments. Austin et al. [15] have described the development of the OSCE for pharmacy graduates, with Munoz et al. [16] presenting data on the reliability, validity and generalisability of the examination. Austin et al. [14] suggest that standardised English-language proficiency tests (e.g., IELTS) may not be appropriate for assessing the cultural competency and communication skills required for pharmacy practice, even though this is a particularly important criterion where there is diversity in the candidates' English language proficiency [17].

This suggests that assessment of English language proficiency should form part of the assessment process, and that organisations should not rely solely on standardised English-language tests. In addition to English-language assessment, Archer [18] suggests that assessment of psychosocial skills should form part of a licensing examination. These communication and psychosocial skill issues are highly relevant, as Tamblyn et al. [19] have demonstrated that patient-practitioner communication and clinical decision making during fitness-to-practice assessments correlate with complaints to professional regulation bodies.

The aim of the current paper is to identify the methods of assessment used by those bodies that undertake the assessment of overseas-trained health professionals in Australia, and to identify the issues surrounding these assessments and how these issues are managed.

Methods

Study design

Semi-structured interviews were used to explore how Australian health professional assessment bodies assess overseas-trained practitioners. An interview schedule (Table 1) was developed based on the findings of a systematic search and critical review of the health professional assessment literature and preliminary information collected from the websites of these bodies. A semi-structured format was chosen so that information could be gathered on specific areas of interest (e.g., structure of assessment framework) while still providing participants with the opportunity to describe their unique experiences associated with assessment.

Table 1 Interview schedule

The study was approved by the Victoria University Faculty of Health, Engineering and Science Human Research Ethics Committee.

Participants

Thirteen (N=13) professional bodies that assess the fitness-to-practice of overseas-trained health professionals were approached to participate in interviews exploring current practices in the assessment of those wishing to practice in Australia.

The interviews were conducted by a researcher (VS) experienced in research interviewing.

Data collection

All interviews were audio-recorded and transcribed verbatim. Notes were taken during interviews to include any relevant non-verbal cues and to assist with data transcription (e.g., when the quality of the recording was compromised by background noise). Participants were sent a copy of their transcribed interviews and were asked to make any necessary changes (e.g., if the researcher had misheard a statement) and/or add any additional comments.

Data analysis

Utilising NVivo (QSR International, Victoria, Australia), a primarily directed approach to content analysis was used to select and focus data from transcriptions and notes [20, 21]. Based on previous research [20, 22], some themes were set prior to conducting the interviews, allowing semi-structured interview guidelines to be developed. Nonetheless, as the study was investigating a relatively sparse area of research, a conventional (inductive) content analysis approach was also used to identify any additional themes and categories that emerged [22, 23].
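
To illustrate the two-stage logic described above (coding against preset themes, then setting unmatched material aside for inductive review), the following sketch may help. It is a minimal toy example in Python, not the NVivo workflow used in the study, and the theme keywords are hypothetical.

```python
# Illustrative sketch only: directed content analysis codes segments against
# preset themes; anything unmatched is set aside for inductive review.
# The keyword lists are hypothetical, not the study's actual coding frame.

PRESET_THEMES = {
    "assessing": ["osce", "mcq", "viva", "portfolio", "examination"],
    "processing": ["eligibility", "appeal", "review", "application"],
    "examining": ["assessor", "examiner", "training", "marking"],
    "cost-efficiency": ["cost", "fee", "budget", "resource"],
}

def code_segments(segments: list[str]) -> tuple[dict[str, list[str]], list[str]]:
    """Assign each segment to every preset theme whose keywords it mentions;
    return unmatched segments separately as candidates for emergent themes."""
    coded: dict[str, list[str]] = {theme: [] for theme in PRESET_THEMES}
    unmatched: list[str] = []
    for segment in segments:
        text = segment.lower()
        hits = [t for t, kws in PRESET_THEMES.items() if any(k in text for k in kws)]
        for theme in hits:
            coded[theme].append(segment)
        if not hits:
            unmatched.append(segment)  # reviewed inductively for new themes
    return coded, unmatched

coded, emergent = code_segments([
    "The OSCE stations did not always use standardised patients.",
    "Candidates may appeal on procedural grounds.",
    "Arranging overseas venues was unexpectedly difficult.",
])
```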

Results

Eleven (N=11) organisations agreed to participate (Table 2), with one or two representatives taking part in each interview. Representatives of the organisations were the chief executives and/or those in charge of the development, implementation and conduct of the assessment process.

Table 2 Participating organisations

By paying particular attention to patterns, regularities, irregularities and propositions within the data [21, 24, 25], four interconnected themes were generated from the interview data, with each theme also generating first- and second-order sub-themes.

Assessing (Theme 1)

Professional bodies used a variety of assessment strategies to assess whether an overseas-trained practitioner was eligible to be registered to practice in Australia (Figure 1).

Figure 1. Sub-themes identified within the Assessing theme.

Desktop assessment (Theme 1.1.1.1) was used in two ways. In the first instance, bodies used the assessment as an initial step in the process, requiring candidates to provide evidence and information regarding their eligibility to take part in the assessment process. For some bodies, the desktop assessment was the sole assessment tool. Assessing a candidate's training was identified as a very important step:

" …because can’t assess everything that person needs, should be knowing as a practitioner, you have to rely on their training to have given them some of it. I just don’t think that competence-only assessment is either affordable or realistic "

Although a traditional unstructured viva voce (Theme 1.1.1.2) or oral examination was not common practice, it was not unusual for clinical examinations to include some form of verbal questioning to assess a candidate's clinical reasoning or to assess performance criteria that may not have been covered as part of the clinical examination.

Short-answer questions (Theme 1.1.1.3) were predominantly used in conjunction with multiple choice question (MCQ) assessments and were often based on clinical scenarios. The essay (Theme 1.1.1.4) or long-answer question was used infrequently and only as part of a multi-format written assessment. An MCQ (Theme 1.1.1.5) examination was commonly used as the first step in the assessment process to assess theoretical, basic science and/or clinical knowledge. Vignettes or situational questions were often used rather than those assessing knowledge of facts, as this method had been found to be better at assessing areas such as clinical reasoning, judgement and diagnosis. This type of question was also reported to be more discriminating than other types of question:

" When it was factual recall they were scoring around 40% plus correct, soon as you put in a vignette they dropped below 25%. "

Clinical capability (Theme 1.1.2) was assessed using different methods. The OSCE (Theme 1.1.2.1) was often used to assess clinical skills (e.g., history taking, after-care) and clinical reasoning. The stations did not necessarily use real or standardised patients. The long case (Theme 1.1.2.2) was commonly used to assess the performance of clinical skills. Patient selection strategies ranged from a purposeful selection of patients based on age and/or medical condition to accepting walk-in patients.

Risk management was a primary concern for most organisations, particularly for those professions where the potential risk to patients was high. Issues and solutions were identified across a variety of aspects, including: minimising risk to the community from practising health professionals; decreasing the risk of harm to those participating in examinations; reducing harm to candidates by avoiding placing them in situations they are ill-equipped to handle; considering the candidate's potential impact on colleagues; and the risk of candidates appealing or engaging in legal action because of the assessment outcome:

" … however it is done at the end of the day if you are putting your stamp on them you have to know that they can deliver, and do no harm. "

During assessment (Theme 1.2.2.2), the main risk-management processes identified were: stringent confirmation of evidence; vigilance when assessing areas where harm can be caused; requiring a demonstration of clinical skills and competency; training assessors in the policy and process to follow if a model or patient is at risk; transparency to candidates in terms of safety performance indicators, policies and procedures; and running examinations well using good staff.

The assessment of cultural competency was increasingly important for most professional bodies and was seen as a complex issue to assess:

" … the cultural competence has become more of an issue … beyond communication you have to have an understanding of the culture of the individual you are dealing with and I don’t think we are at that stage yet … "

The main focus identified in the area of cultural competency was the ability of candidates to treat patients from culturally and linguistically diverse backgrounds, including Indigenous Australians and/or patients of different ages. Cultural competency was sometimes assessed in written and clinical examinations by presenting scenarios that included cultural aspects. In examinations where real patients were included, most organisations screened the patients and did not include those with language difficulties that required an interpreter.

Another issue raised in terms of cultural competency was the possibility of culturally-influenced responses. Caution was advised when developing assessments so that candidates from different cultures were not mis-cued. Concern was also expressed about the inability of candidates to gain employment without an understanding of the Australian workplace culture (e.g., allied health support staff), particularly if competing against Australian candidates for positions.

Assessing a candidate's communication skills (Theme 1.3.2) could include their interaction with patients, other professionals, patients' families and particular professional environments. Some respondents noted that communication was assessed throughout the examination process, as without adequate communication skills the candidate could not complete the required tasks. Others specifically assessed communication skills, including building rapport, listening skills and sensitivity to the client and the information gathered.

For some professional bodies, communication was seen as '…stuff that is not easy for us to assess'. One concern was that written assessments (e.g., portfolios, MCQs) or simulation-style assessments did not allow observation of candidates' communication skills in areas such as relationship-building with the client. One strategy to overcome this in portfolio assessment was to ask candidates to submit a video of a treatment session or consultation. Another identified solution was a move to workplace-based assessment.

Knowledge of the Australian health system (Theme 1.3.3) was considered from two perspectives. The first was the effect on patients and colleagues resulting from a practitioner's lack of knowledge of the system. The second was the effect on the candidate's performance in assessment tasks due to their lack of knowledge. Of particular concern were areas such as occupational health and safety, ethical issues, and systems and processes. One problem identified was the differences between the states:

" … you are constructing those questions you then felt the problems between the states, and those differences you might think you have got a terrific question and then someone from SA [South Australia] will say no it’s totally different in NSW [New South Wales]. That is where you end up with a big problem. "

One solution suggested was a consensus between the states or a national standard. Although some organisations did not directly assess knowledge of the Australian health system they provided information on the system to candidates through publications or guest speakers. Another recommendation for candidates was to spend some time observing in an Australian clinical setting. This was seen as a very effective learning experience for candidates who were struggling in the area.

Processing (Theme 2)

For each organisation the assessment of overseas-trained professionals was a process that was continually being reviewed with the aim of increasing its efficiency and effectiveness (Figure 2).

Figure 2. Sub-themes identified within the Processing theme.

Decisions about a candidate's eligibility to participate in the assessment process were based on a number of criteria. Satisfactory completion of Courses and qualifications (Theme 2.1.1.1) was a common criterion. Prior to entry to any examination, most organisations required candidates to demonstrate that they had successfully completed an approved course deemed equivalent to studies in Australia. In some cases, courses were required to be approved by specific councils or organisations in the country of training; in other cases, courses from particular countries were deemed acceptable.

Another strategy used was to recognise equivalent Examination or accreditation systems (Theme 2.1.1.2) coupled with at least 12 months of clinical practice in a particular environment. Examination and accreditation systems were accredited on the basis of documented quality and similarity in structure and standard to Australian systems.

Professional bodies required candidates to complete English language qualifications (Theme 2.1.1.3), either the Occupational English Test (OET) or the International English Language Testing System (IELTS). Candidates were required (or would be required in the near future) to gain at least a B in the OET or a 7 on the IELTS, in all sections, in one sitting.
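
The stated rule is easy to encode. The sketch below is a hedged, hypothetical illustration of the logic (a single-sitting, every-section threshold), not any body's actual verification process, which relies on certified results from the test providers.

```python
# Hypothetical eligibility check: at least a B in every OET section, or at
# least 7.0 in every IELTS section, all from ONE sitting. Illustrative only.
# (Union annotations like float | str need Python 3.10+.)

OET_PASS_GRADES = {"A", "B"}  # OET grades run from A (best) downwards

def meets_english_requirement(test: str, sitting: dict[str, float | str]) -> bool:
    """'sitting' maps section name to the result from a single test sitting."""
    if test == "OET":
        return all(grade in OET_PASS_GRADES for grade in sitting.values())
    if test == "IELTS":
        return all(score >= 7.0 for score in sitting.values())
    raise ValueError(f"unrecognised test: {test}")

# One section below 7.0 fails the whole sitting:
print(meets_english_requirement(
    "IELTS",
    {"listening": 7.5, "reading": 7.0, "writing": 6.5, "speaking": 8.0},
))  # False
```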

Several professional bodies conducted their written examination at both on-shore and off-shore locations (Theme 2.1.2). Factors taken into consideration in adopting this strategy were convenience for candidates (i.e., not having to travel to Australia) and cost to the organisation. Several professional bodies that were unable to conduct stand-alone testing sessions due to small candidate numbers linked with other Australian professional bodies to run combined sessions. The examinations were conducted through experienced and reliable overseas venues:

" So the offshore is conducted through a clearing house … they do all the arrangements with the off-shore venues for about 6 professions, because you know we might only have one [candidate] in Tehran but there might be two [omitted – other allied health professionals] and three [omitted – other allied health professionals] so then we’re not all paying for invigilators and things like that … I guess there are venues in most places around the world … "

There was some variation in what post-assessment information was provided to candidates about their performance on the assessment (Theme 2.1.3.2). Many professional bodies provided candidates who failed with information on the areas of the assessment they needed to improve in. Some only provided this information if the candidate appealed:

" The advice from DEEWR [Department of Education, Employment & Workplace Relations] is no. Very simple and clear. On appeal we usually do, then explain a bit more clearly. We will say to them you need to do the following … "

" … [examiner] gives recommendations to the candidate and then we pass those on … has no problem with the candidates ringing and talking to them about the assessment and asking advice and things like that … "

One organisation noted that cultural expectations need to be considered in communicating results:

" You know in Australia we tend to sugar-coat bad news … we’ve done away with it, successful or not successful, or you know, suitable or not suitable, because even though it is not an examination process as such, it’s an assessment. Culturally, people want to hear did I pass or fail? "

Examination development (Theme 2.1.4) was a long and costly commitment for professional bodies and required ongoing review. Some strategies of development included use of expert panels, sharing with similar overseas professional bodies and sharing information with Australian educational institutions. Questions were occasionally trialled by being included within scheduled examinations for Australian pre-registration programmes and by practising professionals at events such as conferences. Including trial questions as part of scheduled examinations was generally seen as the most efficient method, as it could be difficult and costly to encourage students and professionals to participate with commitment in trialling questions:

" We thought that paying them [final year students] and telling them how important it was would be enough for them to take it seriously but the exam was 3 hour duration and we made them stay for 1½ hours but we could tell that some of them didn’t - it didn’t really work - didn’t give it a really good go. "

Even so, using graduating students was seen as desirable, as the passing score for the examination was often based on the level of graduating Australian students.

The basis for appeals (Theme 2.2.2) appeared to be either procedural or related to candidate impairment (e.g., feeling unwell). Some organisations only considered procedural appeals asserting that the assessment process had been defective. Professional bodies worked hard to minimise the frequency of appeals by creating a comprehensive assessment blueprint linked to professional standards; having transparent processes; following guidelines in areas such as patient selection; and monitoring candidate performance by methods such as videotaping clinical assessments or recording key strokes in computerised examinations.

Review of assessment processes (Theme 2.3) tended to be ongoing with assessors encouraged to provide feedback and assessment data analysed after each examination. Some organisations were starting to include an analysis of examiner performance in their review. Another, less common, internal review strategy was to survey candidates on their assessment experience. In most instances reviews were conducted by a committee.
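
One examiner-performance check of the kind mentioned here can be sketched simply: compare each assessor's mean awarded mark against the cohort of assessor means and flag outliers for committee review. This is an illustrative assumption about how such a screen might look, with invented thresholds, not a procedure reported by the participating organisations.

```python
# Hypothetical 'hawk/dove' screen over examiner marking data. Illustrative
# only; thresholds and data structures are invented, not from the study.
import statistics

def flag_outlier_examiners(marks: dict[str, list[float]],
                           z_cut: float = 2.0) -> list[str]:
    """marks maps examiner name -> marks awarded. Flags examiners whose mean
    mark lies more than z_cut standard deviations from the mean of examiner
    means (requires at least two examiners)."""
    means = {examiner: statistics.mean(ms) for examiner, ms in marks.items()}
    mu = statistics.mean(means.values())
    sd = statistics.stdev(means.values())
    return [e for e, m in means.items() if sd and abs(m - mu) / sd > z_cut]
```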

Professional bodies were asked to identify the main strengths (Table 3) and weaknesses (Table 4) of their processes. They were also asked to discuss any changes planned for future implementation or changes they would like to implement (Table 5).

Table 3 Strengths of the assessment processes
Table 4 Weaknesses of the assessment processes
Table 5 Possible changes to the assessment processes

Examining (Theme 3)

Commonly, the minimum standard (Theme 3.1.1, Figure 3) required of candidates was that of an entry-level graduate who had become eligible to register as a practitioner in Australia by completing the necessary education and clinical requirements. One professional body explained the reasoning behind this decision:

" So the competency based occupational standards were developed … all universities are accredited against those standards and so all people coming in from overseas against those standards. "

Figure 3. Sub-themes identified within the Examining theme.

In specific assessment systems, standards were generally set according to professional standards. One strategy for setting criteria in clinical assessment was to use a panel of experts to make decisions. Several professional bodies required candidates to gain at least 75% in their clinical examination, with some parts of the assessments being hurdle requirements.
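
As a concrete reading of that rule, the sketch below shows one way such a decision could be encoded: hurdle sections must be passed outright before the overall threshold applies. The section names and equal weighting are assumptions for illustration, not any body's published marking scheme.

```python
# Hypothetical pass/fail logic: hurdle sections first, then a 75% overall
# threshold. Section names and equal weighting are invented for illustration.

HURDLE_SECTIONS = {"patient_safety"}  # must be passed outright

def clinical_result(section_scores: dict[str, float],
                    hurdles_met: dict[str, bool]) -> str:
    """section_scores holds percentage scores; hurdles_met records whether
    each hurdle section was passed."""
    if not all(hurdles_met.get(h, False) for h in HURDLE_SECTIONS):
        return "fail (hurdle requirement not met)"
    overall = sum(section_scores.values()) / len(section_scores)
    return "pass" if overall >= 75.0 else "fail"

print(clinical_result(
    {"history_taking": 80.0, "diagnosis": 74.0, "management": 78.0},
    {"patient_safety": True},
))  # pass (mean 77.3%)
```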

Strategies (Theme 3.1.2) used in marking clinical examinations included rubrics with checklists and/or rating scales. Checklists were used for assessors to note whether or not candidates had performed certain components or achieved defined performance indicators. Where checklists were used, assessors were allowed a certain amount of discretion in marking areas that could not be covered due to the circumstances of the examination.

Assessor judgement was also used when rating. Candidates were marked according to the standard of their performance but an overall pass or fail decision was made at the completion of the clinical assessment. It was noted that this strategy was not based on any statistical grounds and may be reviewed.

Borderline fail candidates (Theme 3.1.3) were identified as an issue in assessment by several professional bodies. One strategy used to address borderline candidates was to offer them a supplementary examination. This might mean that the candidate then underwent further testing in a specific area or re-sat the whole clinical examination. Borderline portfolio or desktop assessments were referred to a senior staff member and the application discussed. If a decision could not be reached, it was referred to the assessment committee or another higher-level staff member. On most occasions, if insufficient evidence was found, there was an option for candidates to provide further evidence (e.g., complete a specific course) within an extended period rather than re-applying in full.

Selection of assessors (Theme 3.2.1) was based on their experience not just in the profession but also in educational assessment:

" … again as many people did we are relying heavily on the fact that these people are “trained” when they come to us, trained by the institutions that use them. We then just have to reorient them to the nature of the assessment … "

Assessors from an education background were seen as advantageous not just because of their skills but also because they had experience with entry-level practitioners.

Training of assessors (Theme 3.2.2) was noted as an important issue in assessment. Methods of training included formal government-based sessions in recognising fraudulent documents, ‘calibration sessions’ with other assessors to share ideas, working with experienced assessors, handbooks and pre-session briefings. One professional body indicated that reviewing specific assessment cases with assessors had led to more open communication when assessors were unsure of decisions and that this had been highly beneficial. The benefit of a workshop-style approach was highlighted by another organisation because of the opportunity for ideas to be shared. They noted that while rigorous training sessions had worked well for new examiners, established examiners did not find them helpful. They had moved to having new assessors observe the examination, be monitored during their first assessments and then be subject to the same continual auditing as all assessors were in this organisation.

Cost-efficiency (Theme 4)

The cost of examinations (Figure 4) could vary from year to year based on the demand for assessment. Several professional bodies indicated that the number of applications was affected by issues such as the world economic situation and/or movement to another assessment system, such as in New Zealand.

Figure 4. Sub-themes identified within the Cost-efficiency theme.

The cost of assessment of overseas-trained practitioners was a significant part of the annual budget for most professional bodies. Although candidate fees contributed to the costs, in most organisations the profession subsidised them: the institution running the examinations was 'extremely kind to us and don't actually charge us, you know, the full amount', and/or many professionals gave their time voluntarily. Fees were set at a level that made the examination feasible but not too expensive for candidates.

It was also important for examinations to be efficient in use of other resources. For large candidate numbers, written examinations using MCQs met this criterion. For most professional bodies, however, clinical assessment was resource intensive. A major human resource consideration was the administrative staff that worked to organise examinations and disseminate information to candidates. Organisations considering the introduction of examinations were concerned about the resources required to do this.

Discussion

The purpose of the present study was to investigate the methods used to assess overseas-trained health professionals who wish to practice in Australia. The main themes identified in the analysis of responses from the interviews of Australian assessment bodies were Assessing, Processing, Examining and Cost-efficiency. Within each theme, multiple levels of sub-themes were also identified.

Assessing

Under the theme Assessing, the main sub-themes related to types of assessment and risk management. Assessment bodies used a range of assessment tools when assessing fitness-to-practice; for the Australian medical profession, this is consistent with other countries [8]. In the initial stages, these bodies utilised a desktop assessment either to screen candidates (ensuring they met the standard to sit the examination) or as the sole assessment process. Multiple assessment methods were used to ensure content validity and to ensure that candidates were assessed on all competencies and capabilities deemed important and relevant for that profession. It may also be that, as Finucane et al. [8] suggest, there is no single assessment method suitable for assessing fitness-to-practice. Given that it is a high-stakes assessment, decisions about fitness-to-practice should be based on a multitude of information sources [26].

The assessment methods employed by these organisations ranged from short-answer questions and MCQs (testing basic science and theoretical knowledge) [27, 28] to the OSCE [15, 29] and long case assessment [27, 30] (for assessment of clinical capability). In addition, portfolio assessments [31] were increasingly used to assess candidates, and a number of organisations indicated that this is a method they may adopt at a later date.

Not surprisingly, risk management was a primary concern for most organisations, and different types of risk and associated mitigation strategies were discussed. Risks were identified in a number of areas, including minimisation of harm to the community and decreasing the risk of harm to patients (or standardised patients) during the examination process. Ensuring that candidates were equipped to cope with workplace-based assessments and the environment in which these assessments are conducted was a further concern.

When designing assessments, the bodies identified a number of areas that presented challenges. These included the assessment of cultural competence, communication, knowledge of the Australian health system and after-care/follow-up of the patient. Both cultural competence [32, 33] and communication [34, 35] have previously been identified within the medical and health education literature as areas that are difficult to assess in a reliable and valid way. In relation to communication, Tamblyn et al. [19] suggest that a cut-off or minimum score be set for communication components of the assessment process in an effort to reduce the number of complaints to professional regulatory bodies. In addition, whilst the organisations interviewed did recognise the importance of cultural competency assessment, particularly in relation to Indigenous Australians, many were only just incorporating, or anticipating incorporating, this area into their assessment processes.

Processing

Under Processing, the main sub-themes were procedures for conducting assessment, appeal processes, review processes and reflection on the strengths and weaknesses of the organisations' systems. Importantly, most organisations reported that their assessment processes and examiners were continually reviewed. Information about the assessment process, including sample questions and marking criteria, was provided to candidates prior to the assessment. Written assessments were largely undertaken off-shore, that is, not in Australia. Caution was advised when running examinations off-shore, as it was a complex task to have the correct candidate at the correct site sitting the correct examination and receiving the correct results. The organisational difficulty increased with larger numbers of candidates.

Post-assessment issues, including candidate feedback and appeals, were also canvassed. Most organisations provided feedback to candidates who failed one or more elements of the assessment process; this assisted in clarifying the areas in which the candidate needed to improve and also minimised appeals, ensuring the process was fair and transparent [8]. Organisations were understandably keen to minimise appeals, and in most cases appeals were only available to the candidate if an examination process issue was identified. These steps to minimise appeals would also help make the assessment defensible from a legal standpoint. Although this was not articulated by participants in the current study, previous research has indicated that this is a concern for such organisations [8], and one minimised by the use of valid and reliable assessment strategies. When asked to reflect on their processes, organisations identified numerous strengths and weaknesses, and also presented planned or potential changes to their assessment processes.

Examining

Within the theme Examining, there were two sub-themes related to marking (including processes for those who fail) and assessors (selection and training). Marking was undertaken using checklists [36], ensuring that candidates performed required elements; however, there was little discussion of the use of global or holistic assessments. The use of holistic assessments is becoming widely reported in the literature as a valid and reliable assessment approach [37–40], although it appears that this has yet to make its way into the assessments undertaken by these organisations. The use of global assessments has been demonstrated to improve the reliability and validity of an assessment, particularly where communication skills are assessed [41, 42].

Most organisations spent large amounts of time and money on their examiners, in terms of recruitment, training and payment to assess. Examiners were typically selected based on clinical experience as well as experience in education [43–45], which ensures a peer assessment process [8]. Formal training sessions were often undertaken, and new assessors were paired with more experienced assessors to aid their development. Examiner training is widely accepted to improve the reliability of an assessment as well as examiner confidence in the assessment process [43, 46–49]. All examiners were subject to ongoing auditing and assessment, and therefore "…remain competent in what they do" [8]. Where organisations did not have formal examiner training processes in place, they anticipated implementing them in the near future.

Cost-efficiency

The range of practices reported under the theme Cost-efficiency was relatively limited. The size of the organisation had an impact on the financial elements of the assessment process: large organisations were able to make money on their examinations and use it to further develop their processes, whereas smaller organisations often charged candidates only the 'cost' of conducting the assessment, leaving them with very little in the way of financial resources to develop their assessment processes. Whilst clinical assessments were labour and cost intensive, particularly in relation to the administration of the assessment, organisations did not perceive this to be a major issue.

Conclusions

Most of the organisations that participated in the current study have invested large amounts of time and resources, both financial and administrative, in the development and ongoing review of their assessment processes. Whilst many organisations are utilising assessment methods they have employed for a number of years, there was recognition that 'newer' assessment types such as the portfolio may be useful in the overall assessment process. The assessment methods were often chosen based on the resources available to the organisation (e.g., MCQs for medicine). Most processes include multiple assessment methods, blueprinted to assess a range of competencies and capabilities for that profession. Overall, the organisations interviewed gave the impression that they use the literature to inform their assessment processes and employ robust, defensible assessments of overseas-trained health professionals who wish to practice in Australia.

References

1. Epstein RM: Assessment in medical education. N Engl J Med. 2007, 356: 387-396. 10.1056/NEJMra054784.
2. Govaerts MJB, van der Vleuten CPM, Schuwirth LWT: Optimising the reproducibility of a performance-based assessment test in midwifery education. Adv Health Sci Educ. 2002, 7: 133-145. 10.1023/A:1015720302925.
3. Brailovsky CA, Grand'Maison P: Using evidence to improve evaluation: A comprehensive psychometric assessment of a SP-based OSCE licensing examination. Adv Health Sci Educ. 2000, 5: 207-219. 10.1023/A:1009869328173.
4. Health Practitioner Regulation National Law Act 2009: http://www.ahpra.gov.au/Legislation-and-Publications/Legislation.aspx.
5. Holmboe ES, Hawkins RE: Methods for evaluating the clinical competence of residents in internal medicine: A review. Ann Intern Med. 1998, 129: 42-48.
6. Van der Vleuten C: National, European licensing examinations or none at all? Med Teach. 2009, 31: 189-191. 10.1080/01421590902741171.
7. Hays RB, Davies HA, Beard JD, Caldon LJM, Farmer EA, Finucane PM, McCrorie P, Newble DI, Schuwirth LWT, Sibbald GR: Selecting performance assessment methods for experienced physicians. Med Educ. 2002, 36: 910-917. 10.1046/j.1365-2923.2002.01307.x.
8. Finucane P, Bourgeois-Law G, Ineson S, Kaigas T: A comparison of performance assessment programs for medical practitioners in Canada, Australia, New Zealand, and the United Kingdom. Acad Med. 2003, 78: 837-843. 10.1097/00001888-200308000-00020.
9. McLean R, Bennett J: Nationally consistent assessment of international medical graduates. Med J Aust. 2008, 188: 464-468.
10. McGrath BP: Integration of overseas-trained doctors into the Australian medical workforce. Med J Aust. 2004, 181: 640-642.
11. McGrath P, Wong A, Holewa H: Canadian and Australian licensing policies for international medical graduates: a web based comparison. Educ Health. 2011, 24: 1-13.
12. Douglas S: The registration and accreditation of international medical graduates in Australia - A broken system or a work in progress? People Place. 2008, 16: 28-40.
13. Groutsis D: Geography and credentialism: the assessment and accreditation of overseas-trained doctors. Health Sociol Rev. 2006, 15: 59-70. 10.5172/hesr.2006.15.1.59.
14. Austin Z, Galli M, Diamantouros A: Development of a prior learning assessment for pharmacists seeking licensure in Canada. Pharm Educ. 2003, 3: 87-96. 10.1080/1560221031000151633.
15. Austin Z, O'Byrne C, Pugsley J, Quero Munoz L: Development and validation processes for an Objective Structured Clinical Examination (OSCE) for entry-to-practice certification in pharmacy: The Canadian experience. Am J Pharm Educ. 2003, 67: 1-8.
16. Quero Munoz L, O'Byrne C, Pugsley J, Austin Z: Reliability, validity, and generalizability of an objective structured clinical examination (OSCE) for assessment of entry-to-practice in pharmacy. Pharm Educ. 2005, 5: 33-43. 10.1080/15602210400025347.
17. Rothman A, Cusimano M: Assessment of English language proficiency in international medical graduates by physician examiners and standardised patients. Med Educ. 2001, 35: 762-766. 10.1046/j.1365-2923.2001.00964.x.
18. Archer J: European licensing examinations - The only way forward. Med Teach. 2009, 31: 215-216. 10.1080/01421590902741148.
19. Tamblyn R, Abrahamowicz M, Dauphinee D, Wenghofer E, Jacques A, Klass D, Smee S, Blackmore D, Winslade N, Girard N, et al: Physician scores on national clinical skills examinations as predictors of complaints to medical regulatory authorities. J Am Med Assoc. 2007, 298: 993-1001. 10.1001/jama.298.9.993.
20. Biddle SJH, Markland D, Gilbourne D, Chatzisarantis NLD, Sparkes AC: Research methods in sport and exercise psychology: Quantitative and qualitative issues. J Sports Sci. 2001, 19: 777-809. 10.1080/026404101317015438.
21. Wolcott HF: Writing up qualitative research. 3rd edition. 2009, USA: Sage Publications.
22. Hsieh H-F, Shannon SE: Three approaches to qualitative content analysis. Qual Health Res. 2005, 15: 1277-1288. 10.1177/1049732305276687.
23. Scanlan T, Ravizza K, Stein G: An in-depth study of former elite figure skaters: I. Introduction to the project. J Sport Ex Psych. 1989, 11: 54-64.
24. Harrell MC, Bradley MA: Data collection methods: Semi-structured interviews and focus groups. 2009, Santa Monica: RAND Corporation.
25. Miles MB, Huberman AM: Qualitative data analysis: An expanded sourcebook. 2nd edition. 1994, California, USA: Sage Publications.
26. Schuwirth L: The need for national licensing examinations. Med Educ. 2007, 41: 1022-1023. 10.1111/j.1365-2923.2007.02856.x.
27. Wass V, van der Vleuten CPM, Shatzer J, Jones R: Assessment of clinical competence. Lancet. 2001, 357: 945-949. 10.1016/S0140-6736(00)04221-5.
28. Elstein AS: Beyond multiple-choice questions and essays: The need for a new way to assess clinical competence. J Med Educ. 1993, 68: 244-249.
29. Grand'Maison P, Lescop J, Rainsberry P, Brailovsky CA: Large-scale use of an objective, structured clinical examination for licensing family physicians. Can Med Assoc J. 1992, 146: 1735-1740.
30. Hatcher S, Handrinos D, Jenkins K: How and why the long case should be kept: A view from the antipodes. Psych Bull. 2008, 32: 151-152.
31. Wilkinson TJ, Challis M, Hobma SO, Newble DI, Parboosingh JT, Sibbald RG, Wakeford R: The use of portfolios for assessment of the competence and performance of doctors in practice. Med Educ. 2002, 36: 918-924. 10.1046/j.1365-2923.2002.01312.x.
32. Gregorczyk SM, Bailit HL: Assessing the cultural competency of dental students and residents. J Dent Educ. 2008, 72: 1122-1127.
33. Miller E, Green AR: Student reflections on learning cross-cultural skills through a 'cultural competence' OSCE. Med Teach. 2007, 29: 76-84. 10.1080/01421590701266701.
34. Beaulieu MD, Rivard M, Hudon E, Saucier D, Remondin M, Favreau R: Using standardized patients to measure professional performance of physicians. Int J Qual Health Care. 2003, 15: 251-259. 10.1093/intqhc/mzg037.
35. Tompkins M, Paquette-Frenette D: Learning portfolio models in health regulatory colleges of Ontario, Canada. J Contin Educ Health Prof. 2010, 30: 57-64. 10.1002/chp.20057.
36. Sadler DR: Indeterminacy in the use of preset criteria for assessment and grading. Assess Eval Higher Educ. 2009, 34: 159-179. 10.1080/02602930801956059.
37. Cunnington J, Neville A, Norman G: The risks of thoroughness: reliability and validity of global ratings and checklists in an OSCE. Adv Health Sci Educ. 1997, 1: 227-233.
38. Cohen R, Rothman A, Poldre P, Ross J: Validity and generalizability of global ratings in an Objective Structured Clinical Examination. Acad Med. 1991, 66: 545-548. 10.1097/00001888-199109000-00012.
39. McKinley R, Strand J, Ward L, Gray T, Alun-Jones T, Miller H: Checklists for assessment and certification of clinical procedural skills omit essential competencies: a systematic review. Med Educ. 2008, 42: 338-349. 10.1111/j.1365-2923.2007.02970.x.
40. McKinley RK, Strand J, Gray T, Schuwirth L, Alun-Jones T, Miller H: Development of a tool to support holistic generic assessment of clinical procedure skills. Med Educ. 2008, 42: 619-627. 10.1111/j.1365-2923.2008.03023.x.
41. Newble DI: Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ. 2004, 38: 199-203. 10.1111/j.1365-2923.2004.01755.x.
42. Driessen E: Portfolio critics: Do they have a point? Med Teach. 2009, 31: 279-281. 10.1080/01421590902803104.
43. Boulet JR: Summative assessment in medicine: The promise of simulation for high-stakes evaluation. Acad Emerg Med. 2008, 15: 1017-1024. 10.1111/j.1553-2712.2008.00228.x.
44. Hays RB: Assessment of general practice consultations: Content validity of a rating scale. Med Educ. 1990, 24: 110-116. 10.1111/j.1365-2923.1990.tb02508.x.
45. Hays RB, Jones BF, Adkins PB, McKain PJ: Analysis of videotaped consultations to certify competence. Med J Aust. 1990, 152: 609-611.
46. Awaisu A, Mohamed MHN, Al-Efan QAM: Perception of pharmacy students in Malaysia on the use of objective structured clinical examinations to evaluate competence. Am J Pharm Educ. 2007, 71: 118. 10.5688/aj7106118.
47. van Zanten M, Boulet JR, McKinley D: Using standardized patients to assess the interpersonal skills of physicians: Six years' experience with a high-stakes certification examination. Health Communic. 2007, 22: 195-205. 10.1080/10410230701626562.
48. Van Nuland M, Van Den Noortgate W, Degryse J, Goedhuys J: Comparison of two instruments for assessing communication skills in a general practice objective structured clinical examination. Med Educ. 2007, 41: 676-683. 10.1111/j.1365-2923.2007.02788.x.
49. Cooper C, Mira M: Who should assess medical students' communication skills: their academic teachers or their patients? Med Educ. 1998, 32: 419-421. 10.1046/j.1365-2923.1998.00223.x.


Acknowledgements

The study was undertaken as part of a project to develop an assessment process for overseas-trained osteopaths to practice in Australia, through a grant provided by the Osteopaths Registration Board of Victoria (superseded by the Osteopathy Board of Australia as of July 1, 2010). The authors would like to thank the organisations and the people who generously gave their time to be interviewed as part of the study.

Author information

Correspondence to Brett Vaughan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors were involved in the design of the study. VS and BV undertook the literature review. VS led the focus groups and interviews. MW assisted with the focus groups and interviews. RG, CG, VS and BV analysed the data. GF and PMcL developed the discussion. All authors contributed to the compilation and review of the manuscript. All authors read and approved the final manuscript.

Brett Vaughan, Vivienne Sullivan, Cameron Gosling, Patrick McLaughlin, Gary Fryer, Margaret Wolff and Roger Gabb contributed equally to this work.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Vaughan, B., Sullivan, V., Gosling, C. et al. Assessing fitness-to-practice of overseas-trained health practitioners by Australian registration & accreditation bodies. BMC Med Educ 12, 91 (2012). https://doi.org/10.1186/1472-6920-12-91
