ACCREDITATION AND PROGRAMME EVALUATION
Ensuring the quality of educational programmes
Strategies to evaluate the curriculum and to provide continuous information to guide decisions are now available. How schools are accredited varies in different regions of the world.
Two related processes are essential to assure the ongoing quality of a medical education programme: accreditation and programme evaluation. Accreditation is an external review system that uses pre-established, regionally accepted standards of educational quality. The focus of accreditation may be an institution, such as a college or university, or an educational programme leading to a specific degree, such as the MD (medical doctor). Accreditation tends to be episodic, occurring at specified intervals and reviewing institutions or programmes as of a given time. Programme evaluation, as defined here, is an ongoing process of review, internal to the medical school, that uses school-developed process and outcome criteria. Programme evaluation, which can be both formative and summative, allows the identification of strengths and of areas needing improvement. Thus, programme evaluation and accreditation are two complementary tools for quality assurance and improvement that may utilise similar criteria. Ideally, the systems are designed to provide information on both the processes and outcomes of a medical education programme, although along different time lines, with one using internal reviewers and the other using external peer reviewers. The results of programme evaluation inform accreditation as well as supporting ongoing quality improvement. This chapter will describe the basics of accreditation and programme evaluation systems. The reader is referred to Chapter 17 for best practices in assessment of students, which supports both programme evaluation and accreditation and is, therefore, the third side of the triangle that sustains quality medical education programmes.
Definition, purpose and need for accreditation
The Council for Higher Education Accreditation in the USA defines accreditation as both a process and a status, a means to ensure and improve higher-education quality, assisting institutions and programmes using a set of standards developed by peers. The result of the process, if successful, is the awarding of accredited status. All accrediting organisations create and use specific standards to ensure that institutions and programmes meet at least threshold expectations of quality and that they improve over time. These standards address key areas such as faculty, curricula and student learning outcomes, student support services, finances and facilities. All accrediting organisations use common practices, including a self-study by the institution or programme against the standards, an on-site visit by an evaluation team of peer experts and a subsequent review and decision by the accrediting body about accreditation status. Depending on the accrediting body, this review is typically repeated every 3–10 years for the institution or programme to sustain its accreditation (Council for Higher Education Accreditation 2010: 1).
The World Federation for Medical Education (WFME) recommends that, for the accreditation of medical education, there should be four specific stages (see the sketch following the list):
1 institutional self-evaluation of the medical school;
2 external evaluation based on the report of self-evaluation and a site visit;
3 final report by the review team containing recommendations regarding the decision on accreditation;
4 the decision on accreditation (World Federation for Medical Education 2005).
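Because these stages form a strict sequence, an accreditation body's tracking of them can be represented as a simple workflow. The following minimal Python sketch is purely illustrative: the names AccreditationStage, ReviewRecord and advance are hypothetical conveniences and are not drawn from any WFME specification.

```python
from dataclasses import dataclass
from enum import Enum


class AccreditationStage(Enum):
    """The four WFME-recommended stages, in order."""
    SELF_EVALUATION = 1      # institutional self-evaluation by the medical school
    EXTERNAL_EVALUATION = 2  # review of the self-evaluation report plus a site visit
    FINAL_REPORT = 3         # review team's report with recommendations
    DECISION = 4             # the accrediting body's decision


@dataclass
class ReviewRecord:
    """Tracks one school's progress through the accreditation cycle."""
    school: str
    stage: AccreditationStage = AccreditationStage.SELF_EVALUATION

    def advance(self) -> None:
        """Move to the next stage; stages cannot be skipped or reordered."""
        if self.stage is AccreditationStage.DECISION:
            raise ValueError("The accreditation decision has already been made.")
        self.stage = AccreditationStage(self.stage.value + 1)


record = ReviewRecord("Example Medical School")
record.advance()          # self-evaluation submitted; external evaluation begins
print(record.stage.name)  # EXTERNAL_EVALUATION
```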
Accreditation: five case studies
Accreditation as a tool for continuous quality improvement
Accreditation can serve as a framework for comprehensive internal programme evaluation and as a tool for continuous quality improvement. The first case study below, from the University of California, Davis (UCD), describes how one medical school in the USA used accreditation standards related to student mistreatment to improve its clinical learning environment and, by extension, the general culture of the institution.
Case study 24.1 Accreditation standards as a tool to drive organisational culture change, The University of California, Davis, United States
UCD is a public school of medicine in Northern California, USA, founded in 1966 with a mission to train the next generation of physicians to serve California. Over the past decade, the school has experienced dramatic growth in the size and scope of both its clinical operations and its research enterprise. The transition of this school from a small, local, training programme to a major academic health system providing the full breadth of academic experiences engendered exciting and sometimes stressful changes. Our experience highlights the key role that the Liaison Committee on Medical Education (LCME) accreditation standards can play in achieving a high-quality training environment and supporting, and even driving, important organisational culture change.
After an LCME accreditation visit in 2006 identified multiple areas of non-compliance with standards, the faculty undertook a series of quality improvement initiatives in collaboration with the recently appointed Dean. Compliance with LCME standards necessitated acceleration of curriculum reform, improved career counselling and updating physical plant facilities, as well as ongoing attention to the learning environment and prevention of student mistreatment. While many faculty and school administrators had been working on these areas, the results of the LCME review focused attention on the urgency to address these critical issues and served to catalyse real change. Delineation of optimal approaches was facilitated by a consultation from the LCME Secretariat later in 2006, and by the requirement for ongoing monitoring, including status reports and a follow-up limited site survey. As a result, significant changes were made in administrative leadership, curriculum integration, centralised oversight of education programmes, timely reporting of grades, student wellness and career advising. Policies and education programmes to reduce student mistreatment were strengthened and highlighted throughout the organisation. By 2007, the LCME deemed that substantial improvements had been made in compliance with the standards.
In 2010, however, LCME highlighted renewed concern about non-compliance with standards focusing on student mistreatment and the learning environment. Rates of mistreatment reported by UCD fourth-year medical students in the annual Association of American Medical Colleges (AAMC) graduation questionnaire were much higher than the national average. These results were obviously alarming to the faculty and administrative leadership, and discordant with the historical perception of the school as a supportive and friendly institution.
Clearly, the LCME standards were shining a light on aspects of the school's organisational culture that needed to be addressed explicitly. Prompted by the 2010 findings, the school undertook a comprehensive review of compliance with all of the LCME accreditation standards, assigning letter grades for each standard and creating implementation teams to develop recommendations for any standard deemed by the teams to have a 'non-passing' grade. For example, for the standards regarding student mistreatment and the learning environment, the Dean appointed a Learning Environment Task Force, with representation from faculty, administration, nursing, residents, students and staff, to understand the student mistreatment problem and create an action plan to improve compliance and, most importantly, the student experience. Based on the positive experience with the LCME Secretariat's consultation in 2006, the Dean requested another LCME consultation visit specifically to solicit advice on best practices in creating a positive learning environment. In addition, the status reports and action plans required by the LCME to monitor progress served as useful prompts to keep planned corrective actions moving forward.
The school identified several factors contributing to the student mistreatment problem:
• The school had not recognised or addressed its evolution from a smaller programme with a ‘family’ atmosphere into a larger, and often less personal, academic enterprise.
• Students were not adequately prepared for the more autonomous and self-directed learning environment of the third year, which took place in very fast-paced clinical settings, having been conditioned to a more nurturing pre-clinical experience in the first 2 years.
• Increasingly busy and stressed clinical care providers often viewed students as peripheral to the clinical teams.
• Students were insufficiently involved in the school's early efforts to address mistreatment, being treated as victims rather than as a source of important input into solutions.
Indeed, students were seen as the ‘canary in the coal mine’ in identifying environmental problems experienced by other members of the clinical teams, from overworked trainees to stressed staff, to clinical faculty trying to juggle multiple mission expectations. In fact, student mistreatment was considered to reflect cultural issues which were also likely to be linked to patient satisfaction with their care experience. Consideration of the LCME standards focused attention on the importance of respect and inclusion at all levels of the organisation.
An action plan was developed to achieve excellence in the learning environment for students and to ensure that the organisation's humanistic values, as articulated in the school's Principles of Community (University of California, Davis 2014), were reflected in clinical settings. The plan included:
• revision of the mistreatment policy to improve transparency and enforcement;
• creation of a Learning Climate Committee with empowered medical student representation to monitor, develop and implement interventions, such as required educational sessions on the learning environment for clinical departments, and to measure outcomes;
• tracking the learning environment, including identifying both positive and negative experiences through anonymous student surveys every 8 weeks in the third and fourth years, and sharing anonymised results across the organisation as a form of appreciative inquiry and emergent design to promote culture change;
• development of a comprehensive professionalism curriculum for students in order to address the ‘hidden curriculum’;
• creation of new professionalism awards for residents and faculty to promote a positive learning environment and funding for a Clinical Educator’s Masters programme to reward excellent faculty teaching in clinical settings.
To date, the impact of these interventions has been positive, with rates of student mistreatment declining to the national average, as measured in the 2012 AAMC graduation questionnaire, and falling further in 2013, as measured by the school’s recent student survey.
The LCME accreditation process and the requirements for compliance with LCME standards facilitated identification of needed changes at the UCD School of Medicine and galvanised institutional efforts to implement quality improvements. As one example, the requirement to provide a positive learning environment focused attention on this key issue, necessitated that we overcome denial and minimisation and lent urgency to the implementation of myriad and far-reaching interventions to drive fundamental cultural change. The consensus agreement about the importance of accreditation helped overcome resistance to change and accelerate implementation and acceptance of new pedagogy, administrative policies and investment in education faculty, programmes and facilities. LCME standards are an effective tool to catalyse organisational culture change in schools of medicine and academic health centres.
Accreditation as a tool for change and guide for new school development
In addition to quality improvement, accreditation can serve as an agent of social change. New peer-developed standards can be designed to move educational programmes into areas, such as diversity and social accountability, that they might not otherwise pursue. A third value added through accreditation is its regulatory function, in that it establishes minimum levels of resources and systems that must be in place for a specific certification or degree programme. The prospective specification and review of resources and processes serve to protect students as they invest their time and money; aid institutions in planning; and provide the stimulus for the acquisition of the needed resources, while at the same time protecting the public from poorly trained healthcare personnel. The Northern Ontario School of Medicine case study describes how Canada's first new medical school in 30 years used accreditation standards to guide it in securing the resources and establishing the systems needed to admit its first class of students.
Case study 24.2 Using medical education accreditation standards as the foundation for creating Canada’s first new medical school in 30 years, Northern Ontario School of Medicine, Canada
The Northern Ontario School of Medicine (NOSM) was established in 2003 with an explicit social accountability mandate to:
address the healthcare needs of underserved populations of First Nations, Francophone, and Anglophone peoples across the province. The school embarked on an innovative distributed community-engaged education model . . . to address the shortage of healthcare providers in rural and remote communities, especially across Northern Ontario.
(Strasser et al. 2009: 1459)
Creating a new medical school is a complex task requiring attention to governance, admissions, curriculum, faculty appointments, faculty development and finance. The LCME accreditation standards for medical schools in the USA and Canada provided the framework for the creation of these programme initiatives.
The LCME standards are not prescriptive: while they do not provide specific details on 'how' a particular issue needs to be addressed, they do provide guidance on what needs to be in place to ensure a quality medical education programme. The structures and processes required to meet the standards are left to the faculty.
The development of the school's governance, financial, and faculty appointment and tenure processes was guided by the LCME accreditation standards.
While the curriculum design and the admission system at NOSM were based on medical education research, the actual development and implementation processes were all guided by the LCME standards. The medical education research supported three principles that were used as the foundation of the NOSM curriculum and admission systems. These principles pertained to a programme’s success in meeting its goal of producing graduates who ultimately decide to practise in smaller rural and remote communities. These principles are:
1 an admissions system minimising the barrier(s) to applicants from smaller communities;
2 a curriculum designed to provide students with the capabilities needed to practise in smaller communities;
3 a curriculum design that placed students in context, i.e. in smaller communities, to be taught and mentored by physicians who themselves had chosen to practise in small communities.
The admission standards and criteria, although guided by the LCME accreditation standards, were set by the faculty. They stipulated that the decision to admit a student must be made by the school's admission committee so that no outside influence or favouritism could be exercised. The accreditation standards do not prescribe what those criteria must be. At NOSM, the faculty chose not to use the Medical College Admission Test (MCAT) commonly used by schools in Canada and the USA because of concern that it could be perceived as a barrier for students from smaller communities and/or from different cultural or language backgrounds. The faculty chose to use the combination of objective structured mini-interviews and the college grade point average as the key criteria for decision making. Pre-med requirements were minimised on the premise that courses like organic chemistry are not essential for the practice of medicine and, worse, might be a barrier for students who had their high school education in smaller communities without the rich science options of a large urban high school. In addition, the faculty decided to give extra weight to students who grew up in smaller communities, given that students from smaller communities are more likely to return to similar communities.
The LCME again served as a benchmark for the development of learning opportunities outside the urban, specialty-based environment, where students could learn from and be mentored by non-specialty physicians in non-urban, smaller communities. This allowed the school to design a curriculum that placed students in remote First Nation communities and Reserves for a month while continuing the first-year curriculum. Many of these communities were not accessible by road and had populations as small as 350 people. In the second year, students spent 2 months distributed across Northern Ontario in communities of approximately 5,000 people. By developing specific objectives as the basis for student assessment and tracking systems, the faculty were able to document that, regardless of location, students had comparable educational experiences with comparable assessment results.
While requiring active learning, varied forms of student assessment and narrative feedback, the LCME standards do not prescribe specific content from any particular discipline, nor do they prescribe how teaching should be conducted. This allowed the faculty to develop a case-based curriculum with a minimal number of lectures (generally 3 hours a week) that emphasised active learning in small-group settings. In this way, students were able to refine their problem-solving skills while learning clinical skills from the physicians in the smaller communities. The need for large departments of basic scientists and clinical specialists was therefore minimised.
The LCME standards do not prescribe department- or discipline-based clinical training. This allowed the medical educators at NOSM to design a clerkship programme that required all students to complete their clinical year (Year 3) in a longitudinal integrated clerkship model for 8 months. These clinical experiences were carried out in more than ten communities (each of approximately 20,000 people) across Northern Ontario. In the fourth year, students trained with specialists in the two larger cities in Northern Ontario. Thus, the LCME standards supported the development of a community-engaged education continuum, ranging from First Nations and Reserve communities in Year 1, through small rural communities in Year 2, to small urban communities in Year 3.
The early results are in. At the time of writing, four classes have graduated since the school’s first class entered in 2005. Sixty-one per cent of all MD graduates have chosen family practice (primarily rural) and 65 per cent of these are practising in Northern Ontario (Strasser et al. 2013).
National accreditation systems evolve and improve over time
In the context of medical school expansion and desire for the assurance of quality educational programmes, more and more countries around the world are developing their own accreditation systems for medical education programmes. The case study from Taiwan below illustrates how this country’s medical education leadership revised its accreditation standards to fit the country’s needs better.
Case study 24.3 Overhauling the accreditation standards of the Taiwan Medical Accreditation Council
The Taiwan Medical Accreditation Council (TMAC) was established in 1999 and began accreditation of medical schools in 2001. Over time, however, TMAC found that the narrative format of the original TMAC accreditation standards did not lend itself to consistent decisions about schools. In addition, it made it difficult for medical schools to identify easily the areas that needed improvement. At the same time, TMAC had been very impressed by the reformatting of the LCME standards in 2002, which enhanced the clarity and meaning of the standards and facilitated the ability to recognise chronic non-compliance.
The TMAC consulted with the LCME in 2008, expressing an interest in overhauling its accreditation standards. Five specific steps were taken:
1 In July 2009, two secretaries and an assistant secretary of the LCME and the President of the Foundation for Advancement of International Medical Education and Research (FAIMER) travelled to Taiwan to conduct a 3-day workshop on global medical education and the development of the accreditation system in the USA. The issues discussed included the content and revision of accreditation standards, the selection and training of surveyors, the accreditation process and the determination of accreditation results. The workshop was attended by TMAC members and experienced surveyors, and ample time for discussion was provided. The visitors formulated a set of proposals which included 39 essential standards considered to be important and relevant to medical education in Taiwan.
2 In March 2010, the ex-chair of the LCME Subcommittee on Standards (SOS) spent 2 weeks in Taiwan to help the TMAC-SOS overhaul its accreditation standards. Members of TMAC-SOS, consisting of five TMAC members and two external members with a medical education background, were first introduced to the ways in which the LCME standards had been reformatted from prose to numbered standards that allowed the use of explanatory annotations. The differences in culture across the two countries were taken into account and the existing TMAC standards were reformulated into clearer direct statements.
3 The first draft of the new standards was translated from English into Chinese and disseminated to all medical schools in Taiwan during the following 2 years. The TMAC then held several meetings with school representatives for the purpose of further revision and clarification. It also explained the process and introduced the new standards at the biannual Deans’ Conference on two occasions.
4 In 2012, five medical schools in Taiwan were due for 'full review' in the second cycle of TMAC accreditations, and members of the TMAC-SOS each participated in the visit to a particular school to perform a 'test run' of the newly drafted standards.
5 In 2013, TMAC-SOS held several meetings to discuss findings from the ‘test run’ and the feedback from schools. It also took into consideration standards used by other international organisations. The new TMAC-SOS standards were presented to the Deans’ Conference in September 2013, and were ready for implementation in 2014.
With the assistance of the LCME, the accreditation standards of the TMAC have been extensively overhauled, and members of the Council are currently revising the TMAC Institutional Self-Analysis. The Council has especially come to appreciate the following areas of relevance in the process:
1 The value of learning from others’ experiences, particularly the process by which LCME established the American accreditation system and carried out the development, application and revision of standards. The TMAC also learned through the participation of its members as international observers in two LCME accreditation visits in 2009 and, since then, has maintained regular updates by attending the LCME sessions at the annual AAMC meeting.
2 The impact on accreditation standards in diverse countries of differences in the structure and history of the medical education system and of differences in language and culture.
3 The process of overhauling the accreditation standards, which includes maximising communication between accreditation organisations, the deans and other representatives of medical schools, and the value of carrying out a ‘test run’.
4 The importance of involving key persons in the process, including surveyors and schools, to increase the precision of the standards, and to help schools identify areas for improvement as well as achieve the standardisation of medical education at a national level.
Institutional versus programme accreditation
When examining a country's use of accreditation, it is important to distinguish between systems that accredit institutions and those that accredit educational programmes. Almost all countries have some form of institutional accreditation, whereas a separate and independent system to accredit medical education programmes is less common. Institutional accreditation provides a broad level of oversight across many different levels of learners and many different types of degrees or certifications. Institutional accreditation systems can often include programmes ranging from early primary school to advanced degrees in medicine, law and nursing. Such broad coverage of diverse educational offerings means that the standards for institutional accreditation cannot include the specificity and granularity needed for advanced and technical degree programmes such as medicine, nursing and other healthcare education programmes. Thus, more and more countries around the world are developing specialised programme accreditation for the healthcare fields. This is happening at the same time as a dramatic increase in the number of new medical schools around the world, making it important to create systems of oversight that ensure all schools and their graduates meet quality standards. In simplistic terms, institutional accreditation is necessary but not sufficient for the complexities of assuring the quality of medical education programmes. The description below of the relatively new medical education accreditation system in South Korea is a good example of a country taking on the challenge of providing this important level of quality assurance.
Case study 24.4 Developing an accreditation system in South Korea
In 1996, the Ministry of Education in Korea conducted an institutional evaluation of all medical schools and published their ranking. The medical professional community criticised the validity of this evaluation, as it was performed by non-medical faculty members, and further rejected the idea of creating a hierarchy among schools, as the evaluation only assessed relative excellence. The institutional evaluation was not well suited to medical education and did not mitigate the problems resulting from the rapid expansion of medical schools (six schools with 805 students in 1950, increasing to 41 schools with 3,072 students by 1997).
This was the impetus for the research and development of a national programme evaluation system for quality control of basic medical education. In 1997 the Accreditation Board for Medical Education in Korea (ABMEK) was established to do this; and in 1999, ten newly established medical schools were evaluated against the standards developed by this group.
After this successful pilot study, ABMEK expanded and became the Korean Institute of Medical Education and Evaluation (KIMEE), which officially launched its first accreditation in 2003. All 41 medical schools in the country voluntarily agreed to participate in the first cycle, which started in 2000 and ended in 2005. One medical school refused to participate after two consecutive follow-up visits prompted by poor evaluation results. It was only in 2011 that KIMEE, with the help of the National Assembly, secured the passage of a new health act preventing graduates of non-accredited medical schools from sitting for the national licensure examination.
Initially, standards were developed by benchmarking other international organisations. The main objective of the first accreditation was to obtain full voluntary support from all 41 medical schools, and therefore special attention had to be paid to avoid burdening schools with excessive preparatory activities while still pushing newly established medical schools to meet the minimum standards. The first standards related to resources, entry-level student characteristics and other input-driven issues, and were relatively easy for the established, long-standing medical schools to meet. Over time, the standards became more stringent, increasing from approximately 50 to the current 97 items.
After the KIMEE completed its first accreditation in 2003, the overall satisfaction rate with KIMEE was a little over 70 per cent; the most recent survey, in 2011, showed it to be over 90 per cent. All medical schools, with one exception, acknowledged the necessity of a quality assurance system and supported the current accreditation programme. All medical schools pay fixed annual dues to the KIMEE for its accreditation work. In addition, the Korean Medical Association (KMA) subsidises approximately half of the KIMEE's remaining budget, with the Korean Hospital Association (KHA) sponsoring the rest.
Accreditation in itself is a relatively new concept for East Asian countries. The South Korean government is still very much in a state of confusion in this area, and the Ministry of Education's attempts to exert control over this voluntary programme through an official recognition process create tension between the Ministry and the professional accrediting agencies. However, overall, the quality assurance systems that have been put in place show continued improvement, and the consistent exchange with other international agencies (e.g. LCME, WFME, UK General Medical Council (GMC), Australian Medical Council (AMC) and the Royal College of Physicians and Surgeons of Canada (RCPSC)) continues to bear positive synergistic effects.
International recognition of accreditation
To stimulate further the global movement for accreditation, on 21 September 2011 the Educational Commission for Foreign Medical Graduates (ECFMG), in the USA, announced that, beginning in 2023, physicians receiving medical degrees outside of the USA and Canada who apply for residency or fellowship training in the USA must graduate from a school
that has been appropriately accredited. To satisfy this requirement, the physician’s medical school must be accredited through a formal process that uses criteria comparable to those established for US and Canadian medical schools by the Liaison Committee on Medical Education (LCME) or that use other globally accepted criteria such as those put forth by the WFME.
(ECFMG 2010: 1)
This requirement is not scheduled to take effect until 2023 because there is, as yet, no universally accepted accreditation process for medical schools; the ECFMG stated the hope that the requirement would stimulate the development of such a system. The WFME, while supportive of the ECFMG decision and a partner in this effort, has a different perspective on whether there should be a single universally accepted system for accreditation. Instead, the WFME asserts that each region or country should develop its own accreditation system. To facilitate this approach, the WFME offers examples of accreditation standards and supports a process to recognise accrediting bodies. This recognition process allows countries or regions to have their accreditation systems reviewed through peer review. Schools that are accredited by a WFME-recognised accreditor satisfy the ECFMG requirement. The recognition process requires the submission of documents describing policies, standards and governance. A team of three international medical educators then observes an accreditation team conducting a survey visit, and the same WFME team attends the accreditation meeting where that school is discussed. The recognition process was first applied to the Caribbean Accreditation Authority for Education in Medicine and other Health Professions (CAAM-HP), which received WFME recognition status on 10 May 2012. The Association for Evaluation and Accreditation of Medical Education Programs, Turkey (TEPDAD) received its recognition on 31 July 2013. The LCME was similarly reviewed and received its recognition on 28 January 2014.
Ensuring that new medical schools meet quality standards
One of the urgent issues that countries must consider when deciding whether to adopt programme-level accreditation for their medical schools is the dramatic and relatively unregulated growth of new medical schools around the world. It should come as no surprise that many of these new schools are emerging in countries without medical education programme accreditation systems in place. The countries most vulnerable to unregulated growth, and therefore the countries whose students and public are most at risk, are those that have neither programme accreditation nor national licensing examinations. Brazil is one such country: without programme accreditation or a national examination system, it has seen the number of medical schools increase from a little over 120 in 2005 to over 180 by 2007, with that number holding steady as of 2013. Many or most of these new schools are private, for-profit entities, and the types of resources available for the students and the quality of the programmes themselves are unknown. The number of graduates of these new schools has exceeded the country's ability to provide postgraduate training and, since postgraduate training is not a requirement, many of these graduates will simply go directly into practice. In the absence of an accreditation system, no one knows the quality or the resources of these new programmes and, with no national licensing examination, the knowledge and skills of the individual graduates are also unknown. For example, there are many key questions that cannot be answered:
• How much direct patient care experience do these graduates have?
• How much of the sciences basic to medicine have they learned, and to what extent do they know how to apply this knowledge in the care of patients?
• To what extent were there opportunities for active learning to ensure that these graduates will know how to be, and be willing to be, lifelong learners?
There are many challenges for a country that seeks to set up programme-level accreditation for the first time. As difficult as it is to decide what standards to use, how to create the supporting documents and how to train team members, the hardest part is establishing the governance system that grants authority to a peer group. This is most difficult in countries that have strong central government control and little experience in delegating authority to non-governmental groups. The questions of how the accreditation system is funded and how potential conflicts of interest with the funding source are managed are further barriers to overcome. Indonesia is a country that is in the process of establishing a new programme accreditation system for several healthcare disciplines while at the same time creating a national examination for all medical school graduates.
Case study 24.5 Establishing a quality assurance system of medical education in Indonesia
Concern about the quality of Indonesia’s medical doctors has been growing in the country for the last two decades. Frequent reports of malpractice and cases of neglect have triggered public debate, and at the same time raised people’s awareness and expectations about the quality of available care. Increasingly, individuals who can afford it prefer to go overseas for their healthcare needs. Meanwhile, the number of medical schools has expanded by 80 per cent during the last 10 years, from 40 schools in 2003 to 72 schools in 2013; and of those 72, about 60 per cent are private medical schools. The tuition fees, particularly in private medical schools, have more than tripled during the same period of time. Concerned about the quality and increasing costs to students, the Directorate General of Higher Education suspended the licensing of new medical schools in 2011 until a more effective regulatory framework for medical education is in place. The urgency on the part of the government for this regulatory and quality assurance system is partly driven by the increased globalisation of the labour market under the Association of South East Asian Nations (ASEAN) and the Asia Free Trade Area (AFTA).
Support for medical education reform in Indonesia began in 2005 with the introduction of a competency-based curriculum that featured a student-oriented approach utilising problem-based learning, the integration of disciplines, community orientation and early exposure to clinical settings. The period of study was shortened by 1 year to accelerate the production of graduates in response to the need for more doctors. A 12-month internship providing enhanced clinical supervision was added in 2012. The Indonesia Medical Council, as part of the reform, facilitated the development and publication in 2006 of standards of competencies for medical doctors and standards of basic medical education. This effort was the foundation for subsequent accreditation work, as it effectively engaged multiple stakeholders. The stakeholders involved in developing and adopting these documents and standards were the Indonesia Medical Association, the Association of Indonesia Medical Education Institutions, the College of Indonesian Doctors, the Ministry of Health and the Ministry of National Education. The Indonesia Medical Council also commissioned a consortium to develop the National Competency-Based Examination at the same time that the accreditation system was being developed. This national examination was administered for the first time in June 2007.
These efforts continued under the Health Professional Education Quality (HPEQ) programme launched in 2010 that involved both government and non-government stakeholders through task forces specially formed to support the reform process. The main reform agenda included strengthening the methodology of the National Examination, and establishing a specific accreditation system for health professional education, including medical education. The reform included the establishment of two independent bodies: an independent accreditation agency for health professional education (Lembaga Akreditasi Mandiri Perguruan Tinggi Kesehatan: LAM-PTKes), and an independent agency for developing the methodology and quality assurance system of the national examination (Lembaga Pengembangan Uji Kompetensi: LPUK).
Like many other countries, Indonesia has for the past 20 years had an agency providing institutional accreditation, but this agency covers some 3,100 higher-education institutions and 16,200 study programmes. The standards for this institutional accreditation are far too broad to provide a detailed focus on healthcare programmes, and that is why the next level of accreditation for medical schools and other health professional schools has been instituted. Even before the launch of the HPEQ programme in 2010, the Indonesia Medical Council engaged the institutional accreditor with a memorandum of understanding to collaborate in strengthening the accreditation of medical schools. This memorandum allowed the development of specific accreditation instruments for assessing medical education. The HPEQ played a strategic role in accelerating the establishment of an accreditation system specific to seven disciplines: medicine, dentistry, nursing, midwifery, pharmacy, nutrition and public health. Consultations were carried out with international and national experts, and teams of Indonesians were sent overseas to learn from more established accrediting organisations. The design of the system called for accreditation by an independent accreditation body established specifically for accrediting health professional education (LAM-PTKes). The inclusion of the seven disciplines opened up the opportunity for interprofessional communication, although accreditation of all seven disciplines by a single accreditation agency is challenging. The system is formative, aiming for continuous quality improvement, with financing independent of the government.
The process of establishing the accreditation system for Indonesia continues. While the blueprint of the accreditation system, the accreditation instruments and the accreditation business process are ready, ensuring synergy with existing government regulation and building consensus through broad-based participation of all stakeholders, including government and non-government entities, have required more time than expected. After lengthy debate and an arduous legal process, the Minister of Education and Culture released a decree in October 2014 recognising LAM-PTKes as an independent accreditation body for health professional education.
Several lessons from the Indonesia medical education reform process are:
• Existing government regulations are often not ready to support a professional community-led, peer review, quality assurance system. Transferring the authority from the government to non-government entities is difficult.
• Strong collaboration between government and non-government entities is essential in conducting the reform process.
• Building the capacity of professional associations is an important investment in developing the capacity for medical education quality assurance.
The efforts being made by Indonesia to institute a country-wide accreditation system are notable for the decision to move forward to address the relatively unregulated growth of new medical schools. Also important to draw from this case study is the essential but challenging need for consensus among the many stakeholders to create a decision-making body that can function independently to peer review programmes according to the standards. For countries accustomed to centralised government appointees holding decision-making authority, it takes time and consensus building to establish trust in independent decision makers composed of practitioners and academics.
External evaluation: accreditation
Governance, financing and authority
There are different types of governance models for programme accreditation around the world. When evaluating the pros and cons of different models, the basic questions are: ‘Who is underwriting or funding the programme?’ and ‘How independent are the accreditation decisions from the influences of this funding?’ Many accreditation systems are fee-based, independent, free-standing entities. Often, a fixed annual fee is levied by the accrediting agency and/or individual fees for specific services are charged. This free-standing governance model raises a potential conflict of interest when the accreditation agency that charges fees for a service is the agency that prescribes the service itself. This ethical dilemma is not unlike the role of the physician who both treats the patient and prescribes a service (procedure or revisit) that financially benefits the prescriber. It is an issue that must be acknowledged and openly discussed.
Other governance models entail partnerships with professional associations and/or government agencies. These sponsoring partners often play a role in the financing of the accrediting agency, which lessens the cost burden to the schools. In this case, care must be taken to ensure that potential conflicts of interest do not arise, and that the decisions made by the accreditation committee are free of any interference from these stakeholders. One model of managing the potential conflict is that the group that makes the accreditation decisions is left to make and carry out those actions independently. The sponsoring associations or governmental agencies that are participating in the funding are blinded to these decisions but they can participate in policy deliberations that expand the scope of the work or have major budgetary implications.
Standards are the lens through which accreditation views the world
Standards are the statements that unify the examination system of accreditation. An important first step in developing a regional or national medical education programme accreditation system is the creation of a set of national standards. Ideally, this process includes a variety of stakeholder groups. There are many good examples of standards for medical education programmes, including standards developed by the WFME, the GMC for the UK, the AMC for Australia and New Zealand and the LCME for the USA/Canada. However, it is important to emphasise that, while existing standards can be good models for a specific region or country, it would be a mistake simply to adopt a set of pre-existing standards. Failing to go through the process of creating country-specific standards would miss the important process of gathering medical school faculty, practising physicians, students and representatives of the public to debate what is best for that specific country. The example described earlier related to Taiwan’s reformulation of its existing standards is an excellent model for others to follow. This process is more likely to lead to standards that are more widely accepted and understood, which is vital to the implementation of any accreditation system. In the end, the standards throughout the world may look far more similar than different; however, the process of reaching consensus for a given country or region is crucial.
The standards inform the documentation that needs to be collected by the survey team and included in the survey report. They also create the framework that facilitates objective decision making about accreditation status and consistent review across programmes. A brief review of how the WFME standards are formatted can assist in understanding the layers of meaning that are inherent in well-written accreditation standards.
International standards
The WFME standards were first published in 2003, modelled in part on the LCME standards, and were revised in 2012 (WFME 2012). The standards are structured according to the following nine areas: mission and outcomes, educational programme, assessment of students, students, academic staff/faculty, educational resources, programme evaluation, governance and administration, and continuous renewal.
The nine areas are defined as ‘broad components in the structure, process, and outcome of medical education’ (WFME 2012). Under these nine areas of standards are a total of 36 more specific sub-areas which correspond to performance indicators. In the case of the WFME, the standards are specified for each sub-area using two levels of attainment:
1 a basic standard is an area that must be met by every medical school;
2 a standard for quality development indicates a level that would be expected as a best practice for a medical school.
Annotations supplement the standards by providing examples or clarification of language of the standards. Listed on the WFME website (World Federation for Medical Education 2014) are 100 basic standards, 91 quality development standards and 121 annotations.
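This layered structure, with areas containing sub-areas, and sub-areas containing standards at one of two attainment levels, each optionally annotated, amounts to a simple hierarchical data model. The sketch below is a hypothetical illustration in Python; the class names and fields are illustrative conveniences, not WFME terminology.

```python
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    BASIC = "B"                # must be met by every medical school
    QUALITY_DEVELOPMENT = "Q"  # expected as best practice


@dataclass
class Standard:
    code: str                  # e.g. 'B 1.4.1'
    level: Level
    text: str
    annotations: list = field(default_factory=list)  # examples or clarifications


@dataclass
class SubArea:
    number: str                # e.g. '1.4'; 36 sub-areas in total
    title: str
    standards: list = field(default_factory=list)


@dataclass
class Area:
    number: int                # 1-9, e.g. 1 = 'Mission and outcomes'
    title: str
    sub_areas: list = field(default_factory=list)


def count_standards(areas, level):
    """Count standards at a given level across the whole document
    (the 2012 WFME document lists 100 basic and 91 quality development)."""
    return sum(
        1
        for area in areas
        for sub in area.sub_areas
        for std in sub.standards
        if std.level is level
    )
```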
As an example, consider the WFME basic standard related to educational outcomes (1.4):
The medical school must define the intended educational outcomes that students should exhibit upon graduation in relation to:
• their achievements at a basic level regarding knowledge, skills, and attitudes (B 1.4.1);
• appropriate foundation for future career in any branch of medicine (B 1.4.2);
• their future roles in the health sector (B 1.4.3);
• their subsequent postgraduate training (B 1.4.4);
• commitment to and skills in lifelong learning (B 1.4.5);
• the health needs of the community, the needs of the healthcare system and other aspects of social accountability (B 1.4.6).
(WFME 2012: 20)
The WFME quality development standard for that educational outcome area is:
The medical school should:
• specify and coordinate the linkage of outcomes to be acquired by graduation with that to be acquired in postgraduate training (Q 1.4.1);
• specify outcomes of student engagement in medical research (Q 1.4.2); and
• draw attention to global health related outcomes (Q 1.4.3).
(WFME 2012: 20)
These standards are ‘non-prescriptive’. This means that they do not specify exactly what or how something must be done, but instead they indicate the areas that must be addressed. It is up to the faculty of the medical school to determine how to ensure that students develop the skills of lifelong learning and the commitment to apply them. It is up to an accreditation survey team to determine if that is taking place appropriately at that school.
In contrast, 'prescriptive' standards are much more specific. As an example, the accreditation standards for postgraduate or residency training in the USA are prescriptive in nature. They indicate, based on how many new residents enter the programme each year, exactly how much time must be allocated to the programme director and how many assistant directors must be in place. There are pros and cons to both types of standards. Prescriptive standards are much easier to measure, verify and document. A disadvantage of prescriptive standards is that they do not allow variations based on the unique mission of a school, and they restrict mission-based innovation. Prescriptive standards tend to be more common when there are many hundreds of programmes to be accredited. Non-prescriptive standards are more difficult to measure and require assessment by experts familiar with that type of programme. On the other hand, they allow faculty to adjust their educational programme and resources to address the specific needs and missions of the school. Non-prescriptive standards can be frustrating to a university president who wants to know how many faculty she will need to establish a medical school. The answer, with non-prescriptive standards, is that it depends on the mission of the school, the nature of the curriculum, whether the school is based in the community or in a research institution, and so on. However, what a university president sometimes hears in that response is 'we can't tell you, but we will know it when we see it.' Non-prescriptive standards, while fostering creativity, require thorough guidelines for schools that are preparing for a review, and careful training of the survey team and the committee, in order to reduce subjectivity and support consistency in the decision-making process.
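The practical difference between the two types can be made concrete in code: a prescriptive standard reduces to a mechanical check, while a non-prescriptive standard can only gather evidence for expert peer judgement. In the Python sketch below, the numerical rule is invented purely for illustration and is not taken from any actual accreditation document.

```python
def meets_prescriptive_standard(new_residents_per_year: int,
                                director_fte: float) -> bool:
    """A prescriptive standard is machine-checkable.

    Hypothetical rule for illustration only: larger programmes must
    allocate more protected time to the programme director.
    """
    required_fte = 0.5 if new_residents_per_year > 20 else 0.3
    return director_fte >= required_fte


def review_non_prescriptive_standard(evidence: dict) -> str:
    """A non-prescriptive standard cannot be reduced to a formula.

    For a standard such as 'students develop commitment to and skills in
    lifelong learning', the system can only collate evidence for the
    survey team to weigh.
    """
    collated = "; ".join(f"{source}: {finding}"
                         for source, finding in evidence.items())
    return f"Refer for expert peer judgement. Evidence collated: {collated}"


print(meets_prescriptive_standard(new_residents_per_year=30,
                                  director_fte=0.4))  # False
```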
Survey teams and decision-making committees
The survey teams are primarily peer based and composed of different combinations of faculty from basic sciences, clinical sciences, medical school administration and, in a number of countries, students. The size of a team varies across countries, from as many as ten in Taiwan to three to six elsewhere. Teams should include 'peers' who are either physician practitioners or faculty involved in a medical school, but team characteristics vary based on the members' areas of expertise and the types of settings in which they practise or the nature of the schools in which they teach. Teams should be created based on the type of school that is about to be surveyed. For example, a research-intensive medical school will often have at least one team member (but not all) from another research-intensive medical school. In the construction of a team, careful attention must be paid to conflicts of interest, so as to manage real and perceived challenges to objectivity. In the USA, teams will occasionally include observers from other countries, affording them an opportunity to experience an accreditation survey visit. One unique aspect of team composition in the USA and Canada is the role of the fellow. A fellow is an individual who has been nominated by the dean of a medical school that will undergo its own accreditation survey visit in the next 2 or 3 years. Fellows are nominated to a team because they are designated to manage their own school's self-study in the future. This affords them the opportunity to see how to prepare for their own school's accreditation visit.
Team training is essential and particularly important when the standards are non-prescriptive. Cases using (anonymised) school information that highlight the complexities inherent in some standards are very effective in helping team surveyors to understand their meaning and application. Webinars and online asynchronous modules greatly improve the ability to ensure that all team members are prepared for their responsibilities. Also essential is a process to gather evaluation information after each visit about the quality of each team member. Evaluations from the school that was visited can be helpful; however, evaluations completed by team members about other team members are more useful. Important evaluation benchmarks include team members' knowledge of the standards, the quality of their preparation, their conduct during the survey visit and the quality of their written contributions.
The level of detail in a survey team report varies among accreditation agencies. Some use reports that document only the areas that have been judged to be out of compliance. Others require teams to write more extensive reports providing evidence as to the status of all standards. The committee or council that receives the report from the survey team and makes the accreditation status decision is again composed of peers but, in some countries, may also include students and members of the public who have no connection to medical education except as recipients of healthcare services. This committee is often staffed and managed by individuals experienced in accreditation, familiar with medical education, or both.
The number of staff needed to manage an accreditation system depends on the number of schools in the country or region, the length of the accreditation term (and the corresponding number of survey visits per year), the information technology and other resources available for this activity and the nature of the standards. As an example, the LCME, which utilises an 8-year accreditation term, makes decisions for 141 medical schools in the USA and 17 medical schools in Canada, and is overseeing the development of 18 new medical schools, some of which are included in the total numbers of schools for both countries and some of which are still in the very early stages of accreditation. To support this effort, the LCME has four full-time professional staff who, prior to joining the accreditation team, had extensive experience as faculty members and held various administrative roles in medical school deans' offices. Supporting them are six administrative staff. The LCME meets three times annually and each meeting is approximately 3 days in length. The agenda for each meeting includes the reports from approximately six to seven full surveys, 30–40 status (follow-up) reports, and four to ten proposals for class size expansion and new branch campuses. Each of the 19 members of the LCME serves as a primary or secondary reviewer for a number of these reports. For each report, the reviewer makes a general recommendation for accreditation status and follow-up to the entire committee, with ample time for discussion. Web-based documents with efficient reviewer reporting worksheets, which allow all members to preview the reviewer's recommendations, are essential in order to accomplish this volume of decision making in the allotted time frame. This process, while supporting efficient decision making, has implications for information technology and staffing needs.
The decision-making committee reviews the team report and then comes to a decision about the accreditation status of a given medical education programme. There are a number of choices that a committee may have when determining accreditation status. Withdrawal or denial of accreditation is always an option; however, it is generally exercised only in egregious situations and after all other remedial options have been exhausted. The safety of the public must be paramount at all times, and the welfare of the students at the institution being judged must also be considered, since removal of accreditation strands the students who are currently enrolled. When the situation is not serious enough to warrant withdrawal of accreditation, decisions are more focused on quality improvement. It is often said that an ideal outcome of an accreditation survey visit is that the medical school's internal self-study preceding the team visit identified areas in need of improvement that are consistent with the survey team's findings. This allows schools to begin to correct problem areas before the survey team arrives. For example, schools may have identified the need for renovations to certain facilities and have begun that work which, at the time of the visit, is not yet complete. The LCME uses the term 'in compliance requiring monitoring' to indicate that a school, having identified the problem during its self-study, has already started but not yet completed the corrective action. This status, and any areas that have been found to be non-compliant (i.e. not meeting one or more of the requirements of a standard), require some form of follow-up. In most cases, when there are only a few areas needing follow-up or there is no immediate threat to the integrity of the medical education programme, the follow-up consists of a written report within 1–2 years documenting that the issue has been resolved. In cases where the problems are more extensive or systemic, or where the situation threatens the quality of the medical education programme and student outcomes, another team may need to be sent to the school to provide a more thorough review of the corrections that have been put in place.
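The range of outcomes described above can be summarised schematically. The Python sketch below uses status names that echo the terms in the text, but the mapping logic is a deliberate simplification for illustration, not the LCME's actual decision rules.

```python
from enum import Enum, auto


class Finding(Enum):
    COMPLIANT = auto()
    COMPLIANT_REQUIRING_MONITORING = auto()  # corrective action under way
    NON_COMPLIANT = auto()                   # one or more requirements unmet


def follow_up(finding: Finding, systemic_or_threatening: bool = False) -> str:
    """Map a finding to the kind of follow-up described in the text.

    Illustrative only: real committees weigh each school's full context.
    """
    if finding is Finding.COMPLIANT:
        return "No follow-up required."
    if systemic_or_threatening:
        return "Send another survey team for a more thorough on-site review."
    return "Written status report within 1-2 years documenting resolution."


print(follow_up(Finding.NON_COMPLIANT))
# Written status report within 1-2 years documenting resolution.
```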
Confidentiality versus transparency
Just how transparent accreditation decisions should be is a frequent topic of conversation when directors of accreditation units gather. Those who argue for full disclosure of the team report and all decisions made by the accrediting committee make the point that the public, and especially the students, have the need and the right to know the strengths and weaknesses of a given educational programme. Some suggest that increased transparency would strengthen the quality improvement aspect of accreditation, because more people would be aware of the problems identified and could therefore contribute more effectively to solutions.
Others argue that publishing the survey report in full could hamper the directness with which problem areas are described and lead to less rigorous review. At the very least, all accreditation agencies should publish the accreditation status of each school and the date of its next scheduled visit. Accreditation agencies should also periodically publish reports providing an overview of the actions taken, the standards that most often give schools difficulty and examples of best practices in meeting standards.
Accreditation of medical education programmes and other healthcare provider degrees is attracting increasing attention around the world. As the movement of healthcare providers from one country to another becomes commonplace, so grows the need for systems that assure that the schools from which they graduate have provided their students with the basic medical knowledge, skills and attitudes. The WFME asserts that medical education accreditation should be a local phenomenon and provides both examples of standards and a recognition process for existing accrediting bodies. As described in the case studies from Taiwan and Korea, international consultation can be useful, but ultimately the standards and procedures must be specific to the country or region. The Indonesian case study provides an example of an accreditation process in mid-development and shows that writing the standards and procedures is the easy part; the hard part is establishing governance and authority across the diverse schools within the country. Quality improvement and assurance are, at the end of the day, the most important aspects of accreditation. The case study from the USA shows how standards and external peer review can lead to improvements in the learning environment, and the Canadian case study demonstrates how standards can be used to create the blueprint for a new medical school.
Each school, in order to prepare for accreditation by external peer review, must have in place systems that collect information to answer its own questions about how well the school's missions, goals and learning objectives are being met. This is where systems for programme evaluation become paramount.
Programme evaluation to support accreditation and quality assurance
While accreditation occurs at specified intervals, internal programme evaluation should be ongoing. Medical schools should have systems and processes in place to collect and interpret quantitative and qualitative data about educational programme quality. Which data are collected, and when, should be based on a set of overarching questions that administrators and other stakeholders have identified about educational processes and outcomes. For example: is the programme meeting its missions, delivering a quality educational programme, appropriately supporting its students and faculty, and mobilising its resources in the most efficient and effective ways? The questions that guide programme evaluation, and the data collected to answer them, may be derived in part from accreditation standards. However, medical schools should create their own internal set of process and outcome questions that focus specifically on institutional priority areas.
Medical school programme evaluation systems for external and internal purposes should be designed with the following elements in mind:
1 what and when to evaluate;
2 what methods can be used in the evaluation;
3 what resources are needed to implement the evaluation system;
4 what evaluation data support decision making;
5 how the school will ensure that the changes needed to support quality improvement are implemented.
Accreditors typically expect that schools evaluate both the outcomes and the processes of the medical education programme, and such a framework is useful for internal quality assurance purposes as well. Operationally, outcomes evaluation means determining the graduates' success in achieving the objectives of the medical education programme, which, in turn, should be derived from the mission and goals of the medical school. Making such a judgement requires schools to define the educational programme objectives in outcome-based terms and to identify the measures that will be used to determine students' attainment of these objectives. Such outcomes typically include student knowledge, skills and professionalism, and could include mission-based outcomes such as specialty and practice choices and practice locations. Identifying how well the educational programme is working (i.e. the process dimension) includes evaluating student satisfaction with areas such as individual courses and clerkships and the curriculum as a whole, student support and counselling services, access to faculty, teaching facilities, and information and library resources. Faculty satisfaction may also be evaluated as an indicator of how well the programme is working. The timing of such evaluations may vary: schools typically evaluate performance in and satisfaction with courses and clerkships, and satisfaction with student services, on an annual basis, while students' attainment of the educational programme objectives needs to be evaluated when a cohort completes the curriculum.
In new schools, and in schools implementing a new curriculum, the approach to evaluation should be structured to provide incremental information that can be used for formative purposes. For example, the quality of first-year courses would be evaluated to see what changes, if any, are needed (for example, in the organisation or density of content), so that changes can be made immediately. Evaluation could then determine satisfaction with, and performance in, the second-year courses and whether the first year provided adequate preparation for the second. This incremental approach allows ongoing evaluation of each segment of the curriculum both in itself and in relation to the segments that precede and follow it. As a new curriculum reaches its conclusion, a summative evaluation of both process and outcomes can be conducted.
How to evaluate
To evaluate the extent of success in meeting educational programme objectives, schools typically use the results of a variety of internal assessments of student performance, such as written examinations in courses and clerkships; observations of hands-on clinical skills; observations of cognitive skills, such as clinical reasoning; and observations of professionalism. The assessment tools, including examinations, checklists and rating scales, should provide, as far as possible, valid and reliable information. The school should have considered how each of the assessments contributes to a summative judgement about whether the individual student and the student body as a whole meet the educational programme objectives. This means that, ideally, the content of the items in the assessment instruments should be linked to the relevant objectives of the medical education programme. In addition to internal assessments, some countries have national examinations that allow normative comparisons of the school’s students and graduates with a national cohort. For more information on assessment methods, consult Chapters 17, 18 and 19 in this volume.
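As an illustration of such a blueprint, the following minimal sketch checks that every programme objective is linked to at least one assessment item; the objective codes (EPO-1 to EPO-4), item identifiers and data structures are hypothetical, not a prescribed tool:

```python
# Minimal sketch: verify that every educational programme objective is
# assessed by at least one item somewhere in the curriculum.
# Objective codes (EPO-*) and item identifiers are hypothetical examples.

from collections import defaultdict

# Each assessment item is tagged with the programme objective(s) it addresses.
assessment_items = [
    {"id": "Y1-exam-q14",      "instrument": "written exam",   "objectives": ["EPO-1"]},
    {"id": "OSCE-station-3",   "instrument": "OSCE checklist", "objectives": ["EPO-2", "EPO-4"]},
    {"id": "clerkship-form-7", "instrument": "rating scale",   "objectives": ["EPO-4"]},
]

programme_objectives = ["EPO-1", "EPO-2", "EPO-3", "EPO-4"]

coverage = defaultdict(list)
for item in assessment_items:
    for objective in item["objectives"]:
        coverage[objective].append(item["id"])

for objective in programme_objectives:
    items = coverage.get(objective, [])
    if items:
        print(f"{objective}: assessed by {len(items)} item(s): {items}")
    else:
        print(f"{objective}: NOT ASSESSED -- blueprint gap to address")
```

Run against real blueprint data, a report of this kind gives a quick view of any programme objective that lacks a linked assessment.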
Student satisfaction can be assessed through internal questionnaires administered to an entire cohort of students or to a selected group. Schools should consider several issues in using such measures. Schools should develop mechanisms to ensure that response rates are high enough to provide a reliable evaluation; strategies include requiring students to complete the evaluation or providing incentives to comply, and informing students that the results are used to bring about change is also useful in motivating them to respond. The frequency and timing of evaluations are also important: questionnaires that are too frequent, too numerous or too long may decrease response rates, while too long an interval after a course has concluded can make the results less reliable. To decrease the response burden on any given student, schools sometimes use a subset of students to evaluate a course, dividing the class randomly so that all courses are covered. Students typically desire confidentiality when evaluating courses or teachers, so any group should be large enough, or the process structured, so that an individual respondent cannot be identified. Finally, questionnaires often provide limited opportunities for students to explain their ratings; focus groups can provide additional in-depth information about areas that seem to require follow-up.
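The random division of the class described above might be sketched as follows; the course names, class size and minimum group size for anonymity are illustrative assumptions:

```python
# Minimal sketch: randomly divide a class so that each course is evaluated
# by a different subset of students, while keeping each subset large enough
# that individual respondents cannot be identified.

import random

def assign_evaluators(student_ids, courses, min_group_size=20):
    """Partition students across courses; refuse groups too small for anonymity."""
    shuffled = list(student_ids)
    random.shuffle(shuffled)
    group_size = len(shuffled) // len(courses)
    if group_size < min_group_size:
        raise ValueError("Groups would be too small to preserve anonymity")
    assignments = {}
    for i, course in enumerate(courses):
        # The last course absorbs the remainder so every student evaluates something.
        end = (i + 1) * group_size if i < len(courses) - 1 else len(shuffled)
        assignments[course] = shuffled[i * group_size:end]
    return assignments

# Example: a class of 120 students split across three first-year courses.
groups = assign_evaluators(range(120), ["Anatomy", "Physiology", "Biochemistry"])
for course, students in groups.items():
    print(course, "evaluated by", len(students), "students")
```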
Resources to support programme evaluation
Accreditation standards often require that the medical school have sufficient resources to support the management and evaluation of a medical education programme. While such standards may specify only some of the needed resources, the internal quality assurance system should identify all the resources that the school believes to be important in educational programme planning, implementation and evaluation. Even in the absence of an accreditation mandate, a medical school should make resources available to support its internal quality assurance activities. There should be personnel, either located in a quality assurance office or within the medical school administration, who can develop and maintain the data collection system needed to determine whether the educational process is working smoothly and the educational programme outcomes are being met. This includes expertise in test development, questionnaire development and other evaluation strategies, such as focus groups. In addition, information technology staff can support the online delivery of formative and summative assessments and of course and clerkship evaluation questionnaires, as well as ensure data security. Information technology expertise is also needed to implement a curriculum database that allows the school to identify where in the curriculum subjects related to the educational programme objectives are taught and to determine the degree of content coverage for each objective. The quality assurance staff should have appropriate administrative support and direct access to the individuals and groups charged with making decisions about educational programme quality, such as the curriculum committee, and to relevant administrators.
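A curriculum database of the kind described might be queried along the following lines; the table layout, session entries and objective codes are illustrative assumptions rather than any standard schema:

```python
# Minimal sketch: query a curriculum database to see where each programme
# objective is taught and flag objectives with thin content coverage.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sessions (course TEXT, year INTEGER, topic TEXT, objective TEXT);
    INSERT INTO sessions VALUES
        ('Doctoring 1',        1, 'Informed consent',  'EPO-4'),
        ('Medicine clerkship', 3, 'Breaking bad news', 'EPO-4'),
        ('Pathology',          2, 'Neoplasia',         'EPO-1');
""")

# Count teaching sessions per objective; an objective taught only once
# may signal a coverage gap worth investigating.
for objective, n in conn.execute(
    "SELECT objective, COUNT(*) FROM sessions GROUP BY objective ORDER BY objective"
):
    flag = "  <-- review coverage" if n < 2 else ""
    print(f"{objective}: taught in {n} session(s){flag}")
```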
Evaluation data that support decision making related to programme quality
Accreditation standards typically require that data about programme quality, including process and outcomes, be used for programme improvement. This should be the case for internal quality assurance activities as well. Decisions about quality should be made taking into account the following four categories of data.
Clear policies and processes
There should be clear policies and processes related to student assessment, advancement and graduation, and these should be applied consistently to individual students. These processes yield data, such as course pass rates and graduation rates, that are useful in evaluating such things as admission requirements.
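A minimal sketch of deriving such rates from student records, with an assumed (hypothetical) record structure:

```python
# Minimal sketch: compute course pass rates and an on-time graduation rate
# from student records. The record structure is an illustrative assumption.

records = [
    {"student": "A", "passed": {"Anatomy": True,  "Physiology": True},  "graduated_on_time": True},
    {"student": "B", "passed": {"Anatomy": False, "Physiology": True},  "graduated_on_time": False},
    {"student": "C", "passed": {"Anatomy": True,  "Physiology": True},  "graduated_on_time": True},
]

courses = sorted({c for r in records for c in r["passed"]})
for course in courses:
    results = [r["passed"][course] for r in records if course in r["passed"]]
    print(f"{course} pass rate: {sum(results) / len(results):.0%}")

graduation_rate = sum(r["graduated_on_time"] for r in records) / len(records)
print(f"On-time graduation rate: {graduation_rate:.0%}")
```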
Evaluating the curriculum as a whole
The committee responsible for the curriculum (i.e. curriculum committee) should periodically conduct a review of the outcomes of the curriculum. The review should be structured to answer several questions related to assuring that the curriculum is a coherent whole:
• To what extent does student performance indicate that the curriculum is achieving its objectives? How effective are the systems to address areas where performance is not optimal? What is the basis for any gaps in performance?
• To what extent is the content of the curriculum coordinated and integrated so that educational programme objectives are sufficiently addressed? To what extent are relevant content areas or skills taught sufficiently and appropriately sequenced across the curriculum?
Evaluating the individual segments of the curriculum (courses and clerkships)
Data should be collected about student performance in, and satisfaction with, individual courses and clerkships, as well as larger curriculum segments. These data cover the process as well as the outcomes of teaching. For example, it is useful to know to what extent students think a course was well organised and addressed its stated objectives, and the extent to which teachers were prepared, available and organised. Students can also comment on the organisation of content and learning activities within a given year (horizontal integration) and across years (vertical integration).
Non-curricular outcomes
Based on their missions or the mandates of the country, province or state that regulates them, medical schools may have expected outcomes related to such things as the specialty choice or practice location of graduates. To determine whether these outcomes have been met, schools will need to collect data from students and graduates and/or from external agencies within the region. If the school finds that these outcomes are not being met (for example, graduates are not choosing primary care practice or are not practising in the region), it will need to determine the steps it can take, and those that must be taken externally, to improve the outcome.
Decision making and programme improvement
Information about the process and outcomes of courses and curriculum segments should be reviewed by course directors, leaders of the relevant departments and the committee responsible for the curriculum. A remediation plan for any identified deficiencies should be developed, and there should be follow-up on the outcomes of the changes that are made. These outcomes should be reported as well, to ensure that the remediation plan has been implemented appropriately and is effective.
Decisions about the curriculum as a whole should include a determination of whether each of the educational programme objectives has been attained. This requires the curriculum committee and responsible administrators to collate the data from across the curriculum that relate to each objective. For example, for an objective related to professionalism, there may be information from student performance in small groups in one year, from preceptor assessments in another year, from Objective Structured Clinical Examinations (OSCEs) and from physician evaluations during the clerkships. There should also be a review of the professionalism content taught across the years, to determine whether there are gaps or redundancies, and a review of student satisfaction with the teaching. All these data are assembled to determine whether the outcomes related to professionalism have been achieved and whether the process to support professionalism teaching is working well.
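Collating these multiple sources into a single judgement might look like the following sketch; the sources, rating scales and the 75 per cent threshold are illustrative assumptions, not a standard:

```python
# Minimal sketch: collate evidence for one programme objective
# (professionalism) from several sources across the curriculum and
# judge whether the objective has been attained.

evidence = {
    "Year 1 small-group tutor ratings":  {"mean": 4.2, "scale_max": 5},
    "Year 2 preceptor assessments":      {"mean": 3.9, "scale_max": 5},
    "Year 3 OSCE professionalism items": {"mean": 7.8, "scale_max": 10},
    "Clerkship physician evaluations":   {"mean": 4.4, "scale_max": 5},
}

THRESHOLD = 0.75  # assumed cut-off: each source must average at least 75%

# Normalise each source to a 0-1 scale so different instruments are comparable.
normalised = {source: d["mean"] / d["scale_max"] for source, d in evidence.items()}

for source, score in normalised.items():
    flag = "" if score >= THRESHOLD else "  <-- follow up"
    print(f"{source}: {score:.0%}{flag}")

print("Professionalism objective attained:", all(s >= THRESHOLD for s in normalised.values()))
```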
To accomplish this comprehensive review of curriculum outcomes and process, the curriculum committee should have the logistical support to gather and analyse the appropriate data. Once a determination of success in meeting objectives has been made, a plan to address any gaps can be developed and implemented. For example, if the evaluation of the professionalism objective identifies that clinical ethics is not being taught, the curriculum committee should have the authority to mandate that this subject be added in an appropriate location.
Take-home messages
• Both accreditation and internal quality assurance require and depend on a robust process of ongoing programme evaluation.
• This requires planning to develop the system, resources to implement it and appropriate authority, delegated to the responsible individuals and groups, to ensure that needed changes are made.
• As the WFME emphasises, medical education accreditation should be a local phenomenon. Countries establishing their own accreditation systems should draw from the experience of established programmes but they should write their own standards based on their regional healthcare issues, cultural factors and resources.
• As countries come to see medical education accreditation as a way to improve the healthcare workforce, the potential conflicts of interest inherent in any funding scheme need to be managed carefully.
• Countries that write their own standards often initially create prescriptive standards, such as square-footage formulas per student. As experience grows and the training of site visitors improves, it is recommended that standards evolve to draw more on peer judgement and to focus on educational outcomes.
Bibliography
Council for Higher Education Accreditation (2010) The value of accreditation. Online. Available HTTP: http://cihe.neasc.org/downloads/ValueofAccreditationCHEA.pdf (accessed 20 March 2014).
Educational Commission for Foreign Medical Graduates (2010) ECFMG to require medical school accreditation for international medical school graduates seeking certification beginning in 2023. Online. Available HTTP: http://www.ecfmg.org/forms/9212010.press.release.pdf (accessed 11 August 2014).
Strasser, R.P., Lanphear, J.H., McCready, W.G., Topps, M.H., Hunt, D.D. and Matte, M.C. (2009) ‘Canada’s new medical school: The Northern Ontario School of Medicine: Social accountability through distributed community engaged learning’, Academic Medicine, 84(10): 1459–64.
Strasser, R.P., Hogenbirk, J.C., Minore, B., Marsh, D.C., Berry, S., McCready, W.G. and Graves, L. (2013) ‘Transforming health professional education through social accountability: Canada’s Northern Ontario School of Medicine’, Medical Teacher, 35(6): 490–6.
University of California, Davis (2014) Principles of community. Online. Available HTTP: http://catalog.ucdavis.edu/community.html (accessed 23 September 2014).
World Federation for Medical Education (2005) Promotion of accreditation of basic medical education. Online. Available HTTP: www.wfme.org/accreditation/whowfme-policy (accessed 20 March 2014).
World Federation for Medical Education (2012) Basic medical education: WFME global standards for quality improvement: The 2012 revision. Online. Available HTTP: www.wfme.org/news/general-news/263-standards-for-basic-medical-education-the-2012-revision (accessed 20 March 2014).
World Federation for Medical Education (2014) Basic medical education. Online. Available HTTP: www.wfme.org/standards/bme (accessed 23 September 2014).