Reducing Prescribing Errors In Acute Care Settings Nursing Essay

Abstract

Prescribing errors affect up to 50% of admissions in NHS acute healthcare settings. Although most errors are detected before they have an impact on the patient, prescribing errors are still the third biggest cause of harm to patients in acute trusts. Recent research found the mean error rate in prescriptions to be 8.9%, with FY2 trainee doctors having the highest error rate at 10.3%.

Aim: The aim of this research was to discover current educational practice intended to reduce prescribing errors by junior doctors in acute care settings. Method: An online questionnaire was sent to the Chief Pharmacist and the Director of Medical Education in 164 teaching and non-teaching NHS acute trusts in England. Responses were received from 85 trusts. Findings: Almost all trusts (83/85) offered some form of educational intervention to doctors. There was wide variation across trusts in the content, method of delivery and assessment of prescribing competency. The most common forms of educational intervention were face-to-face teaching and e-learning. Most trusts provided the educational intervention to trainee doctors (FY1, FY2), though some also provided it to higher grades. The majority of trusts delivered the training at the start of employment. Respondents rated training as moderately effective (mean 3.1/5.0).

Conclusion: This study has established that most trusts are actively engaged in education on this issue. Further research is needed to establish the most effective teaching method and educational content of training aimed at reducing prescribing errors. This should be combined with outcome monitoring and feedback mechanisms to evaluate the effect of educational interventions on error rates in prescribing in acute healthcare settings.

Introduction

Throughout the history of the National Health Service (NHS), errors (defined below) by medical staff that have consequences for the health of patients within acute care settings have been a recognised problem. The scale of the problem was highlighted in a report by the Chief Medical Officer which identified an estimated 60,000 to 225,000 patients per year who experience serious harm or fatality as a result of an intervention made by healthcare staff (1).

This research investigated a single type of medication error: prescribing error. A systematic review of 24 studies (2) by Ross et al found an incidence rate of 2-514 prescribing errors per 1000 items prescribed. This demonstrates that the occurrence of errors is a significant problem as up to 50% of items prescribed could contain a potentially harmful error. Although errors can occur at any stage of a patient’s journey through the health care system, most research focuses on the role that doctors play in these inaccuracies. Whilst it is recognised that errors can occur in any profession and at any level, junior doctors are at highest risk, being accountable for 91% of prescribing errors (2). Although 40% of junior doctors secure a placement rotation in general practice, the majority of their foundation training will be spent in a trust setting (3). This exploratory study researched the educational interventions to reduce prescribing errors currently in place in acute trust settings throughout England.

2.1 Impact of prescribing errors

The EQUIP study by Dornan et al identified 11,077 errors in 124,260 prescriptions in a survey of 19 trusts (4). The severity of the effect can depend on the type of prescribing error the patient is exposed to (5). The EQUIP report classified 2% of the errors as potentially lethal, 5% as potentially serious, 53% as potentially significant and 40% as minor (4). The errors included prescribing drugs to which patients had a known allergy, dangerous combinations of medicines and incorrect dosages. Prescribing errors are the third highest cause of harm to trust patients, after falling out of bed and slipping on floors (4). There is therefore an urgent need to reduce the frequency of errors in the interests of patient safety.

If a patient is subjected to an error, it not only has the potential to cause physical harm to the patient but other stakeholders are also affected. It can have a detrimental impact on other patients as the health professional’s attention will be distracted while trying to rectify the errors (6). In addition, a prescribing error, in common with any other medical error, may have a serious detrimental effect on the trust as a whole. Errors can jeopardise the confidence of the patient in the trust and its medical staff. There could be legal action, financial costs, loss of reputation and other esteem factors. Secondary consequences could include difficulty in attracting new staff, funding issues and loss of public support. Furthermore, an error can have a detrimental impact on the prescriber resulting in a loss of confidence, harm to reputation and possible career regression or in serious cases, prosecution (7).

It is therefore highly desirable from the perspective of all stakeholders in the NHS that effective steps are taken to minimise prescribing errors.

2.2 Causes of errors

There are many reasons why a junior doctor may make or contribute to a prescribing error, ranging from lack of clinical knowledge to tiredness and stress (8).

Firstly, prescribing errors may be compounded by a lack of preparation during undergraduate medical education, where the emphasis on clinical pharmacology has seemingly diminished as it is displaced by an increased emphasis on social sciences within the curriculum (9). A study of 2413 medical students graduating from 25 medical schools between 2006 and 2008 found that 74% of participants felt too little clinical education was provided and, as a result, only 38% felt ‘confident’ about prescribing (10). This is reflected in junior doctors’ perceptions of a deficiency in prescribing competency and confidence at graduation (11).

Secondly, with an increasingly ageing population, complex polypharmacy regimes are prescribed to cope with multiple co-morbidities (9). These regimes incorporate modern drugs with increasingly intricate modes of action that may far outstretch the knowledge and experience of a junior doctor, leading to a source of potential errors (9).

Other factors that can contribute towards an error being made in acute trust settings can be highly subjective and depend on the working environment in which the junior doctors are practising. These can include stress and fatigue, a demanding workload and poor communication between healthcare professionals (4).

There may also be an assumption that a colleague, such as a pharmacist reviewing the prescription or a nurse administering the medication, will detect any errors (12), and so less care may be taken when completing prescribing duties.

It would be too simplistic to assume that errors occur as a result of a junior doctor experiencing just one complicating factor. In reality it is likely to be a consequence of multiple complications; it is this complexity that makes it difficult to create solutions to prevent future events (4).

2.3 Definition of an error

In undertaking this research, the first challenge was to establish an accepted definition of a prescribing error. The word ‘error’ is an ambiguous and emotive term that can encompass an array of situations. Published papers categorise prescribing errors as a form of medication error; however, the literature revealed a lack of consensus regarding the definition of a prescribing error (13). A frequently used definition is based on the description by Dean (14, 15), which offers a detailed insight into what trusts may identify as a prescribing error: "A clinically meaningful prescribing error occurs when, as a result of a prescribing decision or prescription writing process, there is an unintentional significant (1) reduction in the probability of treatment being timely and effective or (2) increase in the risk of harm when compared with generally accepted practice" (16). An alternative approach is revealed by studies that listed the types of action that would be considered an error, rather than relying solely on a definition of the term itself (2). These included actions such as the prescriber’s signature missing from a drug chart or an incorrect drug selection (17).

The diversity of definitions and absence of a standardised way of reporting errors leads to problems in determining incidence of prescribing errors (2), thus the importance of the issue and the need for educational interventions are not well understood.

The variable reporting of the incidence of prescribing error in the literature (2) makes determining the extent of the problem complex for acute trusts. This variation can be due to several reasons, including an absence of a structured system to report the errors or due to medical staff being reluctant to divulge their mistakes for fear of repercussions or being unaware of mistakes (18). The problem of reporting errors is one that affects all healthcare professionals. Walton highlighted that colleagues were not willing to attribute an error to a specific team member for fear of damaging inter-professional relationships (19). This demonstrates the need for a blame-free culture to be adopted in the healthcare system (19).

2.4 Junior doctors

The phrase ‘junior doctor’ also requires definition. The term is often applied to many trainee doctors working within acute medical settings without a true understanding of who fulfils this role. There are several stages to a doctor’s postgraduate education (20). As medical graduates, they must undertake a two-year foundation course, FY1 and FY2, with each year consisting of three four-month rotations around different clinical specialities. Following successful completion of the two years, full General Medical Council registration is permitted. Newly registered doctors are then able to specialise for a further three to seven years, dependent on the speciality chosen. During these years, doctors are identified as Senior House Officers (SHOs) and complete Specialty Training (ST), classified as ST1, ST2 and so on. Since 2005, completion of the ST years has marked the end of a doctor’s postgraduate education, with the award of a Certificate of Completion of Training signifying that they can work at the level of a consultant or general practitioner (21). A review of publications in which junior doctors were the study participants (2, 8) revealed a lack of clarity in the way the title ‘junior doctor’ is applied. Few studies identified the stage of postgraduate training of the participants, so it is questionable whether the results of one study can be compared with another.

2.5 Preventing prescribing errors: current practice in acute healthcare settings

There is a large body of published research available, covering many areas including the causes, prevalence and types of errors that occur. However, there is a distinct lack of evidence in the literature regarding the steps being taken within acute care settings to prevent prescribing errors.

Current practices consist of a combination of doctors’ clinical knowledge, recognition of errors and improving competence in prescribing (22).

Educational measures are an essential tool as they can address each of the above factors that contribute to the incidence of prescribing errors. They aim to improve doctors’ competence and should therefore prevent the errors from occurring rather than rectifying errors that have already taken place (23).

2.6 Educational interventions in acute care settings

Evidence in the literature suggests a variety of educational interventions are being used in acute care settings throughout the UK. However, due to the limited number of papers published, it is uncertain how many acute trusts are using such interventions to aid junior doctors with prescribing.

The World Health Organisation has produced a ‘Guide to Good Prescribing’ that details a six-step process for junior doctors to follow in order to prescribe safely and effectively. The guide offers a standardised approach to prescribing and clinical examples of how to complete the process (24). It has been shown to have a constructive and lasting impact on those using it, not only nationally but internationally (11). However, this impact was demonstrated only with case scenarios rather than practical prescribing (11).

Tutorials are a form of educational intervention that has been shown to have an impact on junior doctors’ prescribing ability. A controlled study by Coombes et al consisted of a series of eight 90-minute problem-based tutorials covering therapeutic topics such as anticoagulants, analgesia and taking an effective drug history (25). A multidisciplinary team including a pharmacist, nurses and a senior doctor delivered the interactive tutorials. During the sessions, the junior doctors undertook a range of activities such as assessing videos, practical prescribing, detecting errors in peers’ work, and participating in role plays and discussions. Assessment of the intervention found that tutorial participants achieved higher scores than those in the control group who did not receive the tutorials (25). The limitation of this study is that the intervention was carried out in Australia, where the healthcare system differs from the UK’s (26); however, the techniques used to educate the junior doctors and the positive results found remain a valid example of an educational intervention used to reduce prescribing errors.

An alternative educational intervention described in the literature details how audit results can be fed back to junior doctors to encourage improved prescribing. Audit feedback is normally given on an impersonal level. However, in this study by Hadjianastassiou et al both personal and peer comparison feedback was given to 11 SHOs. The results of this study demonstrated that this individual advice was significantly more effective in reducing prescribing errors than the traditional departmental approach (27). This may be because individuals do not necessarily identify themselves with the errors being highlighted.

Junior doctors also have access to online teaching and learning resources in the form of e-learning. In a study by Gordon et al, a short computer programme was developed based on the medical undergraduate curriculum, containing optional self-assessments and taking 1-2 hours to complete. The self-administered e-learning package was delivered to 169 volunteer junior doctors from the North Western Foundation School (an NHS academic training body) (28). The prescribing competency in paediatrics of the study participants (intervention and control) was assessed pre- and post-intervention; participants also completed a confidence questionnaire. Pre-intervention, there was no noticeable difference between the assessment scores. However, after completing the e-learning package, the intervention group demonstrated increased ability in prescribing tasks, and improved confidence was reflected in the questionnaire (28). This paper by Gordon et al, like many others (29, 30), reflects the significant impact an educational intervention can have on prescribing errors within a paediatric setting. Prescribing for children is fraught with potentially dangerous errors, hence the abundance of research into aiding junior doctors with this difficult task. Nevertheless, the findings of Gordon et al are transferable to the adult sector of acute care and have identified a gap in knowledge requiring further research.

A study at University Hospitals Birmingham NHS Foundation Trust used a ‘Junior Doctors Dashboard’ system to provide trainees with individual feedback and peer comparison of performance. This controlled trial used an electronic system to monitor junior doctors’ prescribing over a period of four months to see whether they responded to alerts and notices highlighted by the system, such as warnings of an excessive dose or potentially serious drug interactions (31). This is a further example of how effective feedback can be implemented as an intervention to guide safe prescribing.

In summary, there are many types of educational intervention currently available to junior doctors within acute care trusts varying in content, nature and duration. Those described in this literature review are examples that provide sufficient detail to confer a realistic idea of how the interventions are utilised.

2.7 Effectiveness of educational interventions

Delivering educational interventions to prescribing staff requires a considerable investment of resources and commitment from all involved. It is therefore essential to determine the effectiveness of an intervention in reducing prescribing errors in practice. One method of judging effectiveness is the Kirkpatrick model, a four-level evaluation model for determining the success of an intervention. The first level considers the satisfaction of the participants with the intervention; the second asks whether the participants learned anything from taking part; the third asks whether participants apply the information gained in their clinical practice as doctors. Finally, if the intervention has an impact that alters practice across the whole trust, it is deemed to have the highest value (32).

This study will contribute new information to establish the educational practices that acute trusts are providing to their junior doctors in order to improve prescribing competency and reduce the frequency of errors being made.

Aims and objectives

3.1 Aim

The aim of this study was to determine the types, if any, of educational interventions used to reduce prescribing errors within acute care NHS trusts in England. The study focused on interventions provided to junior doctors in teaching and non-teaching acute care trusts.

3.2 Objectives

Four key objectives were defined:

To establish which acute care settings provide educational interventions to junior doctors;

To determine the nature and educational content of the interventions that were used;

To identify when and by whom the educational intervention was delivered;

To ascertain what differences, if any, were present in the educational interventions offered between teaching and non-teaching acute care trusts.

Method

4.1 Literature searches

The publications used to develop the literature review were sourced from a variety of online scientific databases: PubMed, EMBASE, MEDLINE and Ovid. The search terms used to select the most relevant papers included: intervention; junior doctor; educational intervention; acute care setting; error; medication error; prescribing error; e-learning; and tutorials. The searches often returned over 10,000 results, so an advanced search using Boolean logic was employed to combine essential search terms and narrow the results to more applicable papers.

4.2 Method

Data were collected from respondents using an online questionnaire. This method was selected as the data collection tool for several reasons. Firstly, it is a cost-effective (33) means of contacting a large number of potential respondents across a wide geographical spread (34). It also reduces some forms of bias that can arise with alternative methods, such as interviewer bias affecting a respondent’s answers in interviews (34). However, it does not eliminate all bias: there is an inherent risk that respondents complete a questionnaire in the way they perceive the researcher wants it to be completed (a form of social desirability bias). Finally, a structured questionnaire produces unambiguous data that allow for simple analysis (34). It was also recognised that respondents would be familiar with screen-based questionnaires; the online format allowed them to participate at a time convenient to them and produced an instant response compared with a postal questionnaire.

Other possible data collection methods included telephone or face-to-face interviews with stakeholders, an audit of trusts’ ‘near miss’ logs of errors, or direct observation of junior doctors’ training (34). These were not used as they were deemed too time consuming to obtain a representative sample size for the purposes of this study.

The questions were derived following a review of published research regarding errors in trust settings. This review also provided an understanding of the types of educational interventions in use on which to base the questions. From the literature, it became apparent that several key terms used within this study have ambiguous definitions, each offering an alternative interpretation. For the purpose of this study, the definition of a prescribing error was derived from Dean et al (16); the full definition can be found in appendix B. Another term essential to define for the purpose of this questionnaire is ‘junior doctor’. The structure of the medical profession is discussed above in the literature review. After assessing this structure, it was decided that participants would be classified as junior doctors if they were working at any sub-consultant grade. This includes Foundation Year 1 and 2 medical graduates, senior house officers and registrars. Furthermore, it had to be established what constituted an educational intervention. It was concluded that trust formularies and guidelines, such as antibiotic policies, were not interventions as these are standard procedures that must be followed by all doctors in each trust. Their purpose is to prevent resistance and ineffective prescribing, but no specific educational action is taken to reduce prescribing errors after the distribution of such material.

Following the clarification of these terms, a questionnaire was developed based on evidence collated from multiple publications. Questions at the beginning of the questionnaire captured generic information about the respondent and the trust they worked for, allowing easy identification of respondents when analysing the results. The remaining questions were grouped into sections based on the type of educational intervention: e-learning, teaching sessions, study guides and assessment. A concluding page of questions was incorporated to gain more qualitative data from respondents, covering aspects of medical education such as feedback to junior doctors, the respondents’ opinions of the effectiveness of any interventions undertaken and whether this was supported by any quantitative data the trust collected.

The questionnaire was developed using Selectsurvey.Net, an online programme affiliated with the University of Manchester. This programme permitted the survey to be sent as a link within an email to the participants, together with a brief cover letter, a participant information leaflet and a list of definitions of ambiguous terms. By completing the survey the participants gave implied consent, so no consent forms were sent out.

Before the questionnaire was sent out to the intended participants, the questionnaire was reviewed by Dr. Morris Gordon, a paediatric consultant with experience in this area of research. After evaluating the questionnaire, a number of suggestions were given to make the questionnaire more user-friendly. Subsequently, some minor alterations were made after considering the advice given and the questionnaire was piloted at Central Manchester Trust via Dr. Penny Lewis, the project supervisor.

The survey was made available for two months to allow the participants sufficient time to respond. After three weeks, a reminder was sent to the participants who had not responded to the email invitation asking them to complete the survey. A second reminder was not sent as the original intended respondent often forwarded the questionnaire to a more relevant member of staff who could complete it more accurately. A copy of the questionnaire, cover letter, participant information leaflet and reminder email can be found in appendices A, B and C.

4.3 Participant sample

The questionnaire was sent to two members of staff at each acute care trust within England. Trusts outside England were excluded because of the time restriction allocated to the research study: it would not have been practical to contact all trusts in the United Kingdom and collect and analyse a reliable set of data within the short time frame. Although the questionnaire centred on junior doctor practice, the survey was sent to the Chief Pharmacist and the Director of Post Graduate Education within each trust. This ensured that the questions regarding the education of doctors, and any future plans to develop an educational scheme, were answered by the personnel responsible for administration and delivery. The survey was sent to 164 acute care trusts; thus, the total number of potential participants was 328.

Findings

This research was conducted as a group project. Therefore, not all aspects of the findings are covered in depth in this report. The areas of analysis that will be focused on in this project are e-learning, assessment and the open questions that concluded the questionnaire.

5.1 Response rate

The questionnaire was made available to participants from December 2012 to January 2013, after which it was closed and all responses were collated and analysed. A total of 95 responses (29.3%) were collected from the online survey tool. As stated in the method, the questionnaire was sent to two members of staff at each trust to increase the likelihood of a response; two responses were received from 10 trusts. For these trusts, the responses were combined to create one comprehensive response per trust. Therefore, 85 questionnaire responses from different trusts were taken forward for analysis, a 25.9% response rate from the original 328 questionnaires sent out.
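The collation step described above, merging paired questionnaires into one record per trust, can be sketched in Python; the trust identifiers and respondent roles below are hypothetical examples, not the study's data:

```python
from collections import defaultdict

# Hypothetical response records: (trust identifier, respondent role).
# Trust "T1" returned two questionnaires, which are merged into one record.
responses = [
    ("T1", "Chief Pharmacist"),
    ("T1", "Director of Medical Education"),
    ("T2", "Chief Pharmacist"),
    ("T3", "Clinical Tutor"),
]

# Group responses by trust so each trust contributes one combined record.
by_trust = defaultdict(list)
for trust, role in responses:
    by_trust[trust].append(role)

print(len(responses), "responses ->", len(by_trust), "trusts")
```

Applied to the survey data, the same grouping reduces the 95 raw responses to the 85 per-trust records analysed.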

5.2 Demographics of Respondents

A question was included to identify the responding trust. With this information, it was possible to see which of the responses were from teaching trusts. It was noted that teaching trusts accounted for 31.7% (27) out of the 85 responses. The remaining 68.2% (58) were from non-teaching trusts. However, the identity of the trusts was not reported in the findings.

The questionnaire contained an introductory question to identify the occupation of the respondent. This enabled classification of which type of professionals filled in the questionnaires. 57.6% (49) questionnaires were completed by members of the pharmacy profession. 16.5% (14) respondents stated that they were the ‘medical education director’ or similar and 25.9% (22) respondents described other professions such as a grade of doctor or clinical tutor.

5.3 Educational interventions provided to junior doctors

From the responses, only two non-teaching trusts were identified as not providing any educational interventions to their junior doctors to help reduce prescribing errors. Of these, one respondent was unsure whether the trust had future plans to develop any kind of educational intervention, while the other divulged that the trust was in the final stage of developing a medicines management pack for all staff involved in medicine delivery. As the aim of this research was to investigate educational interventions, only the 83 trusts that offered an educational intervention to prescribing staff were considered further in the findings.

The questionnaire asked respondents to identify all types of teaching intervention that were offered within the trust: e-learning, assessment, teaching session, study guide. Descriptions of each type of intervention were provided with the questionnaire to ensure a consistent response. These descriptions can be found on the participant information leaflet in appendix B. Respondents were not limited to selecting one intervention type. If the trust offered multiple educational interventions, they were encouraged to tick all options that applied to their trust.

Of the 83 trusts, 9.6% (8) assigned study guides to junior doctors with the aim of reducing prescribing errors, 62.6% (52) assessed their junior doctors’ prescribing competence in some manner and 69.9% (58) offered an e-learning package. By far the most common form of educational intervention was the face-to-face teaching session: the majority of trusts, 93.9% (78), taught junior doctors in a varying number of sessions delivered by a range of health professionals.

5.4 Educational interventions: combinations provided to junior doctors

By analysing the choices of interventions provided, it was possible to determine the particular combinations offered by each trust.

Table 1. Combinations of educational interventions in use in teaching and non-teaching trusts

Combination of educational interventions                Non-Teaching Trust (n=56)   Teaching Trust (n=27)   Total
E-learning                                                          2                         0                2
E-learning, Assessment                                              1                         3                4
E-learning, Teaching Session (face-to-face)                        11                         4               15
E-learning, Teaching Session, Assessment                           15                        10               25
E-learning, Teaching Session, Assessment, Other                     1                         1                2
E-learning, Teaching Session, Other                                 1                         1                2
E-learning, Teaching Session, Study Guide, Assessment               5                         2                7
Teaching Session                                                   10                         1               11
Teaching Session, Assessment                                        6                         3                9
Teaching Session, Assessment, Other                                 2                         1                3
Teaching Session, Other                                             2                         0                2
Teaching Session, Study Guide, Assessment                           0                         1                1
Total                                                              56                        27               83

It is clear that the modal combination is e-learning, teaching sessions and assessment, followed by e-learning and teaching sessions. These findings are similar for both teaching and non-teaching trusts. However, it is apparent that there is no standard provision of educational intervention across trusts, as an extensive range of combinations is being offered to junior doctors.
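The tally behind Table 1 amounts to a frequency count of the intervention combinations each trust ticked, which can be sketched as follows; the example rows are illustrative, not the survey's raw responses:

```python
from collections import Counter

# Each tuple is one (hypothetical) trust's set of ticked interventions,
# listed in a fixed order so identical combinations compare equal.
responses = [
    ("Assessment", "E-learning", "Teaching Session"),
    ("E-learning", "Teaching Session"),
    ("Teaching Session",),
    ("Assessment", "E-learning", "Teaching Session"),
]

# Counting identical tuples gives the frequency of each combination;
# most_common(1) returns the modal combination and its count.
combinations = Counter(responses)
modal, count = combinations.most_common(1)[0]
print(modal, count)
```

On the real data, the same tally yields the Table 1 counts, with e-learning, teaching session and assessment as the mode (25 trusts).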

The participants were then asked to provide more detail regarding the educational interventions offered by their trust. The data extracted from these more detailed questions clarified the content of the intervention delivered and such factors as the grade of junior doctors who were required to partake and the number of sessions that were delivered. These are discussed below.

5.5 E-learning

As shown above, 69.9% (58) of the 83 trusts offered e-learning as an educational intervention. Subdividing by trust type showed that 37 of the 56 non-teaching trusts and 21 of the 27 teaching trusts offered e-learning. The proportions of trusts in each class offering e-learning were compared using Fisher’s exact test on a two-by-two contingency table, giving a P value of 0.22; thus there was not a significant difference between the two classes of trust. This test was chosen because it provides an exact P value, rather than the estimate produced by a chi-squared test, and is more appropriate for a small sample (35).
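The comparison can be reproduced with a standard-library sketch of Fisher's exact test; the cell counts (21 of 27 teaching trusts and 37 of 56 non-teaching trusts offering e-learning) are inferred from the figures reported above, so the resulting P value is indicative rather than guaranteed to match the reported 0.22:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table.
    """
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def hyper(x):
        # probability of x counts in cell (1,1) given the fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hyper(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance guards against floating-point ties
    return sum(hyper(x) for x in range(lo, hi + 1)
               if hyper(x) <= p_obs * (1 + 1e-9))

# Assumed counts: rows = teaching / non-teaching,
# columns = offers e-learning yes / no
p = fisher_exact_two_sided(21, 6, 37, 19)
print(f"two-sided P = {p:.2f}")
```

With these counts the P value sits well above 0.05, matching the study's conclusion of no significant difference between teaching and non-teaching trusts.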

It was important to determine which grades of junior doctor were offered the intervention. Trusts could select as many options as applicable, so the total number of responses for this question (152) was greater than the 58 trusts that offered e-learning. Of the 152 responses, 27.6% (42) identified that e-learning was provided to FY1 doctors, 25.0% (38) to FY2 doctors, 17.1% (26) to SHOs, 13.8% (21) to staff grades and 16.5% (25) to registrars. Subdividing these findings further, 41.4% (24) of the 58 trusts offered e-learning to the lower grades of doctor (FY1 and FY2) only, and no trust offered it to the higher grades (SHOs, staff grades and registrars) alone. A total of 31.0% (18) of trusts offered the training to all grades of doctor; of these 18, 9 were teaching trusts (33.3% of the 27 teaching trusts) and 9 were non-teaching trusts (16.1% of the 56 non-teaching trusts). A greater proportion of teaching trusts therefore offered e-learning to all grades. This finding is illustrated in Figure 1 below.

Figure 1. Proportions of trusts providing the e-learning package to all grades of junior doctors
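As a quick arithmetic check, the grade-level shares quoted above can be reproduced directly from the reported response counts (a sketch; the per-grade counts are taken from the text):

```python
# Response counts per grade, as reported in the text (152 responses in total).
responses = {"FY1": 42, "FY2": 38, "SHO": 26, "staff grade": 21, "registrar": 25}

total = sum(responses.values())
# Each grade's share of all responses, as a percentage to one decimal place.
shares = {grade: round(100 * n / total, 1) for grade, n in responses.items()}
print(total, shares)
```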

In contrast to other sections of the questionnaire, respondents were not asked to identify the member of staff who delivered the intervention, because an e-learning package was deemed a self-administered educational intervention.

It was considered important to establish when the e-learning package was made available to junior doctors and when they were expected to have completed it, so that the time allowed to finish the work could be considered. Only a small number of answer options were offered for when the intervention was given, because trusts were likely to have different educational programmes and such interventions can be delivered in many ways; it would have been too complex to offer options reflecting every trust's provision accurately, so an 'other' option was included. Because trusts could select more than one stage of employment, a total of 64 responses were received: 73.4% (47) indicated that the package was offered at the start of employment, 6.3% (4) at the start of a new rotation and 20.3% (13) chose 'other'. Of the four trusts that offered e-learning at the start of rotations, three also provided the package at the start of employment.

A further question asked how much time junior doctors were allowed to complete the educational intervention package; there were 62 responses. Once again, a limited number of options was offered because of the range of possible answers. In 50.0% (31) of responses, junior doctors were required to complete the e-learning package during the induction period; only 9.7% (6) had to complete it by the end of a rotation, 11.3% (7) by the end of the year and 29.0% (18) chose 'other'.

Consideration had to be given to the fact that the responses to these two questions may depend on the grade of doctor at whom the e-learning package was aimed, so a further question was included to investigate this aspect. Very few respondents disclosed that the package was made available at the start of a new rotation or had to be completed by the end of one. These options may only have been relevant to foundation-year doctors, who are required to rotate around different clinical areas at this point in their career, rather than to more senior doctors specialising in a particular medical field.

The findings were then scrutinised to determine the educational content of the e-learning package.

Figure 2. The educational topics covered in the e-learning package differentiated by teaching (n=73) and non-teaching (n=153) trusts

The data were divided into the topics delivered by teaching and non-teaching trusts. Of the 58 trusts that offered e-learning, 52 (89.7%) answered the question regarding the educational content of the intervention: 18 of the 21 teaching trusts (85.7%) and 34 of the 37 non-teaching trusts (91.9%).

In both classes of trust, teaching and non-teaching, e-learning packages covered similar combinations of educational topics (e.g. common errors, dosage calculations, completion of drug charts), as highlighted by Figure 2. Few trusts offered material beyond the topics listed on the questionnaire: only 5.5% (4) of responses from teaching trusts and 1.3% (2) of responses from non-teaching trusts chose 'other' and disclosed further detail. Non-teaching trusts gave a total of 153 responses to this question; given the 34 non-teaching trusts that answered, this equates to a mean of 4.5 educational topics selected by each responding non-teaching trust. This is in contrast to a mean of 2.4 educational topics selected by teaching trusts.
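The mean-topics figure for non-teaching trusts follows directly from the counts in the text; the sketch below shows only that arithmetic (the corresponding total for teaching trusts is not computed here because it is not stated explicitly):

```python
# Mean number of educational topics selected per responding non-teaching trust,
# using the counts reported in the text.
topic_selections = 153   # total topic selections by non-teaching trusts
responding_trusts = 34   # non-teaching trusts that answered the question

mean_topics = topic_selections / responding_trusts
print(mean_topics)  # 4.5
```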

A further question of interest, when comparing teaching and non-teaching trusts' e-learning packages, was how many sessions of the intervention were provided. Once again, 52 participants answered this question, with the same split of teaching and non-teaching trusts as above.