SAMJ: South African Medical Journal
On-line version ISSN 2078-5135; Print version ISSN 0256-9574
SAMJ, S. Afr. med. j. vol.108 n.7 Pretoria Jul. 2018
https://doi.org/10.7196/samj.2018.v108i7.12969
RESEARCH
Describing key performance indicators for waiting times in emergency centres in the Western Cape Province, South Africa, between 2013 and 2014
K Cohen(I); S Bruijns(II)
(I) MB ChB, MMed, MPhil; Division of Emergency Medicine, Faculty of Health Sciences, University of Cape Town, South Africa
(II) MB ChB, DipPEC, MPhil, FRCEM, PhD; Division of Emergency Medicine, Faculty of Health Sciences, University of Cape Town, South Africa
ABSTRACT
BACKGROUND. Data measured as key performance indicators (KPIs) are used internationally in emergency medicine to measure and monitor quality of care. The Department of Health in the Western Cape Province, South Africa, introduced time-based KPIs for emergency centres (ECs) in 2012.
OBJECTIVES. To describe the most recently processed results of the audits conducted in Western Cape ECs between 2013 and 2014.
METHODS. A retrospective, descriptive study was conducted on data collected in the 6-monthly Western Cape EC triage and waiting time audits for 2013 - 2014. Time variables were analysed overall and per triage category. ECs in hospitals were compared with ECs in community health centres (CHCs). A descriptive analysis of the sample was undertaken. Proportions for categorical data are presented throughout. The continuous variable, time, was described using means and standard deviations. The χ2 test and Fisher's exact test were used to describe associations. The level of significance was p<0.05, with 95% confidence intervals where appropriate.
RESULTS. There was no significant difference in triage acuity proportions between hospital and CHC ECs. Waiting times were longer than recommended for the South African Triage Scale, but higher-acuity patients were seen faster than lower-acuity patients. Waiting times were significantly longer at hospitals than at CHCs. A red priority patient presenting to a CHC would take 6.1 times longer to reach definitive care than if the patient had presented to a hospital EC.
CONCLUSIONS. The triage process appears to improve waiting times for the sickest patients, although it is protracted throughout. Acutely ill patient journeys starting at CHC ECs suggested significant delays in care. Models need to be explored that allow appropriate care at the first point of contact and rapid transfer if needed. To improve waiting times, resource allocation in the emergency care system will need to be reconsidered.
Health systems globally are under pressure from growing populations, increasing medical costs and rising patient expectations. Resource limitations dictate that high-quality care must be balanced with cost-effectiveness.1,2 Data in the form of key performance indicators (KPIs) are used in emergency medicine (EM) to measure and monitor quality of care, helping both managers and clinicians to determine priorities, guide resource allocation and improve quality of care. Quality healthcare can be defined as 'the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge'.3 Patients may not be informed judges of the technical aspects of medical care, but they do have expectations of its service aspects; quality healthcare is therefore multifaceted.
The doors of an emergency centre (EC) remain open at all times to all comers, and it is therefore essential that the patient's journey through the EC be streamlined. Management, decision-making and disposition should be efficient and timely, so EM needs to be heavily process driven. Measuring hard clinical outcomes such as mortality and morbidity in the EC setting is challenging, as these can only be measured at the end of the patient journey; singling out the EC portion of this journey is difficult because of the involvement of different service platforms and other specialty departments.4 The Department of Health in the Western Cape Province, South Africa (SA), has adopted the Institute of Medicine's framework to conceptualise quality healthcare, in which quality care is safe, effective, patient centred, timely, efficient, equitable and sustainable.1
Performance indicators are one way of measuring quality in the EC. These can be structure, process or outcome based. Structure-based indicators relate to the resources needed to run a service, such as infrastructure and staffing; process indicators relate to the activities involved in managing patients; and outcome indicators measure the outcome after management of the individual.4 Most EM KPIs are process based, serving as proxies for hard clinical outcomes.3-5 A Delphi study conducted in SA in 2010 confirmed that the most feasible and useful KPIs in EM are either structure or process based, with a fair portion listed as time-based KPIs.6 The International Federation for Emergency Medicine in 2014 also identified time-based process measures as an important component of a quality framework.7 In terms of quality healthcare, timeliness essentially translates to acceptable waiting times for assessment, management and disposition of patients, to avoid harm from delayed care as well as patient discomfort. It has been shown that timely triage saves lives; the measures of time from arrival to triage, triage to healthcare professional and EC to ward for admitted patients, as well as overcrowding, correlate with mortality outcomes.8,9 Elsewhere, evidence-based guidelines stress time sensitivity in many emergency clinical conditions, e.g. time to antibiotics and fluids, time to thrombolysis and time to analgesia.10-12 Moreover, patients expect timely management of their condition. Internationally there is a major emphasis on waiting times, specifically related to the various stages of the patient journey through the EC.4,5,7,12-14
Measuring waiting times is not routine practice in most SA hospitals. The Western Cape Department of Health introduced time-based KPIs for the EC in 2012 as part of its provincial annual operational measures. These measures were chosen to cover the distinct portions of the EC patient journey, giving a clear picture of the time spent at each step: time from arrival to triage, time from triage to healthcare provider, time from healthcare provider to disposition decision, and time from disposition decision to leaving the EC. Dedicated waiting time audit templates were developed around these KPIs.
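To make the four intervals concrete, the sketch below computes them from a record of timestamps. This is purely illustrative: the field names and record structure are assumptions, not the official audit template.

```python
from datetime import datetime

# Steps of the EC patient journey, in order, matching the four
# Western Cape time-based KPIs (field names are illustrative).
STEPS = ["arrival", "triage", "provider", "disposition_decision", "departure"]

def kpi_intervals(record):
    """Return the four KPI intervals from a dict of timestamps."""
    intervals = {}
    for earlier, later in zip(STEPS, STEPS[1:]):
        t0, t1 = record.get(earlier), record.get(later)
        # Timestamps are often missing from clinical records (a
        # limitation noted later), so incomplete intervals stay None.
        intervals[f"{earlier}_to_{later}"] = (t1 - t0) if t0 and t1 else None
    return intervals

# Example: a patient triaged 40 minutes after arrival.
print(kpi_intervals({
    "arrival": datetime(2014, 3, 1, 8, 0),
    "triage": datetime(2014, 3, 1, 8, 40),
})["arrival_to_triage"])  # 0:40:00
```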
Objectives
To describe the most recently processed results of all biannual triage and waiting time audits conducted in Western Cape ECs between 2013 and 2014.
Methods
A retrospective, descriptive study was conducted on data collected as part of the 6-monthly Western Cape EC triage audits conducted at healthcare facilities with 24-hour ECs in the province for the years 2013 and 2014. Audits were performed at central, regional and district hospital ECs, as well as ECs at 24-hour community health centres (CHCs). District hospitals tend to provide generalist services (mainly operated through family medicine) at a secondary care level. In addition to the generalist services provided by district hospitals, regional hospitals provide general specialist care, while central (or tertiary) hospitals provide subspecialist care in addition to general specialist care. The CHCs are essentially 24-hour primary care facilities, and although they have dedicated ECs, there are no inpatient services. The healthcare provider depends on the level of the healthcare facility and may be a doctor or a clinical nurse practitioner.
Each audit begins with 100 random patient folders obtained from the preceding month at a single facility EC (collection 1). The selection is made by the ward clerk, so randomisation is not consistent. These folders are then sorted into triage categories (red, orange, yellow and green) by a senior clinician or a lead triage nurse working in the EC and supplemented with additional folders until all four triage categories contain a minimum of 30 cases (collection 2). As a result, audits often contain more than the required minimum of 120 cases. Each clinical record is then evaluated by the senior clinician or lead triage nurse for triage accuracy. In addition, arrival time, triage time, first healthcare provider's consultation time, referral time and disposition time are extracted. The time-related variables are collected where present in the clinical record, providing an indirect reflection of record keeping. Patient-identifiable data are not collected by the audit. Data are then transcribed onto a dedicated electronic audit template. The audit is then submitted to the general specialist head for EM, who analyses the data and provides feedback to the various facilities. Audit data are stored in a database registered with the Human Research Ethics Committee of the University of Cape Town (ref. no. R056/2014). Permission was obtained from the committee to analyse the data for this study.
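The two-collection procedure can be summarised in code. The sketch below is an idealised version under stated assumptions: folder IDs and the triage lookup are hypothetical structures, and true random selection is used for collection 1, which the actual ward clerk selection did not guarantee.

```python
import random
from collections import Counter

TRIAGE_CATEGORIES = ("red", "orange", "yellow", "green")
MIN_PER_CATEGORY = 30

def audit_sample(folders, triage_of):
    """Sketch of the audit's two collections. `folders` is a list of
    folder IDs from the preceding month (assumed >= 100 available);
    `triage_of` maps folder ID to triage category."""
    # Collection 1: 100 folders. In practice the ward clerk chose
    # these, so this random draw is the idealised version.
    collection1 = random.sample(folders, 100)

    # Collection 2: top up until every category holds >= 30 cases,
    # which is why audits often exceed the 120-case minimum.
    collection2 = list(collection1)
    counts = Counter(triage_of[f] for f in collection1)
    chosen = set(collection1)
    remaining = [f for f in folders if f not in chosen]
    random.shuffle(remaining)
    for folder in remaining:
        if all(counts[c] >= MIN_PER_CATEGORY for c in TRIAGE_CATEGORIES):
            break
        category = triage_of[folder]
        if counts[category] < MIN_PER_CATEGORY:
            collection2.append(folder)
            counts[category] += 1
    return collection1, collection2
```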
A descriptive analysis of the sample was undertaken. The continuous variable, time, was described using means and standard deviations. Proportions for categorical data are presented throughout. The triage category breakdown for each facility was derived from the initial collection of 100 folders (collection 1); collection 2 was used for the remaining calculations. Time variables were analysed overall and per triage category, and hospital ECs were compared with CHC ECs. The χ2 test or Fisher's exact test (depending on group sizes) was used to compare categorical data groups. The level of significance was p<0.05, with 95% confidence intervals (CIs) where appropriate.
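As a sketch of the test choice, the helper below computes a p-value with scipy, falling back to Fisher's exact test for sparse 2×2 tables. The switching rule (any expected cell count below 5) is a common convention assumed here; the paper does not state the exact criterion used.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_categorical(table, alpha=0.05):
    """Return (p-value, significant?) for a contingency table.

    Uses the chi-squared test by default and Fisher's exact test for
    2x2 tables with small expected counts (assumed rule of thumb).
    """
    table = np.asarray(table)
    chi2, p, dof, expected = chi2_contingency(table)
    if table.shape == (2, 2) and (expected < 5).any():
        _, p = fisher_exact(table)
    return p, p < alpha

# Example: hospital vs CHC counts across four triage categories.
p, significant = compare_categorical([[16, 24, 34, 26],
                                      [15, 23, 35, 27]])
print(f"p = {p:.2f}, significant = {significant}")
```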
Results
During the sample period, 60 audits were submitted. Two were excluded owing to corrupted data. Six audits from a further two facilities were excluded because these facilities did not identify as either a CHC or a hospital, but as hybrid CHCs/hospitals owing to mixed patient flow, processes and admitting practices. The remaining 52 audits, comprising 7 899 patient folders across all the remaining ECs, were analysed. The corrected triage acuity breakdown of the sample, after evaluation by the senior clinician, was as follows: red 1 275 (16.1%), orange 1 882 (23.8%), yellow 2 691 (34.1%) and green 1 709 (21.7%). Triage data were unavailable for 342 cases (4.3%): data were missing for 16 folders, and triage was unassigned for 326 patients. Triage accuracy across the sample was 83.2%. A total of 7 126 patient folders were analysed for the comparison between hospital- and CHC-based services: 3 842 (53.9%) from hospital ECs and 3 284 (46.1%) from CHC ECs. There was no significant difference in the triage acuities reported for the first 100 folders (collection 1) between hospital and CHC ECs (p=0.33) (Table 1).
Time intervals for arrival to triage, triage to first healthcare provider, first healthcare provider to disposition decision and disposition decision to departure, as well as overall time in the EC, for hospital ECs compared with CHC ECs are presented in Table 2. Comparison of the 95% CIs indicated that triage to first healthcare provider, first healthcare provider to disposition decision, disposition decision to departure and overall time in the EC were all significantly longer at hospitals.
The same time intervals per triage acuity category for hospital ECs compared with CHC ECs are presented in Table 3. The 95% CIs indicated that arrival to triage intervals were significantly longer for yellow patients at hospitals; triage to first healthcare provider intervals were significantly longer for orange, yellow and green patients at hospitals; first healthcare provider to disposition decision and disposition decision to departure intervals were significantly longer for all priorities at hospitals; and overall times in the EC were significantly longer for orange, yellow and green patients. If a red priority patient was first seen at a CHC and required transfer for further care, the cumulative time to see the first healthcare provider at the hospital would be 7 hours and 25 minutes (excluding transfer time and handover), or 6.1 times longer than if the patient had presented first to the hospital EC (Fig. 1).
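The 6.1 figure can be reconstructed from the numbers reported above; a minimal worked check (values taken from the text, not from Table 3):

```python
# A red patient routed via a CHC takes 7 h 25 min to reach a first
# healthcare provider at the hospital (excluding transfer and
# handover), 6.1 times the direct route (figures from the text).
chc_route_min = 7 * 60 + 25                  # 445 minutes via the CHC
ratio = 6.1
direct_route_min = chc_route_min / ratio     # ~73 minutes direct to hospital
extra_delay_min = chc_route_min - direct_route_min

print(f"Direct hospital route: ~{direct_route_min:.0f} min")
print(f"Extra delay via CHC: ~{extra_delay_min:.0f} min "
      f"(~{extra_delay_min / 60:.1f} h)")
```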
Discussion
A key finding of this audit was that the proportional acuity between hospitals and CHCs for the first random 100 folders did not differ statistically. CHCs were never intended or resourced to deal with acuity in such proportions. Current provincial policies dictate that sicker patients should be seen at hospitals and not at CHCs, as definitive care cannot be provided safely for most high-priority patients attending CHCs. Not only would the volumes outstrip local resources, but the waiting time to definitive care would effectively increase to the total stay at the CHC, plus the transfer time, plus the arrival to first healthcare provider's consultation time. Even without transfer time, this could amount to a >7-hour delay for red patients. Such a substantial delay in reaching definitive care is not only inappropriate and unsafe but also exposes staff to unnecessary personal and legal risk. Currently, emergency medical services have policies in place to ensure that high-priority patients bypass CHCs and attend hospital ECs directly. This does not account for private transport, however; the audit did not include detail on method of transport, which will require a thorough review to identify areas for improvement. We acknowledge that the sampling method weakens the argument regarding sampling proportions and that a consecutive sample would have provided better measures. This is a weakness of the formal audit methodology. That said, the sampling was applied uniformly at both hospitals and CHCs, so relative waiting times would be largely unaffected by this weakness.
Although the mean time from arrival to triage for all comers across all facilities was just under an hour, higher-acuity patients were triaged significantly faster (under or around half an hour) than lower-acuity patients (an hour or more), as shown by comparing confidence intervals. This difference was significant for hospitals, although a similar but non-significant trend was observed for CHCs. Several reasons could account for this, including visible, severe pathology, persistence of bystanders or relatives in seeking to expedite care, or the experience of triage staff. Oddly, at hospitals, green patients were triaged significantly faster than the higher-priority yellow patients; this was not the case at CHCs. This finding suggests that the process by which higher-priority patients were expedited to triage became less specific as priority decreased. It also suggests that there was more to triage than simply applying the South African Triage Scale (SATS) to all comers. This may reflect an issue with training and will need further study.
The mean time from triage to a healthcare provider consultation for all comers across all facilities (>2 hours) was significantly longer at hospitals than at CHCs. Waiting times per priority were universally longer than the recommended times to healthcare provider consultation in the SATS, which are immediate for red patients, 10 minutes for orange, 60 minutes for yellow and 240 minutes for green.12 Although these KPIs were not met, higher-acuity cases were seen faster than lower-acuity cases in a stepwise fashion, which represents a partial accomplishment of the triage goals through sorting. It is concerning, however, that the highest-acuity (red) patients waited nearly an hour on average for a healthcare provider consultation, and orange patients had to wait between 1 and 2 hours. Patients waited significantly longer to see a healthcare provider at hospitals than at CHCs in all triage categories except the highest-acuity category (red). Many factors could account for these findings, the most likely being a high patient-to-clinician ratio: in 2013, the World Health Organization reported 0.8 physicians per 1 000 population for SA compared with 2.8 for the UK.15 Anecdotally, crowding and access block present significant barriers to safe and efficient patient care locally. Unfortunately, these variables are poorly described in the local literature. Nevertheless, the findings fit well with poorly resourced ECs overburdened by large patient numbers. Although the SATS appears to be effective in prioritising care, ECs fail to provide emergency care in a timely fashion, probably for resource-related reasons. It would be interesting to see how other SA cities would fare in similar studies; since the SATS has only been formally rolled out in the Western Cape, the situation elsewhere is likely to be worse.
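The SATS targets quoted above lend themselves to a simple per-case check, sketched below with the target values as listed in the text (the helper itself is illustrative, not part of the SATS).

```python
# SATS recommended maximum times from triage to healthcare provider,
# in minutes (red is immediate), as listed in the text.
SATS_TARGET_MIN = {"red": 0, "orange": 10, "yellow": 60, "green": 240}

def meets_sats_target(category, triage_to_provider_min):
    """True if a triage-to-provider wait meets the SATS target."""
    return triage_to_provider_min <= SATS_TARGET_MIN[category]

# Example: an orange patient waiting 90 minutes misses the target.
print(meets_sats_target("orange", 90))  # False
```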
The mean time from assessment and management to a disposition decision by a doctor for all comers was significantly longer at hospitals than at CHCs across all priorities. The lowest-priority (green) patients took considerably longer to be dealt with at hospitals than at CHCs. Since green patients probably had a similarly low need for further investigations at both hospital and CHC ECs, it may be that clinical priority at hospitals (given resource constraints) was shifted upwards, and green patients simply waited longer at hospitals because higher-acuity patients were prioritised. Conversely, at CHCs less time was spent with sicker patients, given even tighter resource constraints limiting interventions and the prospect of transfer to definitive care. Patient workups took longer at hospitals, probably because of the specialised care and investigations available there that are not available at CHCs. A similar pattern was seen for the disposition decision to departure time. The mean total time in the EC was significantly longer at hospitals than at CHCs. Orange, yellow and green cases stayed significantly longer at hospitals; red cases also stayed longer at hospitals, although this difference was not significant. Alarmingly, red cases appeared to stay the longest at CHCs, arguably because they had to wait for transfer to secondary care. As mentioned earlier, when the large proportion of red patients seen at CHCs is considered, together with transport times and waiting times at the hospital, serious concerns arise about the current safety of high-acuity patient journeys from CHC to definitive care.
Study limitations
The sample size was not compared with actual patient volumes at each facility, and this should be the focus of future research to validate these findings; although it would have been ideal to do this, restricted resources and the design of the audit did not allow for it. There were reported challenges in data collection, as documentation in the clinical records at facility level was reported to be poor overall. As a result, several facilities did not submit complete datasets and a significant number of data points were not captured or were missing; arrival time was reportedly the least-collected variable. We have commented on the lack of random sampling earlier. Despite these sampling errors, this dataset provides the best look at local public sector EC acuity reported to date. Measures to improve data collection and data quality should be explored and implemented for future research and audit. For instance, separate, systematic random sampling for the second collection (instead of the top-up technique employed for the audit in this study) would improve the representativeness and precision of the sample. Implementation of an electronic record would help address these limitations.
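As a sketch of the suggested alternative, systematic random sampling draws a random start and then every k-th folder. The function below is illustrative only, under the assumption that a full list of eligible folder IDs is available.

```python
import random

def systematic_sample(folders, n):
    """Systematic random sample of n folders: choose a random start
    within the sampling interval, then take every k-th folder."""
    k = max(1, len(folders) // n)     # sampling interval
    start = random.randrange(k)       # random start within interval
    return folders[start::k][:n]

# Example: draw 30 of 500 folders (IDs here are hypothetical).
print(systematic_sample([f"folder-{i}" for i in range(500)], 30))
```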
Conclusions
Although waiting times before being seen by a healthcare provider were universally longer than those recommended by the SATS, higher-acuity patients were seen sooner than lower-acuity patients. The triage process therefore appears to improve waiting times for the sickest patients. However, there are still unacceptably long waiting times before high-acuity patients are seen by a healthcare provider at all levels of care. Improvement in the processes contributing to the flow of EC patients is needed to reduce waiting times to those recommended for the SATS, with a focus on high-acuity patients. This will require a bold effort from the cash-strapped Western Cape Government, as the purpose of an audit is ultimately to drive improvement. The less obvious finding of prolonged waiting times for high-acuity patients who attended their CHCs should probably become a key focus for quality and safety improvement. To unpack this further, one would need to look at individual models of CHCs and their referral hospitals, as each CHC has unique characteristics in terms of patient demographics, disease characteristics, resources and staff skills. Models need to be explored that allow patients to receive appropriate care at the first point of contact, with rapid transfer should the need arise. Finally, a process plan for addressing these objectives should be strongly considered, alongside regular re-audits on agreed timelines to evaluate progress.
Acknowledgements. The authors wish to acknowledge the Western Cape Department of Health for the use of the database and the many front-line workers who collected the data for the audits at their individual healthcare facilities. We also acknowledge Dr Heather Tuffin, who is credited with developing the audit instrument. Facilities that contributed to the audit were Groote Schuur Hospital, Tygerberg Hospital, New Somerset Hospital, Paarl Hospital, Worcester Hospital, George Hospital, Victoria Hospital, Eerste River Hospital, Khayelitsha District Hospital, Mitchell's Plain District Hospital, Helderberg Hospital and Karl Bremer Hospital, and Delft, Elsie's River, Gugulethu, Hanover Park, Kraaifontein, Retreat, Mitchell's Plain, Khayelitsha Site B, Vanguard and Heideveld CHCs.
Author contributions. Both authors contributed to the conception and design of the work. KC acquired the data and performed the initial analysis. Both authors interpreted the data. KC produced the first draft and both authors critically revised this for important intellectual content. Both authors approved the final version to be published and agreed to be accountable for all aspects of the work.
Funding. None.
Conflicts of interest. None.
References
1. Baker A. Crossing the quality chasm: A new health system for the 21st century. BMJ 2001;323:1192. https://doi.org/10.1136/bmj.323.7322.1192
2. Schuur JD, Hsia RY, Burstin H, Schull MJ, Pines JM. Quality measurement in the emergency department: Past and future. Health Aff 2013;32(12):2129-2138. https://doi.org/10.1377/hlthaff.2013.0730
3. Lindsay P, Schull M, Bronskill S, Anderson G. The development of indicators to measure the quality of clinical care in emergency departments following a modified-Delphi approach. Acad Emerg Med 2002;9(11):1131-1139. https://doi.org/10.1197/aemj.9.11.1131
4. Beattie E, Mackway-Jones K. A Delphi study to identify performance indicators for emergency medicine. Emerg Med J 2004;21(1):47-50. https://doi.org/10.1136/emj.2003.001123
5. Wiler JL, Welch S, Pines J, Schuur J, Jouriles N, Stone-Griffith S. Emergency department performance measures updates: Proceedings of the 2014 Emergency Department Benchmarking Alliance consensus summit. Acad Emerg Med 2015;22(5):542-553. https://doi.org/10.1111/acem.12654
6. Maritz D, Hodkinson P, Wallis L. Identification of performance indicators for emergency centres in South Africa: Results of a Delphi study. Int J Emerg Med 2010;3(4):341-349. https://doi.org/10.1007/s12245-010-0240-6
7. Lecky F, Benger J, Mason S, Cameron P, Walsh C. The International Federation for Emergency Medicine framework for quality and safety in the emergency department. Emerg Med J 2014;31(11):926-929. https://doi.org/10.1136/emermed-2013-203000
8. Bruijns SR, Wallis LA, Burch VC. Effect of introduction of nurse triage on waiting times in a South African emergency department. Emerg Med J 2008;25(7):395-397. https://doi.org/10.1136/emj.2007.049411
9. Cooke MW. Intelligent use of indicators and targets to improve emergency care. Emerg Med J 2014;31(1):5-6. https://doi.org/10.1136/emermed-2013-202391
10. Dellinger RP, Levy MM, Rhodes A, et al. Surviving Sepsis Campaign: International guidelines for management of severe sepsis and septic shock, 2012. Intensive Care Med 2013;39(2):165-228. https://doi.org/10.1007/s00134-012-2769-8
11. O'Connor RE, Al Ali AS, Brady WJ, et al. Part 9: Acute coronary syndromes: 2015 American Heart Association guidelines update for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation 2015;132(18 Suppl 2):S483-S500. https://doi.org/10.1161/CIR.0000000000000263
12. Stang AS, Hartling L, Fera C, Johnson D, Ali S. Quality indicators for the assessment and management of pain in the emergency department: A systematic review. Pain Res Manag 2014;19(6):179-190. https://doi.org/10.1155/2014/269140
13. Madsen MM, Eiset AH, Mackenhauer J, et al. Selection of quality indicators for hospital-based emergency care in Denmark, informed by a modified-Delphi process. Scand J Trauma Resusc Emerg Med 2016;24(1):11-19. https://doi.org/10.1186/s13049-016-0203-x
14. Aaronson E, Marsh R, Guha M, Schuur J, Rouhani S. ED quality and safety indicators in resource-limited settings: An environmental survey. Int J Emerg Med 2015;8:39. https://doi.org/10.1186/s12245-015-0088-x
15. World Health Organization. World Health Statistics 2013. http://apps.who.int/iris/bitstream/10665/81965/1/9789241564588_eng.pdf (accessed 3 June 2017).
Correspondence:
K Cohen
kirstenlcohen@gmail.com
Accepted 12 December 2017