By Elizabeth A. Criss, RN, MEd
"EMS still lacks meaningful data that demonstrates the effect of out-of-hospital care on illness and trauma."
"If we enter these new practices without a plan for evaluating their outcomes, safety, and cost-effectiveness, we are doomed to developing a system that lags behind the standard of care."
We have come a long way since the Emergency Medical Services Systems Act (EMSSA) of 1973,1 which formally organized EMS and standardized training for prehospital personnel. The goal now, as it was then, is to provide therapeutic interventions in the first 30 minutes of any emergency that will improve the final outcome for the patient. Today, in addition to providing emergency response, some of us participate in home health care or patient follow-up visits after hospitalization. In some parts of the United States, we may be the only health care available for an extended period of time due to terrain or population density. But despite this growth in size and responsibilities, EMS still lacks meaningful data that demonstrate the effect of out-of-hospital care on illness or trauma. It was not until 1991 (nearly 20 years after EMSSA) that a standardized data set for review and comparison of cardiac arrests was even established.2 Now at least this one aspect of out-of-hospital care can be uniformly evaluated and patient outcomes adequately compared. This same type of information gathering exists for no other illness or injury, making system and treatment comparisons difficult and sometimes impossible.
Dr. Ron Stewart, one of the first and foremost EMS researchers, stated in a 1983 editorial that the time had come for our "initiative and innovative spirit" to solve the problems of EMS.3 He went on to say that "if our methods and techniques are not changed to conform to what is medically needed, EMS as we know it will fast fade from the medical scene." Fifteen years have passed since Dr. Stewart published that challenge, and we are only a little closer to answering the important questions that relate to system design, the effectiveness of trauma care, or the impact of ALS in urban, suburban and especially rural environments.
The answers to these and other pressing questions come from research. The problem lies in the fact that some of the needed research requires us to activate an "innovative spirit" and adapt or develop new research methodologies. Presented here is a discussion of some of the problems encountered with current methodologies, the opportunities we have now and models for developing future evaluations.
Medical research has traditionally taken place in a controlled environment where experts carefully evaluate various components of a problem. This is the traditional research model used in medicine. It can best be described as component-based, disease-specific and specialty-dominated.4 What this means is that (in the clinical model) research is generally conducted by experts on a specific disease process. They focus on a single treatment option and carefully control the environment to best understand the results.
This clinical model, or component research, depends on the development of focused, directed questions that require collection of minimal data. Because the questions are so focused, researchers often collect the data themselves or use minimal additional personnel. The research project often involves one medical specialty and is conducted in only a limited number of sites to control all the factors. Information gathered from the project is reliable and highly accurate, and the desired outcome is easily defined. Given this description, it is not difficult to see how this does not translate well to the uncontrolled, multi-tasking EMS environment. Unfortunately, the use of this research model has led to inaccurate conclusions from studies conducted in the out-of-hospital environment.
An example of attempting to utilize component-based research is the "zero-time" IV study by O'Gorman et al.5 In this study, the authors wanted to know if starting an IV caused delays in patient transport. Their first step was to compare the success rates of IVs initiated in the field with those initiated en route to the hospital. Finding no difference between the groups, the authors concluded that, in order to prevent additional time being added on scene, all IVs should be initiated en route. There is a problem with this "global" conclusion: No comparison was made of on-scene times for either group, so it is not possible to know whether IVs were the cause of transport delays. By focusing on the single component of IV initiation, the authors failed to account for additional patient-care activities that may cause delays. Nevertheless, this study was widely accepted by many trauma surgeons, who urged banning IVs outside the hospital.
Health care is on the verge of reform. The influx of managed care has necessitated that all of medicine re-evaluate the way patient care is provided. One of the ways EMS is meeting the challenges of a more fiscally responsible, customer service-oriented climate is by expanding its scope of service. One thing is for certain: If we enter into these new practices without a plan for evaluating their outcomes, safety and cost-effectiveness, we are doomed to develop a system that lags behind the standard of care. We have an opportunity to develop unique, prospective research models that can provide us with the information necessary to defend our practice both medically and financially.
There are many ways to evaluate expanded-service EMS. No matter the methodology, the goal should be to develop definable outcomes and costs of services so as to determine the overall effect on society. For example, one alternative is for all system or agency providers to take on expanded-service roles. In this model, the process would begin with an evaluation to determine current system effectiveness. Since cardiac arrest is the only illness for which uniform outcome measures exist, it is reasonable to use that as the measure of effectiveness. An article by Spaite et al. describes three basic system types that can be used in building this model.6
The first system is one in which the rate of survival from cardiac arrest is known and monitored. These systems have done methodologically sound research, proven their benefit and published their findings. When this type of system implements expanded service, it will be able to assess the effect on out-of-hospital cardiac arrest. The information will also make possible discussions of the cost-effectiveness of system changes, especially as they relate to overall morbidity and mortality.
The next type of system is one that is not sure it makes a difference in cardiac arrest. There are anecdotal reports of success, but the methodologically sound, peer-reviewed research has not been done. Before these systems consider entering expanded service, they should attempt to analyze their cardiac arrest survival rate; otherwise, they may make a costly leap forward at the expense of part of their community.
The last type of system is one that knows it has little or no influence on cardiac arrest; cities such as New York and Chicago and many rural environments are perfect examples. These systems have features, including geography, population density, climate, or resource limitations, that make it extremely unlikely they will ever positively effect a change in the cardiac arrest rate. Understanding these limitations, these systems could decide that entering expanded service is the most appropriate alternative to current attempts at providing emergency care. For them, providing alternative interventions may be a more cost-effective way to manage out-of-hospital care.
Another alternative for entering expanded service involves adding the expanded service to only a single component of an already functioning system (e.g., a nurse practitioner or physician assistant). This could be thought of as using a modified component-based research model to evaluate a systems issue. The activities of this single component would not alter the emergency response functions of the remainder of the system, but would provide valuable information about cost-effectiveness and the long-term effects of the program on morbidity and mortality. Potential "negative" consequences from the program on the system's ability to resuscitate cardiac arrest would be minimal.
Certainly the prospective evaluations outlined here could prove challenging from an implementation perspective and a societal standpoint. It could be necessary for a community to give up its current form of EMS in an effort to provide better care to a broader range of society. No matter what type of system is involved in the research, we should not hurry into expanded-service EMS at the expense of a group of patients in whom we have proven our value.
Models for the Future
Since component-based research doesn't fit well into the uncontrolled, multi-tasking environment of EMS, we need to begin to develop models specifically for systems research.4 One advantage of this type of research is that other disciplines, such as engineering, behavioral science and epidemiology, have already designed models that we may be able to modify or replicate.
Systems research is multidisciplinary. It involves the evaluation of complex, interrelated questions that contain a variety of data elements. These data are diverse, numerous, and can be difficult to obtain with a high degree of accuracy or reliability. Unlike component research, systems research involves a large number of data collectors, and the research director is often not even involved in data collection. The outcome parameters are equally diverse and sometimes not easily defined.
Currently, there are only a limited number of studies that have utilized a systems-based research model. The most well-known example is the "chain of survival" concept adopted by the American Heart Association.7 To develop this concept, researchers gathered data on a variety of EMS systems and then evaluated how the various system components fit together and affected the outcome of cardiac arrest. In this model, the authors focused on numerous questions, gathered data from different locations and used different people (not at all like the component-based model used by many "systems" researchers). This multi-tasking endeavor was complex and challenging, but it has proven valuable in educating all levels of society on how to reduce mortality from out-of-hospital cardiac arrest.
EMS research is a work in progress. There are no easy answers and no easy methodologies, but nothing worthwhile is ever easy. For some issues, the window of opportunity for necessary research has closed. For others, the window is closing fast. And for some, the window has yet to be built. Our challenge is to intervene on those issues where the window is still open and carefully craft the windows of the future.
Elizabeth Criss, RN, MEd, is a freelance writer, serves on the Prehospital Care Research Forum Board of Advisers, and is a senior research associate at the University of Arizona in Tucson and a base hospital coordinator at University Medical Center in Tucson.
Reading Smart: Discovering What the Data Do and Don't Say
By Elizabeth A. Criss, RN, MEd
When a commercial says "four out of five" people agree, what does that mean? The advertiser is hoping you think it means 80 percent of all people support that particular product or idea. But couldn't it also mean something else? For instance, what if they had asked only five people for their opinions, or mailed out only 10 surveys and received five responses: four for, one against? There are many other possible combinations that could produce these numbers and still not represent 80 percent of the population. Is this wrong? It's hard to say. The best response is probably that results, like beauty, are in the eye of the beholder.
That's all well and good for TV commercials, but this same "data torturing" can occur in medical research. Raw data generated by a project really don't mean anything until they're analyzed, and the tools used to analyze this information and the way the data are compared determine what conclusions can be drawn. That can leave a lot of room for interpretation.
Let's say you're interested in finding the latest research on the pneumatic anti-shock garment (PASG). Flipping through the journals, you find a study evaluating the effect of PASGs on nontrauma patients. The abstract states this is a prospective study done on 300 patients during a 12-month period. The findings of the study indicate that PASGs are of little value in the treatment of nontrauma patients in the prehospital environment.
Intrigued by these findings, you read the article. The results section describes the 300 patients. You note that the study divided the patients into two groups: blood pressure (BP) > 60 mmHg and BP < 60 mmHg. To assist in understanding the results, the authors include Tables 1 through 3.
Moving on to the discussion, you note the authors' conclusion: "For the majority of nontraumatic patients, the PASG is not beneficial and possibly increases mortality." To support this conclusion, the authors provide a lengthier and more detailed explanation than you found in the abstract. Looking back over the information in Tables 1 and 2, you believe this to be a reasonable conclusion.
But what about Table 3? Didn't it demonstrate that PASG use in these patients reduced mortality? It did, but the authors' conclusions are still valid. It's important to note that the authors said "in the majority of patients," not that the results applied to all patients. So why didn't the authors make more reference to the group in Table 3?
Table 3 highlights a subgroup, patients with a BP < 60 mmHg, that was positively affected by PASG use. Sometimes groups like this are left out because of the small number of patients in the subgroup; a small sample size does not allow the authors to calculate meaningful statistics or draw any significant conclusions. Without statistics, the most the authors can do is discuss the result as a possible trend. Nevertheless, the authors should at least mention this group as a potential area for future research. Another reason for leaving subgroups out of a discussion is that they did not support the authors' original hypothesis. Although not entirely ethical, this has been done.
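The effect of sample size on what statistics can legitimately say is easy to demonstrate. The sketch below, which uses hypothetical patient counts rather than figures from any actual study, computes 95 percent Wilson score confidence intervals for the same observed "success" rate measured in a small subgroup (4 of 5) and in a large sample (400 of 500):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - half, center + half)

# Hypothetical small subgroup: 4 of 5 patients did well.
lo_small, hi_small = wilson_interval(4, 5)

# Same observed 80% rate in a hypothetical larger sample: 400 of 500.
lo_large, hi_large = wilson_interval(400, 500)

# The small subgroup's interval spans most of the possible range,
# so "80%" there is only a trend; the large sample's interval is narrow.
print(f"4/5:     {lo_small:.2f} to {hi_small:.2f}")
print(f"400/500: {lo_large:.2f} to {hi_large:.2f}")
```

With five patients, the interval stretches from well below 50 percent to nearly 100 percent, which is exactly why authors can describe such a result only as a possible trend, while the same rate in 500 patients supports a firm statistical statement.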
The point of all this is that it is important to understand that data can be manipulated. Researchers will sometimes drop patients who don't fit the desired hypothesis or support a certain position. It is important for you, the reader, to scrutinize the literature and account for all the patients. If the authors say "majority" instead of "all," find out where the rest of the population went. Be suspicious. Ask yourself if these patients were deliberately left out, or if the sample was just too small to be meaningful.
Most of the research published today is well-controlled and scrutinized by professional review panels. However, it doesn't hurt to become a critical reader and ask questions.
Elizabeth Criss, RN, MEd, serves on the Prehospital Care Research Forum Board of Advisers, and is a senior research associate at the University of Arizona in Tucson and a base hospital coordinator at University Medical Center in Tucson.
This article was reprinted from JEMS, March 1994.
Table 1: All Study Participants
Table 2: Patients with BP > 60 mmHg
Table 3: Patients with BP < 60 mmHg