What are the Limitations of Clinical Practice Guidelines?
Video Transcription
talk about the limitations of clinical practice guidelines, and then at the very end, touch on some alternative strategies and circle back to some of the other types of guidance that are out there, which Atif was talking about at the beginning. This is my disclosure: I do some consulting for EpiWatch, but I will not discuss seizure detection devices in today's talk. The learning objectives are to identify limitations of CPGs and to discuss alternatives to clinical practice guidelines.

And this is our roadmap. We'll talk about evidence, or the lack thereof, and the real cost of guidelines, both in time and in resources. We'll talk about how guidelines are, as Dr. Shea alluded to, generalized, not individualized. We're also now recognizing that there's really a lack of diverse perspectives in guideline creation. And then we'll talk about alternatives to guidelines. When we talk about clinical practice guidelines, I think it really comes down to epistemology: what do we know, how do we know it, and how certain are we of what we know? Sometimes there's a lot of uncertainty even about the uncertainty.

So has anyone seen this article before? I love this. This is from the BMJ in 2003, and it illustrates the limitations of requiring randomized controlled trials for all of evidence-based medicine. It's a tongue-in-cheek article saying, okay, let's do a systematic review to find the evidence for using a parachute when we jump out of a plane. They didn't find any randomized controlled trials, and they make the point that if you really believe in evidence-based medicine, then maybe you should be willing to participate in a randomized controlled trial of parachute versus no parachute when jumping out of a plane. So there is not always a randomized trial when we are looking at evidence.

One of the things that can be frustrating when we try to use a clinical practice guideline is that sometimes the conclusions are inconclusive. If you've read a full clinical practice guideline, it can run 50 to 70 pages attempting to summarize all of the available published literature on a topic. And at the end, and I'm not going to call out any neurology guidelines here, you can have something like this, where the question being examined is: in a patient with a specific pulmonary infection, should shorter or longer treatment be used? After 70 pages of evidence, the recommendation reads, "We suggest that either a shorter or a longer treatment plan could be used, and that expert consultation be obtained." We're left scratching our heads and asking, how did that help us? Was that really worth the tens of thousands of dollars we probably invested in this guideline? That's not an epilepsy guideline, but it is possible for us to invest the time and money and come up with a similarly inconclusive conclusion.

We also heard a wonderful summary of the variation and variability in how guidelines and other guidance products are developed, and that variability also exists across organizations. The AAN and the AES have somewhat different approaches to guidelines, to how evidence is used, and to what kinds of evidence are considered. So different institutions can look at the same evidence and come up with different recommendations. Here are some older guidelines for the appropriate treatment of new-onset focal seizures in adults.
In 2004, the AAN and AES had a joint guideline saying that these four medications could be used and were all equally acceptable. Then in 2006, the ILAE published rankings that were not exactly the same. So different approaches can summarize the same evidence and reach somewhat different conclusions.

I don't know if there are any stroke specialists in the audience, but in 2018 this was published in Stroke. The American Heart Association, together with the American Stroke Association, put out these wonderfully comprehensive, very long and involved guidelines for managing patients with acute ischemic stroke. This was very exciting; it told us everything we needed to know about stroke care. And then six months later, large portions of it were rescinded and eight sections were deleted. After the guidelines came out, there was a lot of debate and disagreement within the stroke community about the appropriateness of some of the recommendations. So even with the best methodology and the best intentions, there can be debate and disagreement. Many of the sections were ultimately kept, but some were felt not to be appropriate.

Even randomized trials can have imperfections. In all of the systems we looked at, GRADE and some of the others, there is a predisposition to say that randomized controlled trials are our highest level of evidence, although we should be examining them and grading them down if there are problems. But think of our seizure patients: there is always some degree of uncertainty about seizure counts, unless you're continuously monitoring someone on EEG, and I suppose even then we sometimes might debate. Seizure type, and especially epilepsy type, can be misclassified. We might not have appropriately captured all the risk factors, or provoked seizures. And there will always be some degree of unmeasured confounding. So depending on how certain we are about trial entry, bias or limitations can be introduced even into our randomized controlled trials. Of course, a structure is only as good as its foundation: even with the best intentions and the best methodology, if we don't have 100% certainty in our evidence, we can't have 100% certainty in our guidelines.

Atif mentioned time and resources. A systematic review, which is not a full guideline, but a review with PICO development, a literature search, and two experts reviewing the literature, deciding what will be included or excluded, and summarizing it, might in an ideal world take 12 to 18 months, and probably a lot longer. A typical guideline can take three years or more, sometimes much more, depending on how smoothly each of those steps goes. And did we mention cost? I suppose we have a little better deal than NICE; we're not in the hundreds of thousands yet for clinical practice guidelines, but they are quite expensive and quite resource-intensive. You can imagine that with all these resource demands, the AES especially, but other institutions too, can't take on too many projects at the same time. And because of the length of these timelines, guidelines may not reflect the most recent data.
So it's quite common, when a systematic review is performed, for the authors to do a quick update before publication to ensure they're including all of the studies that have come out in the 18 months since they started. With a guideline, since the timeline is even longer, they'll try to update, but something published while the final draft is being prepared might not make it in. And the published evidence will definitely lag clinical practice. We use a lot of lacosamide for status epilepticus, and I can tell you that while there are some randomized trials of lacosamide in status epilepticus, they are not keeping pace with clinical practice. So that evidence is not going to be incorporated into a status epilepticus guideline. And because of the time and resources needed for each guideline, there are definitely more demands and requests than resources.

You've heard about GRADE methodology. This is really a wonderful attempt to summarize all of the published evidence, to say how certain we are about it, and to distill everything out there, many thousands of publications, into essentially a one-line recommendation. In GRADE, this is what the evidence-to-decision table looks like. Once you've done your systematic review and meta-analysis, you have your certainty of evidence: how many good-quality randomized controlled trials are there, or do we have strong observational studies that are not biased? You can look at the magnitude of the desirable effects and the magnitude of the undesirable effects. That's all quantitative; it all comes from the evidence. But GRADE strives to incorporate other aspects as well: how much patients value those desirable and undesirable effects, the importance of the problem, and cost-effectiveness, though not nearly as involved as what is done in the UK. There are now also considerations for equity, for acceptability to patients and to the people implementing the recommendation, and for feasibility. I've split these out because many of them still require expert input; they take expertise, not just the numbers, into account. You can debate whether that's really a weakness or a strength, because it means our jobs can't quite be replaced by AI yet.

All right, dissemination and implementation are key, because if we spend our money and our time producing guidelines but no one adopts them, no one is aware of them, and they're not reaching the provider on the ground, then they haven't made a difference. EHRs are now a very common vehicle. If you've done any clinical care, you might have had a sepsis alert pop up; the integration of the Surviving Sepsis guidelines into EHRs has been one of the most widespread and successful implementations. But even with a successful implementation, we still need to understand the utility: are meaningful differences being made, and is care truly being improved? I think there's also a caution with EHR alerts: you see so many that you get used to dismissing them. There's definitely alert fatigue, so that's not the answer for every guideline.
By their nature, CPGs are generalized. Ideally, they'll have a lot of good data from large studies and will pool the results. However, that means the recommendations are going to be very nonspecific. They're not necessarily going to address a 27-year-old patient who had a first nocturnal seizure, who has this family history, and who drank two glasses of alcohol the night before. They won't always be specific to your particular patient, and they might not be applicable in all settings, both high- and low-resource. And right now, our clinical practice guidelines don't necessarily take the opportunity cost we heard about into account. So there are many unique patient situations that might not match the clinical practice guidelines.

Age and patient values are really key. Sometimes what we as clinicians think a patient's values are can be very different from what they actually are, and life expectancy and other comorbid conditions play in there too. Consider some of the primary care screening guidelines, or the cancer screening guidelines such as the annual mammogram or PSA prostate cancer screening. For a while, there was a concern that these were turning up very early, very indolent precancerous lesions that probably wouldn't cause any problems. If you identify one in someone who is 80, who might have other conditions and might be expected to die of something else before that cancer would ever make itself known, there's a big question about whether it's really appropriate to screen those people, and what the utility of treating those lesions is. Those considerations are not always taken into account.

And patient values can be so different. A patient might have had a bad experience with a prior medication and just isn't willing to start a medication. Or, and I'm thinking of one of my patients now, he had a terrible experience with a different type of surgery, and he's just not at all willing to consider epilepsy surgery. So sometimes patients have those circumstances, and we're left saying, okay, I'm not really following the guideline exactly, but what should I do? And as with any evidence- or literature-based search, there is the typical publication bias, wherein positive results are more likely to be published and negative results less so, although there are ways of assessing for that.

We touched a little on patient perspectives, and earlier speakers touched on them too, but delving more deeply: patients might have reproductive goals; maybe achieving pregnancy, or another life goal, is their priority, even more so than a particular treatment. Patients' caretaking roles might prevent them from getting certain types of treatment or from having surgery, and the same goes for other health concerns and conditions.

And then diverse perspectives, inclusion, and equity are often ignored. Pregnant patients and those with other health conditions are often excluded from clinical trials, although there's now a movement at the FDA to see how they can be incorporated. That's a big concern: if we're basing everything on clinical trials, we're not capturing our pregnant patients or those with comorbid conditions. Patient preferences might also keep them from entering clinical trials. And from some of the groups focusing on disparities, we've seen that there is a real underrepresentation of minoritized populations in randomized clinical trials.
There are many reasons for that. One is a historic lack of informed consent in minoritized populations. In treatment studies, you've probably heard of the Tuskegee syphilis natural history study, where Black men with syphilis were left untreated, without their knowledge or consent, to see what happened. This is a picture of Henrietta Lacks, a Black woman in Baltimore in the 1950s. She had cervical cancer and got treatment at Johns Hopkins. Her cervical cancer cells were collected as part of clinical care, but without her knowledge or consent they were then cultured in the lab and found to have this amazing property of being very hardy. They are now cultured as HeLa cells and are used in labs all over the world. She didn't know this, and her family didn't know this. When she ultimately passed away and her family found out that her cells were still growing, it was extremely distressing, and there has been much discussion since about what consent is needed. Because of those historical injustices, there is understandably a reluctance to participate in trials, studies, or experiments. So our trial demographics don't necessarily reflect our patient demographics.

Okay, so we've talked about a lot of limitations. What should we be doing instead? Because of the nature of the evidence, not all questions lend themselves to a clinical practice guideline; many can be addressed in other ways. Start with expert consensus or expert opinion. It's not just good old boys sitting around a table; there's a systematic, iterative approach called the Delphi process. Here's the city of Delphi in Greece; the method is named after the Oracle of Delphi. It was actually not developed in medicine. It was developed for the military in the 1950s and 60s, during the Cold War, because they wanted experts at the RAND Corporation to forecast what they thought future security concerns would be. So they developed this iterative process of getting experts in a room, seeing where they agreed and where they disagreed, and successively converging on what they could agree on. It has since been adapted for medicine and is used in many spheres; it's a way of achieving expert consensus in a rigorous way.

There are also reviews, which put things into context. These can be systematic; as we heard, that requires a PROSPERO registration and a very specific PICO question, and there's a strict process, along with many scales for grading quality, to ensure a high-quality systematic review. So there's an investment of time and resources there. Other types of reviews that are shorter and less structured are narrative reviews and scoping reviews, and it's possible that some questions proposed as clinical guidelines are better suited to a scoping or narrative review first, to see what evidence is out there. And then there are perspectives and policy guidance, as opposed to a full clinical practice guideline. Those are the alternatives.

Right now, the AES is considering whether our guidelines committee and our guidelines workflow should expand so that we can address a broader variety of questions. Should we also be doing rapid updates to existing systematic reviews and guidelines? Should we be in the business of producing expert consensus statements through a Delphi or other structured process?
Should we have member polls? And should we put out guidance or advisories that are not full clinical practice guidelines? Already, the Practice Management Committee and the Treatments Committee of the Council on Clinical Affairs put out statements and some guidance and advisories. So we're figuring out what member needs exist beyond the full clinical practice guidelines we've heard about.

I want to say a big welcome to the incoming GAC members; I know five people are joining our committee this year as others rotate off. Thank you all for coming.

So, our impact on clinical care: clinical practice guidelines are a base, as we heard, but not the only criterion. We should always tailor our treatment to a patient's unique situation. Thank you all very much for your attention and for coming.
Video Summary
Clinical practice guidelines (CPGs) have limitations, including a lack of individualized care and of diverse perspectives. They're resource-intensive and may not reflect recent data. Randomized controlled trials, considered the highest level of evidence, have limitations such as potential biases and publication bias favoring positive results. Guidelines often generalize recommendations and may not address specific patient needs, such as age, comorbidities, or unique patient values. Alternatives like expert consensus (using the Delphi process), systematic reviews, or narrative reviews could address broader questions not suitable for CPGs. Other alternatives considered include rapid updates to guidelines, expert consensus statements, and member polls. Effective dissemination and implementation are crucial, since guidelines must reach frontline providers to be impactful; however, adoption remains a challenge, with electronic health record alerts showing mixed success. Ultimately, CPGs are a base, but clinical care must be tailored to individual patient contexts and needs.
Asset Subtitle
Presenter: Emily Johnson, MD, FAES
Keywords
Clinical practice guidelines
Individualized care
Randomized controlled trials
Expert consensus
Implementation challenges