Episode 58: 2025 AACE Dyslipidemia Guideline Methodology

In this episode, join moderator Dr. David Lieb alongside esteemed panelists Dr. Shahnaz Sultan, Dr. Carol Peng, and Dr. Melanie Bird as they delve into the methodology behind the 2025 AACE Clinical Practice Guideline on Pharmacologic Management of Adults with Dyslipidemia.

Discover insights into:

  • Why AACE adopted the GRADE framework and its implications for guideline development.
  • The critical role of systematic reviews in creating evidence-based recommendations.
  • How evidence is assessed using the GRADE approach.
  • Practical takeaways for clinicians when implementing GRADE-informed guidelines in patient care.

Whether you’re a seasoned practitioner or new to the field, this discussion offers valuable perspectives on the evolution of evidence-based guideline development and its impact on clinical practice.


February 5, 2025


Speaker 1:

Welcome to AACE Podcasts. Thanks for tuning in as we elevate clinical endocrinology by taking deep dives into trends and topics that can help us improve our patient care and global health. Find the latest episodes on aace.com/podcasts. And now let's meet the endocrine experts who will be talking with us today.

David C. Lieb, MD, FACE, FACP:

Hello, and welcome to our AACE podcast. I am Dr. David Lieb, professor of Medicine in the division of Endocrinology at Eastern Virginia Medical School at Old Dominion University in Norfolk, Virginia. I also serve as our Endocrinology Fellowship Program director. I was an author of the 2023 protocol for the development of AACE clinical practice guidelines and consensus statements, and have been involved in work on enhancing the trustworthiness of clinical practice guidelines.

Joining me today are Dr. Carol Peng, Dr. Shahnaz Sultan, and Dr. Melanie Bird. We'll be taking a peek behind the scenes and discussing the methodology used to develop the 2025 AACE Clinical Practice Guideline on the Pharmacologic Management of Adults with Dyslipidemia. Thank you all for joining us today. Dr. Peng, could you introduce yourself and tell us a little bit about your area of expertise and your role on the guideline?

Carol Chiung-Hui Peng, MD:

Yeah, hi everyone. I'm actually finally a board-certified endocrinologist, because I just completed my fellowship at Boston University last year, in 2023, and I'm now practicing in Taiwan, but I'm still able to collaborate with everyone on this guideline development. I was very honored to be selected as the first-ever methodology fellow for the AACE practice guidelines task force during my second year of fellowship. I actually applied with very minimal expectations because the position was open to both early-career attendings and fellows. When I received the invitation for a Zoom interview on a Friday afternoon, around 5:00 P.M., I actually thought it might be a scam. But ultimately I think my focus on systematic reviews and meta-analyses made me a strong candidate, because my third original article, published in 2018, was a meta-analysis, and since then I have led or co-authored six meta-analyses and one scoping review project.

David C. Lieb, MD, FACE, FACP:

Awesome. Very cool. And congratulations. Dr. Sultan?

Shahnaz Sultan, MD, MHSc, AGAF:

Yes, thank you. It's a pleasure to be here. So I am a professor of medicine in the division of Gastroenterology, Hepatology and Nutrition. I'm also the vice chair for DEI in the Department of Medicine. And in a former life I used to be the program director of our GI fellowship program, and I started out my career as a health services researcher focused on colorectal cancer. But I currently spend most of my time very much in the guideline world. I am the former chair of the American Gastroenterological Association Clinical Guidelines Committee and a long-standing member of the GRADE Working Group and the US GRADE Network. And I focus a lot on trying to teach folks how to use GRADE as well as advance some of the methodology behind the GRADE framework.

David C. Lieb, MD, FACE, FACP:

Awesome. It's great to be surrounded by so many medical educators here, especially with regards to guideline development. And Dr. Bird?

Melanie D. Bird, PhD, MSAM:

Thank you, Dr. Lieb. I'm so excited to be a part of this. So I am the methodologist here on staff for AACE's Clinical Practice Guideline program. I have over 10 years of experience in developing clinical guidelines, consensus statements, health policies, et cetera. Prior to that I was a biomedical researcher in immunology as well, but basically research, data analysis, and evidence assessment are my passion. And so I was really excited to be part of AACE's first guideline using GRADE and to help implement that program here. And I'm also very excited to talk about it today.

David C. Lieb, MD, FACE, FACP:

Thank you. This is a super powerful team and I'm excited to hear everybody's thoughts about this guideline in particular. So Dr. Sultan, this is the first guideline from AACE that's used the GRADE framework. Can you give us a short background on GRADE and why we use it?

Shahnaz Sultan, MD, MHSc, AGAF:

Yeah, so GRADE is actually an acronym. It is short for Grading of Recommendations, Assessment, Development, and Evaluation. GRADE really began as a collaboration of experts dating back to about the year 2000, when many researchers, methodologists, statisticians, health specialists, and clinicians got together from a number of different specialty areas as well as from international organizations to develop a transparent approach to grading the quality or certainty of evidence and the strength of recommendations, primarily focused on healthcare. The framework is now considered the standard for guideline development, and it's been adopted by more than 200 organizations. The larger GRADE working group still remains a very informal group of individuals. It now has over a thousand members worldwide, and there are currently about 25 to 30 GRADE centers or GRADE networks internationally. And there's always ongoing work to continue to advance the methods behind guideline development.

David C. Lieb, MD, FACE, FACP:

And Dr. Bird, can you tell us why AACE decided to adopt GRADE and what that means for our guideline program?

Melanie D. Bird, PhD, MSAM:

Absolutely. So back in 2022, AACE adopted a new strategic plan with the goals of increasing awareness and use of its guidelines, publications, and other educational resources by the endocrine community worldwide. As part of that, AACE adopted new policies and processes to enhance the guideline development program, including the adoption of GRADE, which helps us standardize across different organizations. As Dr. Sultan mentioned, since GRADE has been adopted so widely, this helps us align better with other guideline development groups and organizations globally. AACE also updated its conflict of interest policy and the process for forming task forces, with a goal of enhancing diversity, equity and inclusion as part of that process. And so really we're looking at raising the game, right? We want to enhance the guideline program, and we want to adopt best practices for guideline development so that we can be a major player in that field.

And to that end, we also joined the GRADE working group as well as the Guidelines International Network. This is really to provide opportunities for AACE members to participate in guideline development, for all of us to learn those best practices, and to understand how to best implement and strategize on increasing and enhancing our guideline development program. And this is really just about providing trustworthy guidance that is patient centered and facilitates shared decision-making, which is foundational to AACE's main mission of improving global health.

David C. Lieb, MD, FACE, FACP:

I think the trustworthiness piece is very important and talking about changes with respect to the conflict of interest policies is key. And of course, making something very patient centered is important too. Dr. Peng, you are our first methodology fellow, which is very exciting for the organization and for our guidelines moving forward. Can you tell us about your experience in helping to develop the guideline and what were some of the key steps for you in the process?

Carol Chiung-Hui Peng, MD:

Sure. I actually feel very privileged to be part of this project, because this is the first ever and I'm so lucky to be one of the team; at my level, as just a fellow, I normally wouldn't have the chance to participate in guideline development. I was the one reading the guidelines and trying to memorize all the details of the guidelines. I have to say my experience as the methodology fellow has been overwhelmingly positive. The team has been incredibly supportive, even when I had to relocate to Taiwan and prepare for my US endocrinology board during the critical stage of the guideline project. I was able to make time for it and still stay on top of my tasks despite a demanding schedule, and that came from the teamwork, especially Dr. Bird's effort in planning ahead and setting clear goals and project stages. Initially I was kind of intimidated because I had never used GRADE before.

What I had was my past research experience in systematic reviews, and I didn't know how I could contribute. But Dr. Bird and Dr. Sultan and everyone guided me through every step of the way so that I could learn how GRADE works and then contribute my expertise in systematic review and meta-analysis. I have to say the key step I contributed to was the systematic review process, where I helped fine-tune the PICO questions. In PICO, P stands for patient population, I stands for intervention, C stands for comparison, and O stands for outcomes. Dr. Bird mentioned patient-centered outcomes, and we actually formulated the questions around patient-centered outcomes to help develop this guideline. And because of my background in endocrinology and also a little bit of background in methodology, I was able to bridge the gap between the task force and the methodology team.

I also had experience setting up search strategies with librarians on a previous project, so I was able to work together with a librarian to craft the most appropriate, doable search strategy, because we don't want too many or too few search results. We eventually had 4,000 abstracts to go through, which is a tremendous task for everyone, and we needed two independent reviewers to go through the process; I was one of them. Working with the librarian and the whole team on the Providence platform simultaneously, so that we could go through the screening together, was a very critical stage of the whole guideline development.
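For readers who want to see the structure Dr. Peng describes laid out concretely, here is a minimal sketch, in Python, of a PICO question captured as structured data. The specific question and outcomes below are hypothetical illustrations, not questions taken from the guideline.

```python
from dataclasses import dataclass


@dataclass
class PICOQuestion:
    population: str      # P: the patient population of interest
    intervention: str    # I: the intervention being evaluated
    comparison: str      # C: the comparator (placebo, usual care, another drug)
    outcomes: list[str]  # O: the patient-important outcomes to be rated


# Hypothetical example question, for illustration only
example = PICOQuestion(
    population="adults with dyslipidemia at elevated cardiovascular risk",
    intervention="adding a non-statin lipid-lowering agent",
    comparison="statin therapy alone",
    outcomes=["all-cause mortality", "myocardial infarction", "stroke"],
)
print(example)
```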

David C. Lieb, MD, FACE, FACP:

Dr. Sultan, what's unique to GRADE compared to the other frameworks that are out there that organizations might use?

Shahnaz Sultan, MD, MHSc, AGAF:

That's a good question. So before GRADE, there were so many different varying systems or frameworks out there for guideline development and often every organization or society would develop their own framework. And what it led to was a lot of confusion among end users, the clinicians, the patients who are trying to understand how to utilize these recommendations to improve or provide the best care. And so, one of the main goals of the GRADE working group was to have one unifying kind of framework that could be used across societies, across organizations, across other guideline producing countries so that there was at least some uniformity in how to apply and understand recommendations. So the basic foundation to the GRADE framework is that there's transparency and there's explicit criteria used to really understand the evidence. And so there's an emphasis on this methodologic rigor. I would say the biggest difference in GRADE is that it's not necessarily study-centric or study-design-centric, but rather it's outcome-centric and GRADE places a high value on really understanding what are the patient important outcomes for decision-making.

And there's a lot of time spent on trying to understand and identify those outcomes, and understanding what the important thresholds are for each of those outcomes to help with decision-making. There's a big emphasis on the totality of evidence. So as Dr. Peng alluded to, at the basis of every PICO question is a systematic review of the evidence. And so looking at that totality of evidence is really, really important. Then there's making judgments about how confident you are in the effects that are coming from that body of evidence, with explicit criteria for making judgments about factors that might reduce your confidence or actually increase it. And finally, taking all of that work around the evidence review and coming out with a recommendation. There's, again, a very systematic approach looking at the trade-offs, the benefits and the downsides, the desirable effects and the undesirable effects, as well as patient values and preferences and other criteria that are part of what we call the evidence to decision framework, around feasibility, acceptability, cost and resource use, as well as equity implications.

David C. Lieb, MD, FACE, FACP:

One of the things that I think is also important with respect to guidelines is timeliness. How quickly can they come out as the evidence is changing over time? And sometimes guidelines come out after a long period of time and they're kind of bloated. There's guideline bloat with just way too much information. Does the GRADE methodology and the format and structure and framework allow for more rapid guideline development and for more focused guideline development?

Shahnaz Sultan, MD, MHSc, AGAF:

I think you're highlighting an important limitation that often comes up with guidelines that in especially rapidly advancing fields, sometimes by the time the guideline comes out, it's already perhaps outdated. I think within GRADE, if you have a really well done systematic review that you can appraise the quality of and utilize or update, that actually significantly decreases the amount of time and effort that it might take to actually develop a recommendation for that specific question. But I think there's a lot more work being done around these newer techniques for developing living systematic reviews or living guidelines, also incorporating AI, artificial intelligence in terms of helping to decrease the steps it takes for the systematic review or the evidence synthesis. So I think these are all challenges that we're recognizing and trying to address.

David C. Lieb, MD, FACE, FACP:

I was definitely going to ask about artificial intelligence and its use in developing guidelines. So that might be a topic for another podcast hopefully in the future. Dr. Bird, how is evidence assessed using GRADE?

Melanie D. Bird, PhD, MSAM:

So as Dr. Sultan outlined initially, there are defined domains and frameworks that we walk through that help us determine our confidence in the estimate of effect. So for example, how well does an intervention work? How much does it decrease undesirable effects, the mortality, the risk of stroke, the risk of heart attack, those sorts of things, and how confident are we in that difference? There are multiple domains for assessing that. It starts with the systematic review that we do, and for this guideline in particular, we were very fortunate and found some published systematic reviews that were of high enough quality that we could basically use them directly or just update them with some newer studies that had come out. That actually did save us quite a bit of time and headache, so that we didn't have to do all of them from scratch, because for this guideline we did around seven systematic reviews.

So either finding the published reviews and updating them or thoroughly going through them, or actually doing our own, and then creating our own meta-analyses as well to find those point estimates for the different outcomes. GRADE uses multiple domains, and each time it is outcome specific. So for each outcome that was prioritized by the task force, we then looked at the evidence and, if we were able to, we did a pooled estimate of effect. So again, how much the intervention increased or decreased that outcome based on the trials. And I just want to highlight that these really are patient-important outcomes. For this guideline, we really prioritized things that would be important to patients, so things like mortality, heart attack, risk of stroke, surgical procedures, amputation, things like that. For other guidelines that may be quality of life; it could be lots of things, but we're really trying to avoid reliance on what we call disease-oriented or intermediate outcomes.

So lab values, right? Because for most patients, they're more concerned about whether or not they'll have a heart attack over maybe what their LDL cholesterol level is. While they correlate a lot of times, that isn't something that patients are overly concerned with. They really want to know what are those downstream effects that are important to me and my quality of life. So that's what we're looking at, is those individual outcomes. So we also try to avoid composite outcomes. So we're not necessarily looking at MACE or major adverse cardiovascular events. We're going to separate those out. And I think that is going to be one of the things that is going to be newer for a lot of AACE members reading these guidelines, is that they're going to say the task force weighed these outcomes individually and they didn't rely on this bigger outcome that rolled up all those individual outcomes together.

So we're really taking a lot of care to break that down and to really weigh the different benefits and harms at each step. What we're looking at, once we have those results, is how confident we are that the point estimate is true. Is what we're seeing the true effect? Is it benefit? Is it harm? Is there no effect? And is that estimate of effect adequate to support clinical decision making? So we really want to think about things that are clinically important, not statistically important. Even though it's really difficult for all of us who relied so heavily on P-values, we throw all of those out the window, and we're really looking at what that clinically important threshold is and whether the intervention crossed it. Did the effect reach that threshold or did it not? So with that, we're looking at a bunch of different domains, and this is where it gets a little bit complicated.

So I'll try to keep it high level, but really we're working on two different tracks. While we're not study-design centric, we do start out based on study design, so randomized controlled trials start out at high certainty and non-randomized or observational studies start out at low. For this guideline, we prioritized RCTs because we're talking about a lot of different medications, so we looked at RCTs first. And when we see those RCTs, we're going to look to see whether we have concerns about risk of bias. Were the studies done in a way that controlled for bias? Were they blinded appropriately? Was allocation done effectively? Did they report all the outcomes? Was the trial stopped early? All those sorts of things. Do we feel like the studies were done well enough that bias wasn't introduced?

We then are going to look at consistency across studies. So for many of the medications, we had several trials, 10, 11, 12. So was there consistency across that or were the results from all the studies really variable and all over the place? We look at directness. How close to the PICO question were the elements of those trials? So did it have this population of interest? Did it have the intervention that we were interested in, the comparison, and did it report the outcomes that we were interested in? And then we looked at precision. So we want to know how precise is it? And in this case, we're looking at when we see those estimates of effect, okay, so when we see those study results, again, are they consistent? Are they kind of all aligned? What about our confidence intervals? Are there huge, wide confidence intervals that span sort of that threshold for decision-making?

Do they include both benefit and harm? If they do, then that's going to lower our confidence in the estimate of effect. So, quick example. For some of the interventions we were looking at, we had a threshold of, say, a decrease of five strokes per thousand individuals, because we're concerned with those absolute risks. And for some of the results, while the summary estimate may have been four or five fewer strokes, the confidence interval went from 10 fewer strokes up to two more strokes. So that's a lot of uncertainty there. That's not very precise. And so we want to be really careful: if we're going to potentially make a recommendation for an intervention, are we actually achieving the desirable effect, decreasing strokes, or would we maybe cause harm? And if we don't know or we're uncertain about that, we're going to rate down.
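To make that imprecision judgment concrete, here is a minimal sketch, in Python, of how an absolute risk difference per 1,000 and its confidence interval might be computed and checked against a decision threshold. The trial counts and the 5-per-1,000 threshold are hypothetical and are not figures from the guideline.

```python
import math


def risk_difference_per_1000(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Absolute risk difference per 1,000 patients with a 95% CI (normal approximation)."""
    p_tx, p_ctrl = events_tx / n_tx, events_ctrl / n_ctrl
    rd = p_tx - p_ctrl  # negative = fewer events with the intervention
    se = math.sqrt(p_tx * (1 - p_tx) / n_tx + p_ctrl * (1 - p_ctrl) / n_ctrl)
    return tuple(round(x * 1000, 1) for x in (rd, rd - z * se, rd + z * se))


# Hypothetical counts: 10/2,000 strokes with the intervention vs. 18/2,000 with control
point, lower, upper = risk_difference_per_1000(10, 2000, 18, 2000)
print(f"{point} per 1,000 (95% CI {lower} to {upper})")  # about -4 (-9.2 to 1.2)

THRESHOLD = -5  # hypothetical decision threshold: at least 5 fewer strokes per 1,000
if upper > THRESHOLD:
    # The interval includes effects smaller than the threshold (and here, possible harm),
    # so under this judgment we would rate down the certainty of evidence for imprecision.
    print("Imprecise relative to the threshold -> rate down for imprecision.")
```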

So things that may have started out as high certainty may end up at moderate or low certainty of evidence, or even very low, depending on how much information we have. And again, this is done for each outcome. So we would say, okay, here's the outcome for stroke, here's the outcome for mortality. What is our certainty of evidence? And we're going to look at all of those and make a decision. The final domain for RCTs is publication bias. This is one where we're really thinking about whether we are missing studies. For those people who have been publishing for a long time, we know that studies with positive results get published faster and more often than studies with negative results. It's just the way it is. So we want to know: are we seeing skewed results because we're only seeing the small positive trials that may be industry funded, while the bigger trials or the trials with negative results didn't get published and we're missing those?

So we want to just be careful about that and think through whether we feel like we have that totality of evidence or are really missing key information. Flipping over to non-randomized studies, these are observational studies, and they're going to start out at low, but that certainty can be raised if we see a few things. If we see a really large effect, we may feel more confident that, okay, this intervention does something. If we see a dose response occurring, then again, maybe we have more confidence in the effects; we are actually seeing an increase with the increased dose, or vice versa. And then there's the favorite domain for a lot of us, because it's one that you really have to think through and it can be a bit of a sticky wicket, which is what we call residual confounding. With observational or non-randomized studies, we worry about the introduction of confounding variables, right?

Something that's going to change things or mask the true effect. And for this domain, if we still see an effect despite the potential presence of confounders lowering or maybe masking that effect, then we're going to feel a little bit more comfortable that it's true. You know what, even with all of these confounders maybe lowering what the true effect is, we still see an effect, so it is probably okay, and our confidence might be a little higher. For this guideline, we didn't do as much with non-randomized studies, just by the nature of the questions, so we didn't cover this domain a lot. But in the public health literature in particular, these domains are things that they spend a lot of time thinking about, and there's great guidance from the GRADE working group and others on how to think through all of that in a very systematic way.

So, having done all of this work, we then look across our outcomes, which have been rated either critical, important, or not important for clinical decision-making. Within those critical outcomes, we ask: what is the lowest certainty of evidence? And that's what we take as our overall certainty for that question. That then gets fed into the evidence to decision framework, which helps us start thinking about the recommendation and the direction and the strength of that recommendation.
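As a small illustration of that final roll-up step, the sketch below takes the lowest certainty among the outcomes rated critical as the overall certainty for the question. The outcome names and ratings are hypothetical, not the guideline's actual ratings.

```python
CERTAINTY_ORDER = ["very low", "low", "moderate", "high"]


def overall_certainty(outcomes: dict[str, tuple[str, str]]) -> str:
    """outcomes maps an outcome name to (importance, certainty); returns the
    lowest certainty among the outcomes rated critical for decision-making."""
    critical = [cert for importance, cert in outcomes.values() if importance == "critical"]
    return min(critical, key=CERTAINTY_ORDER.index)


ratings = {
    "all-cause mortality":   ("critical", "high"),
    "stroke":                ("critical", "moderate"),
    "myocardial infarction": ("critical", "moderate"),
    "LDL-C level":           ("not important", "high"),  # intermediate outcome, not rolled up
}
print(overall_certainty(ratings))  # -> "moderate"
```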

David C. Lieb, MD, FACE, FACP:

And that is a beautiful segue into my next question. Dr. Peng, tell us about the evidence to decision framework that's part of GRADE. I think that's something that's novel about the GRADE methodology.

Carol Chiung-Hui Peng, MD:

Yeah. After that very nice overview from Dr. Bird of the very comprehensive evidence generation process, we have to discuss within the task force why we made each decision, why we rated down or rated up. We have a lot of discussion among our members, and we need to let our readers, well, our users, know why we made this decision. We need to give them the rationale. In the past, guidelines didn't really give much rationale for why they made a particular recommendation. And the evidence to decision framework reminds us that the evidence we generated cannot tell us everything by itself; we need to take other things into consideration, because we should not just rely on clinical data or numbers. We want to be patient focused. So as Dr. Sultan already mentioned, we need to take feasibility, accessibility, equity, and patient preference into consideration. Some of the medications we looked at are very, very novel and very expensive, but are they cost-effective?

They may generate very trivial, small benefits, but they are super expensive. Would they be suitable for every patient? So we need to give the rationale for how we are going to apply the recommendations, because clinicians practice in different areas with different resources, and not everyone can access the medication or the test. Sometimes we need to send out labs, and that may not be feasible for someone in a rural area. And so the evidence to decision framework goes through many domains. It would be too many to go through all of them here, so I'll just take a couple as examples: the cost-effectiveness I mentioned, and also equity, because that is something we always need to know. Is there any disparity, and could inequity possibly be reduced?

Shahnaz Sultan, MD, MHSc, AGAF:

I'm happy to add to Dr. Peng's explanation so far. Dr. Peng, you did a good job of summarizing the fact that it's not just the overall certainty of the evidence, or just the evidence review, that we can directly use to make the recommendation; we really have to think about the overall totality of evidence and the trade-offs. So when you're talking to a patient and you're telling them to start a new medication, you really want to inform them about the potential benefits they could get, but also the potential downsides related to harms. And so having that trade-off in mind is really, really important when you're developing the recommendation. So there's a lot of emphasis on thinking about what the overall benefits or desirable effects are, what the overall undesirable effects or downsides or harms are, and what that trade-off really is.

And then you alluded very nicely to some of the other really, really important domains that we need to think about, which are around patient values and preferences. Different patients may value different outcomes differently, and that might inform their decision to follow or not follow a recommendation. So patients who place a high value on reducing strokes might make a different choice than individuals who place a higher value on reducing the burden of taking a medication and the potential downsides or harms that might occur from that medication. So it's really important to think about patient values and preferences.

And then lastly, you talked about feasibility. Is this intervention feasible to implement? What are the cost implications or resource implications? How acceptable is the intervention? And really importantly, you highlighted equity. I think for a long time we've perhaps not explicitly thought about equity implications. Within the framework, it really forces the panel to think about whether we are going to further inequities, and to consider, in the context of what this recommendation is doing, how we will make sure that every patient has the opportunity to actually realize the potential benefits of our recommendation. So that's an important explicit consideration.

David C. Lieb, MD, FACE, FACP:

I think, Dr. Sultan, you've kind of gotten at one of my final questions, which was what clinicians need to know about GRADE when implementing guideline recommendations into their clinical practice, which is what this is all about. And I'm actually interested in your experience: how has being involved in clinical practice guideline development changed your practice as a physician?

Shahnaz Sultan, MD, MHSc, AGAF:

Oh, another good question. I think at the end of the day when we as clinicians are caring for our patients, what we really need to know is do I recommend something to a patient? Do I not recommend something to a patient? So this is getting at kind of the direction of a recommendation statement. Are you going to make a recommendation for something or a recommendation against something for a patient? And then getting at the strength of that recommendation. So within GRADE, there's explicit language that's used to articulate the strength of the recommendation and the direction of the recommendation. So strong recommendations versus conditional or weak recommendations. And the reason that's important is because at the end of the day when you're seeing a patient, you really have to give them some opportunity to come to a conclusion one way or the other. And for strong recommendations, perhaps there's less discussion with the patient.

Because what it implies is that there's clearly a trade-off of really big, big benefits and maybe small downsides. And for the majority of patients, this is what you should be doing. This is really the right thing to do. So for example, to borrow from Nike, and we use this example all the time when we're teaching: just do it. You're not going to have a long conversation with a patient about, "Well, these are the pros, these are the cons. What do you think should be the best way forward? I want to understand what your values and preferences are." So strong recommendations are often, just do it. But conditional recommendations, which are really the majority of decisions that we have to make and really how we often practice with our patients, are where we're really trying to share with the patients, "These are the potential benefits, these are the potential downsides. What do you think would be the best way forward? What do you think about doing this or not doing this?"

And that is what a conditional recommendation really encourages, that shared decision-making approach, highlighting what we know are the potential benefits, what we know are the potential downsides, and then coming to a decision that aligns with what the patient really wishes to do. How has that changed what I do? I think I spend a lot more time actually having a more informed shared decision-making conversation. So one of the nice things about these evidence profiles that we call them, which are like the summary of the treatment effects, let's say, across all these different outcomes, at the end, using this framework, you can actually tell a patient, "If you take a medication for at least five to 10 years, your potential risk of having a stroke is five fewer per thousand. And the likelihood of you having a potential side effect is this many more adverse effects per thousand."

So it really allows you to have a more explicit conversation with the patient to understand what are those benefits and downsides. I think in the past, I used to speak more vaguely. "I think if you take this medication, data suggests that perhaps you'll have less cardiovascular events and perhaps you'll have this many fewer chance of you dying over the next few years. It's lower." But now I actually utilize a lot of that evidence that we have to develop the recommendation directly with my encounters with my patients.
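As an illustration of where a per-1,000 figure like that can come from, here is a minimal sketch that converts an assumed baseline risk and a pooled relative risk into an absolute difference per 1,000 patients. The numbers are hypothetical and are not taken from the evidence profiles Dr. Sultan mentions.

```python
def events_per_1000(baseline_risk: float, relative_risk: float) -> float:
    """Absolute difference per 1,000 patients treated (negative = fewer events)."""
    return round((baseline_risk * relative_risk - baseline_risk) * 1000, 1)


# e.g. a hypothetical 2% baseline risk of stroke over the treatment horizon and a pooled RR of 0.75
print(events_per_1000(0.02, 0.75))  # -> -5.0, i.e. 5 fewer strokes per 1,000
```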

David C. Lieb, MD, FACE, FACP:

I love that you brought up shared decision-making. I think that's a hot word or hot phrase in medicine right now, but it's incredibly important because it's what we do every day in clinic and in the hospital. And I think you explained very nicely how your experience as part of being involved in guideline development has shaped how you approach shared decision-making. And I had a very similar experience when I was involved in a clinical practice guideline in the past, and I wanted to ask Dr. Bird, you mentioned that there's a team that gets together and ranks outcomes and makes a lot of these decisions, determines what the PICO questions are going to be that Dr. Peng mentioned. How can AACE members get involved in the clinical practice guideline process? And I should say that I was the co-chair for the Empanelment task force for this guideline, specifically with my co-chair, Dr. Shailendra Patel. So I know a little bit about what that process is like, but I wanted to make sure that you had an opportunity to talk about that too.

Melanie D. Bird, PhD, MSAM:

Yeah, absolutely. First and foremost, read the guideline, look through it, and get a sense of what it looks like. I will say, as a researcher and someone whose job is basically to appraise and assess other organizations' guidelines, one of the best things is that the way AACE is doing it now is extremely transparent, which increases the trustworthiness. You can actually look at all the judgments the task force made, that discussion, and you can really understand why the recommendation is the way it is, why it's conditional or strong, and the task force is really great in spending a lot of time and being very thoughtful in that process. So obviously read the guideline, understand the process, and then when AACE is ready to do the next one, or really any guidance document, the new process is that we do an open call for authors.

So this means anyone who is an AACE member can submit an application, fill out a conflict of interest disclosure form, and apply to be on the task force. We then have an empanelment work group that reviews those applications. They're really looking for that balance of expertise, years in practice, and then DE&I. So where are they practicing? What is their patient population? What is their passion regarding patients' preferences and values, DEI, all those sorts of things? Then that work group forms a slate that goes through several levels of approval before the task force is finalized. We had such success and such a wonderful task force with just really great opinions. It was multidisciplinary, and that's what we're really striving for. So we had obviously our endocrinologists, but we also included a clinical pharmacist, and we included a family physician to get that primary care perspective as part of the task force, which I think is really, really important.

When organizations make guidelines, they want them adopted widely. But to do that, you have to have people at the table, right? You have to include them in the conversation if you want them to understand and adopt the guidance. The final thing AACE did as part of getting everyone to the table for this guideline is that we actually incorporated patient review of the recommendations. We checked with them and said, "Did we get the right outcomes? Are these outcomes important to you? Did we do that correctly? Do the recommendations make sense? Looking at some of the shared decision making tools that we incorporated, do they help you understand what these different medications do and what the potential harms are?" And that was really, really exciting. And we had wonderful patient representatives from WomenHeart who looked over everything very thoroughly and gave wonderful feedback on it.

So that's also important: making sure that all stakeholders, all the people who might be impacted by these recommendations, are really included in that process, because again, that's going to enhance the trustworthiness of these guidelines. I think that's just really, really exciting from start to finish. AACE has been so invested, obviously, by bringing in amazing experts like Dr. Sultan, bringing in a medical librarian, and prioritizing methodology fellows like Dr. Peng who practice in completely different settings and bring that experience. And so I think overall it's just been a pleasure to work on this guideline.

David C. Lieb, MD, FACE, FACP:

And Dr. Peng has laid the foundation, I think, for the role of the methodology fellow. I know that there are people listening to this podcast who are fellows or junior faculty that are going to be interested in that position for future guidelines. Dr. Bird, what's that process look like?

Melanie D. Bird, PhD, MSAM:

It's the same open call, and individuals apply. And while we love to have the younger generation, our fellows, we understand that they can be extremely busy as well, so this is actually open to any career stage. Anyone who wants to learn more about guideline development can apply to be a methodology fellow, whether it's for a guideline or a consensus statement. And obviously I'm going to hand it to Dr. Peng in just a second to give her viewpoint, but there's extra benefit: you get more training, more understanding, and the chance to ask those questions. You get to interact with experts in the field and gain a broader understanding of systematic reviews, narrative reviews, literature searches, all those sorts of things. Those are the real benefits of participating as a methodology fellow. And Dr. Peng, please, I would love to hear your thoughts about the whole program.

Carol Chiung-Hui Peng, MD:

I have been highly recommending this methodology fellow position, if there are any openings, to anyone who is an AACE member. I tell them: you just become a member and then you'll receive the emails, I don't know how often, maybe once a week, and they'll call out the process for applying as a methodology fellow or as an author, as Dr. Bird mentioned. So I always pay attention to that information in the email. It's always at the bottom of the email, I'm not sure why, but I always scroll down to the bottom and look at the deadline for anything, and I apply to the positions I'm interested in, and that's how I got selected. It doesn't take too much time to prepare the application documents. You basically just need to submit your CV, which you usually have handy, and I forget what else I needed to submit, but it probably didn't take me more than 30 minutes to do the submission process. So it's low-hanging fruit for a fellow to get more experience if they are interested in becoming a methodology fellow.

David C. Lieb, MD, FACE, FACP:

Excellent. Thank you. And I want to thank all of you. Thank you, Dr. Peng, Dr. Sultan, and Dr. Bird for joining me for this podcast. And thank you to everybody who's listening. The work we've discussed today, I think, underscores the rigor and the transparency and the patient-centered focus that AACE strives to bring to its guidelines. And by adopting the GRADE methodology, AACE is enhancing the trustworthiness of its guideline process, helping clinical endocrinologists to make informed decisions that ultimately lead to better patient care. To read the full guideline and to view our other podcast on the guideline recommendations and key updates, visit pro.aace.com/clinicalguidance. Thank you.

Speaker 1:

Thanks for listening to another great AACE podcast. Join us for another episode at aace.com/podcasts and help us in our mission to elevate clinical endocrinology. Together we are AACE.
