Critical Appraisal of Clinical Research – Free full-text /PMC5483707/ – J Clin Diagn Res. May 2017
This article is a primer on how to read research studies: what to look for and what to question.
Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, together with the patient’s values and expectations, into the decision-making process for patient care.
It is a fundamental skill to be able to identify and appraise the best available evidence in order to integrate it with your own clinical experience and your patients’ values.
The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.
Introduction
Critical appraisal is the process of carefully and systematically examining research to assess its reliability, value, and relevance in order to guide professionals in their vital clinical decision making.
Critical appraisal is essential to:
- Combat information overload;
- Identify papers that are clinically relevant;
- Support continuing professional development (CPD).
Carrying out Critical Appraisal:
Assessing the research methods used in the study is a prime step in its critical appraisal. This is done using checklists which are specific to the study design.
Standard Common Questions:
- What is the research question?
- What is the study type (design)?
- Selection issues.
- What are the outcome factors and how are they measured?
- What are the study factors and how are they measured?
- What important potential confounders are considered?
- What is the statistical method used in the study?
- Statistical results.
- What conclusions did the authors reach about the research question?
- Are ethical issues considered?
Critical appraisal starts by examining the following main sections:
I. Overview of the paper:
- The publishing journal and the year
- The article title: Does it state key trial objectives?
- The author(s) and their institution(s)
II. Abstract:
Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.
- Aim of the study: It should be clearly stated.
- Materials and Methods: The study design, the groups and how they were formed, the type of randomization process, the sample size, gender, age, the procedure given to each group, and the measuring tool(s) should all be clearly described.
- Results: The measured variables with their statistical analysis and significance.
- Conclusion: It must clearly answer the question of interest.
III. Introduction/Background section:
An excellent introduction will thoroughly reference earlier work related to the area under discussion and express the importance and limitations of what is already known.
- Why is this study considered necessary? What is its purpose? Was the purpose identified before the study, or was it a chance result revealed as part of ‘data searching’?
These questions should be asked before the research is conducted and paid for by taxpayers. Why do we need yet another study about the evils of opioids?
- What has already been achieved, and how does this study differ?
- Does the scientific approach outline the advantages along with possible drawbacks associated with the intervention or observations?
IV. Methods and Materials section:
Full details of how the study was actually carried out should be given: precise information on the study design, the population, the sample size, and the interventions presented. All measurement approaches should be clearly stated.
V. Results section:
This section should clearly reveal what actually occurs to the subjects. The results might contain raw data and explain the statistical analysis. These can be shown in related tables, diagrams and graphs.
VI. Discussion section:
This section should include a thorough comparison of what is already known about the topic of interest with the clinical relevance of what has been newly established.
A discussion of possible limitations and of the need for further studies should also be included.
Even here, they promote the practice of always hedging bets by saying “more research is needed”.
Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study? (Analyzing participants in the groups to which they were originally randomized, regardless of dropout or crossover, is called intention-to-treat analysis.)
- Does it address any source of potential bias?
- Are interpretations consistent with the results?
- How are null findings interpreted?
- Does it mention how the findings of this study relate to previous work in the area?
- Can they be generalized (external validity)?
- Does it mention their clinical implications/applicability?
- What are the results/outcomes/findings applicable to, and will they affect clinical practice?
- Does the conclusion answer the study question?
Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.
1-What is the research question?
For a study to have value, it should address a significant problem within healthcare and provide new or meaningful results.
There are five broad categories of clinical questions, as shown in [Table/Fig-1]:
[Table/Fig-1]: Categories of clinical questions and the related study designs.
- Aetiology/Causation: What caused the disorder, and how is this related to the development of illness? Suggested designs: randomized controlled trial, case-control study, cohort study.
- Therapy: Which treatments do more good than harm compared with an alternative treatment? Suggested designs: randomized controlled trial, systematic review, meta-analysis.
- Prognosis: What is the likely course of a patient’s illness? What is the balance of the risks and benefits of a treatment? Suggested designs: cohort study, longitudinal survey.
- Diagnosis: How valid and reliable is a diagnostic test? What does the test tell the doctor? Suggested designs: cohort study, case-control study.
- Cost-effectiveness: Which intervention is worth prescribing? Is a newer treatment X worth prescribing compared with older treatment Y? Suggested design: economic analysis.
2- What is the study type (design)?
The study design of the research is fundamental to the usefulness of the study.
Participants/Sample Population:
Researchers identify the target population they are interested in, draw a sample from it, and then generalize the results from that sample to the target population.
The sample should be representative of the target population from which it came.
Who decides whether this has been achieved?
Sample size calculation (power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if one exists.
Statisticians can work out, before the trial begins, how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups.
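As a concrete illustration, here is a minimal sketch of such a calculation in Python, using the standard normal-approximation formula for comparing two group means; the effect size, alpha, and power values are illustrative assumptions, not prescriptions.

```python
# Minimal power-calculation sketch: approximate sample size per group for a
# two-arm trial comparing means, using the normal-approximation formula
#   n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
# where d is the standardized effect size (difference in means / SD).
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group to detect a given standardized effect size."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # quantile corresponding to desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# Illustrative values: a "medium" effect (d = 0.5), 5% two-sided alpha, 80% power.
print(round(sample_size_per_group(0.5)))  # roughly 63 participants per group
```

A trial much smaller than this would have a high chance of missing a real effect of that size, which is why underpowered negative studies are hard to interpret.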
3-Selection issues:
The following questions should be raised:
- How were subjects chosen or recruited? If not random, are they representative of the population?
- What type of blinding (masking) was used: single, double, or triple?
- Is there a control group? How was it chosen?
- How are patients followed up? Who are the dropouts? Why and how many are there?
- Are the independent (predictor) and dependent (outcome) variables in the study clearly identified, defined, and measured?
- Is there a statement about sample size issues or statistical power (especially important in negative studies)?
- If a multicenter study, what quality assurance measures were employed to obtain consistency across sites?
- Are there selection biases?
Researchers employ a variety of techniques to make the methodology more robust, such as:
- matching,
- restriction,
- randomization, and
- blinding.
Bias produces results that deviate systematically from the truth.
This is exactly what I’ve found in all the studies on opioids: a very strong anti-opioid bias.
The studies are designed to find problems with opioids and therefore use selected samples and tortured statistics to get the results they are paid (by their source of funding) to produce.
And… doesn’t “systematic deviation from the truth” just mean “lying”?
As bias cannot be measured directly, researchers need to rely on good research design to minimize it.
4-What are the outcome factors and how are they measured?
- Are all relevant outcomes assessed?
- Is measurement error an important source of bias?
5-What are the study factors and how are they measured?
Data Analysis and Results:
- Statistical significance should be assessed.
- How strong is the association between intervention and outcome?
- How precise is the estimate of the risk?
- Does it clearly mention the main finding(s), and do the data support them?
- Does it mention the clinical significance of the result?
- Are adverse events, or their absence, mentioned?
- Are all relevant outcomes assessed?
- Was the sample size adequate to detect a clinically/socially significant result?
- Are the results presented in a way to help in health policy decisions?
- Is there measurement error?
- Is measurement error an important source of bias?
Confounding Factors:
A confounder has a triangular relationship with both the exposure and the outcome, yet it is not on the causal pathway. It can make it appear as if there is a direct relationship between the exposure and the outcome, or it can mask an association that would otherwise have been apparent.
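A toy numeric sketch may help; all counts below are invented purely to illustrate how a confounder (say, smoking) that is linked to both an exposure and an outcome can manufacture a crude association that vanishes within strata.

```python
# Toy confounding illustration (all counts invented): the exposure looks
# associated with the outcome overall, but the association disappears once
# we stratify by the confounder, because the confounder drives both.

def rate(cases, total):
    return cases / total

# Stratum 1: confounder present (outcome rate 20% regardless of exposure)
exposed_1, unexposed_1 = rate(30, 150), rate(10, 50)     # both 0.20
# Stratum 2: confounder absent (outcome rate 4% regardless of exposure)
exposed_2, unexposed_2 = rate(2, 50), rate(6, 150)       # both 0.04

# Crude (unstratified) analysis pools the strata:
crude_exposed = rate(30 + 2, 150 + 50)                   # 32/200 = 0.16
crude_unexposed = rate(10 + 6, 50 + 150)                 # 16/200 = 0.08

print(crude_exposed / crude_unexposed)                   # crude risk ratio = 2.0 (spurious)
print(exposed_1 / unexposed_1, exposed_2 / unexposed_2)  # 1.0 within each stratum
```

The spurious doubling of risk arises only because exposed people are concentrated in the high-risk stratum; stratification (or matching, restriction, or adjustment) removes it.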
6- What important potential confounders are considered?
- Are potential confounders examined and controlled for?
- Is confounding an important source of bias?
7- What is the statistical method in the study?
- Are the statistical methods described appropriate to compare participants for primary and secondary outcomes?
- Are the statistical methods specified in sufficient detail (if I had access to the raw data, could I reproduce the analysis)?
- Were the tests appropriate for the data?
- Are confidence intervals or p-values given?
- Are results presented as absolute risk reduction as well as relative risk reduction?
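To see why both measures matter, here is a worked sketch with invented event rates; the numbers are illustrative only.

```python
# Worked sketch of absolute vs relative risk reduction (invented event rates):
# control event rate (CER) 10%, experimental event rate (EER) 5%.
cer, eer = 0.10, 0.05

arr = cer - eer       # absolute risk reduction: 0.05 (5 percentage points)
rrr = arr / cer       # relative risk reduction: 0.50 (a "50% reduction")
nnt = 1 / arr         # number needed to treat: 20 patients per event avoided

print(arr, rrr, nnt)  # 0.05 0.5 20.0
# The same trial reads very differently as "halves the risk" (relative)
# versus "5 fewer events per 100 patients" (absolute), hence report both.
```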
Interpretation of p-value:
The p-value is the probability that a result at least as extreme as the one observed would have arisen by chance alone (that is, if there were truly no effect). By convention, a p-value of less than 1 in 20 (p<0.05) is considered statistically significant.
Confidence interval:
Multiple repetitions of the same trial would not yield exactly the same result every time; however, on average the results would fall within a certain range. A 95% confidence interval is a range constructed so that, across many repetitions of the study, it would contain the true effect size 95% of the time.
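A minimal sketch of both quantities, using SciPy on simulated data; the group means, spread, and sample sizes are invented for illustration.

```python
# Minimal sketch: p-value from a two-sample t-test plus a 95% confidence
# interval for the difference in means (all data simulated for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=50.0, scale=10.0, size=40)   # simulated control group
treated = rng.normal(loc=55.0, scale=10.0, size=40)   # simulated treated group

t_stat, p_value = stats.ttest_ind(treated, control)
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 40 + control.var(ddof=1) / 40)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se  # normal-approximation 95% CI

print(f"difference = {diff:.2f}, p = {p_value:.4f}")
print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```

Note how the confidence interval conveys the plausible size of the effect, while the p-value alone only says whether chance is a credible explanation.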
8- Statistical results:
Correct statistical analysis of results is crucial to the reliability of the conclusions drawn from the research paper.
Depending on the study design and the sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study.
It is important to identify if this is appropriate for the study.
- Was the sample size adequate to detect a clinically/socially significant result?
- Are the results presented in a way to help in health policy decisions?
Clinical significance:
Statistical significance as shown by p-value is not the same as clinical significance.
- Statistical significance judges whether treatment effects are explicable as chance findings, whereas
- clinical significance assesses whether treatment effects are worthwhile in real life.
Small improvements that are statistically significant might not translate into any meaningful clinical improvement (a small simulation sketch of this appears after the questions below). The following questions should always be kept in mind:
- If the results are statistically significant, do they also have clinical significance?
- If the results are not statistically significant, was the sample size sufficiently large to detect a meaningful difference or effect?
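As promised above, a small simulation sketch of a statistically significant but clinically trivial result; the sample size and the 0.5-unit effect are invented for illustration.

```python
# Sketch: with a large enough sample, a clinically trivial effect still
# yields a tiny p-value (simulated data; the 0.5-unit shift is invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000                              # very large trial arms
control = rng.normal(100.0, 15.0, n)
treated = rng.normal(100.5, 15.0, n)     # effect of 0.5 on a wide scale

t_stat, p = stats.ttest_ind(treated, control)
print(f"mean difference = {treated.mean() - control.mean():.2f}, p = {p:.2e}")
# p lands far below 0.05, yet a 0.5-unit shift may be meaningless at the
# bedside: statistical significance alone says nothing about importance.
```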
9- What conclusions did the authors reach about the study question?
Conclusions should ensure that any recommendations stated are supported by the results and stay within the scope of the study.
The authors should also address the limitations of the study, their effects on the outcomes, and their proposed suggestions for future studies.
Again, they encourage the practice of ending every study, no matter how huge, detailed, and expensive, with that weaselly phrase: “more research is needed”.
- Are the questions posed in the study adequately addressed?
- Are the conclusions justified by the data?
- Do the authors extrapolate beyond the data?
- Are shortcomings of the study addressed and constructive suggestions given for future research?
- Is the conclusion convincing?
10- Are ethical issues considered?
If a study involves human subjects, human tissues, or animals, was approval from appropriate institutional or governmental entities obtained?
Critical appraisal of RCTs: Factors to look for:
- Allocation (randomization, stratification, confounders).
- Blinding.
- Follow up of participants (intention to treat).
- Data collection (bias).
- Sample size (power calculation).
- Presentation of results (clear, precise).
- Applicability to local population.
Conclusion
Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and providing an indication of its relevance to the profession.
It is a skill set, developed throughout a professional career, that, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry.
By following a systematic approach, such evidence can be considered and applied to clinical practice.
If we look at a lot of the trash promoted on mass media and social media, it has none of these necessary qualities. The main thing is misinforming the public, while actual fact-based scientists have been relatively quiet. It is much easier to misinform the general public, and the anti-opioid bias is always presented as a public health message. The unintended consequences are kept secret.
We see lies, misinformation, paltering, and exaggeration every day in the US media. When it comes to health and healthcare, anything goes as long as there is a profit. Our FTC and FDA allowed a company to sell nicotine to children because it was marketed as a wellness product. We have to remember that every publicly available source of information on healthcare is a marketing site.
We can look at sites like Stanford Pain, where some articles and studies with catchy headlines get amplified by the media. People like to have their biases reinforced, and magical thinking is important too. Pain is difficult to quantify, so they can claim any finding they like. Most of the research shows serious expectation bias: when patients are told that a medication could lead to lifelong heroin addiction and horror, they are more likely to report less pain or to avoid medications. This kind of bias is well known; subjects want to please the researcher. Of course, they never do any research on long-term outcomes, or on the patient’s pain when they get home and it is 3 AM and they can’t sleep because of the pain.
Re “People like to have their biases reinforced”: that’s even been scientifically proven.
Sadly, it’s also been proven that smarter people resist changing their biases more. I think it’s because they/we can be more clever in finding flaws in information we don’t want to believe and are better at making good arguments for our own beliefs.
I don’t mean to be vain, but knowing this has made me double-check that I’m not doing the same thing when I read something that doesn’t match my beliefs and dismiss it as “nonsense.” But I also know that I’m very good at justifying my own beliefs and finding flaws in contradictory information, so I try to remain open to accepting unpalatable conclusions that go against my own biases (which has been scientifically proven to be almost impossible).
It’s truly difficult to get at the truth when there’s such a volume of information to be found online. We can always find what we want that fits our biases.
Smart people check themselves, and they have some background in mathematics, history, and context. There is a big difference between beliefs based on years of observation, known facts, and science, and beliefs based on marketing, wishful thinking, or a financial incentive. Pretty much anything that oversimplifies something complicated, leaves out key facts, or defies the basic rules of science smells like nonsense.
I believe it’s due to our educational system that most people have no idea about what’s under the skin they live in. They know nothing about body structures or processes, and don’t know how and why medicines work. That puts them at a terrible disadvantage when those bodies start breaking down, so they have to blindly trust (or blindly distrust) any doctor who prescribes them medication.