Consider using a checklist based on the type of article you're reviewing. Use these links to find checklist options, and be sure to identify the type of article you're reviewing beforehand so that you can choose the correct checklist.
Critical Appraisal Skills Programme (CASP) Checklists
BMJ Best Practice Critical Appraisal Tools
Joanna Briggs Institute Checklists
Critical appraisal is the process of reviewing the evidence to determine whether a research study or article is trustworthy, relevant, and impactful. In essence, we're checking the study for bias so we can make an informed decision about whether the research is valuable enough to change practice or to consider in future research projects.
Think back to our step-by-step guide to EBP:
Start by identifying the type of evidence you have -- that is, the type of study you're evaluating: RCT, systematic review, cohort study, case study, etc. Once you know, choose the matching checklist (see the list on the left) and start asking -- and answering -- its questions.
Critical appraisal might seem like a difficult skill to cultivate, but it gets easier with practice. The goal is to think critically while you read and to ask yourself questions as you go.
Read through the list below as you work through your articles. Consider viewing your articles side by side, marking or highlighting when you find (or don't find) each of these elements.
Title: should be clear and concise and convey the main concepts, hypotheses, methods, and variables of the study. Usually 12-15 words, or limited to a set number of characters.
Abstract: the next most frequently read part of the article; it often determines whether a reader decides to read the entire article. Usually 100-250 words, including the purpose or aims, hypotheses, sample, methods, and a summary of results.
Introduction: orients the reader to the subject and provides background (the problem, evidence from other studies) on why the study is important. Includes the aims and hypotheses, and is often where you'll find the framework used to conduct the study.
Methods: describes how the study was done and should provide enough information for a reader to replicate it. This section is key to determining the validity of the study. It should describe the sample population and variables, the instruments used to measure the variables (along with their reliability and validity), the materials used, the study design, the procedures, and the statistical tests to be used.
Results: presents all relevant statistical results, including results that don't support the hypotheses. You'll usually find tables and graphs meant to summarize the findings; these should also be analyzed and critiqued.
Discussion: analyzes the results in relation to the research question(s) and/or hypotheses. It should make clear whether the hypotheses were supported and, if they were not, offer a possible explanation for the unexpected results. It should also discuss the study's limitations and recommendations for future research.
References: listed and formatted according to the journal's protocol (few scientific journals use APA). All references cited in the body of the article must appear in the reference list.
You'll often see people "grade" an article or study with a level of evidence after appraising it. These levels help us think critically about whether we'd make a practice change based on current research. Many different level-of-evidence tools are available online; start with the chart below, and use the Johns Hopkins 2017 model if you need a more in-depth analysis.
Claude Moore Health Sciences Library, 1350 Jefferson Park Avenue, P.O. Box 800722, Charlottesville, VA 22908
© 2023 by the Rector and Visitors of the University of Virginia