By Rajshekhar Chakraborty, MD
In Eastern mythology, the swan, or hamsa, is believed to possess the legendary power of separating milk from water when the two are mixed together, a power that makes it worthy of accompanying Saraswati, the Hindu goddess of knowledge and wisdom. The mythical power of the swan is perhaps a subtle allegory for one of the essential traits every human being must strive to possess: discrimination between the real and the unreal.
This is even more important in the 21st century, as we are witnessing an information overload in several scientific disciplines, including biomedicine, further amplified by the ease of disseminating information. A 2010 article in PLoS Medicine highlighted that, at the time, 75 clinical trials and 11 systematic reviews were being added to the medical literature each day, with no signs of a plateau in the publication rate. A quick review of MEDLINE indexing statistics reveals that more than 5,000 journals were indexed in 2017, with more than 24 million citations. Even within subspecialties like medical oncology, and the sub-subspecialties within them, the number of journals and published articles that trainees and physicians must absorb in a timely manner is overwhelming.
How do we tackle this deluge of information with limited time and resources at our disposal? In hematology/oncology fellowship, in addition to rotating through several outpatient clinics encompassing solid tumors, malignant and classical hematology, and rigorous inpatient services, trainees are also expected to pursue clinical, laboratory, or translational research. Furthermore, keeping up with literature pertinent to patients being seen in the outpatient or inpatient settings is a critical part of the learning process. Some excellent articles (including Shreenivas, Zeidan, and Mathew in OncLive) have highlighted strategies for keeping up with the literature during fellowship. However, in my opinion, an important and underappreciated skill set that every trainee must strive to acquire early in fellowship, or even during residency, is critical appraisal of the medical literature. In this article, I will make the case for why we should prioritize cultivating critical thinking to remain relevant in the era of information overload and artificial intelligence (AI), and offer some pragmatic solutions.
In his recent book 21 Lessons for the 21st Century, Yuval Noah Harari makes the bold claim that, in the current era, the last thing a teacher needs to give his or her pupils is more information. He argues that teachers should instead equip students with the ability to make sense of the information around them. While the value of acquiring new information in the age of AI can be debated, there is little doubt about his latter assertion: we must be able to make sense of the ever-increasing amount of information readily available to us.
While digging into the methodology of a recent experiment on the performance of IBM Watson, I came across the fact that one of Watson’s data sources was literature hand-selected by experts at Memorial Sloan Kettering Cancer Center. Although we have not yet reached the point where evidence-based treatment guidelines can be readily generated at the point of care by an AI platform, it will likely happen in the future as the technology improves. Even in that scenario, however, who will decide what data to feed into the AI platform? Which clinical trials should we exclude from consideration because they had a straw-man control arm? Was the margin of non-inferiority appropriate, and will the patient sitting in front of me be willing to accept the chosen upper limit for the risk of a hazardous event? That is where there will be an even greater need for clinicians with expertise in critical thinking in the context of specific disorders.
What can we do during oncology fellowship training to inculcate critical thinking and evidence appraisal? I don’t think there is a single right strategy, but I will highlight a few resources I found helpful during my training thus far, and from which I continue to learn a great deal.
First, the JAMA Users’ Guides to the Medical Literature is an excellent resource. Written by thought leaders in evidence-based medicine, it has specific sections on appraising randomized controlled trials, non-inferiority trials, and observational studies, among others. It offers pragmatic approaches to critically reading clinical trial publications and independently judging the strength of evidence. Furthermore, the text is replete with examples from the biomedical literature, which provide a broader perspective on how other medical and surgical subspecialties design clinical trials and select endpoints. I found the section on systematic reviews and meta-analyses especially helpful, given the slew of such articles in the oncology literature, which warrants skepticism on the part of readers before drawing firm practice-changing conclusions.
Second, I enjoy reading the correspondence, editorials, blog posts, and social media discussions on published clinical trials in my area of interest. To cite a specific example, following the publication of the ENDEAVOR trial, which compared carfilzomib and bortezomib head-to-head in relapsed multiple myeloma, a viewpoint article in the ASCO Post highlighted some of the nuances in interpreting the trial: Were the doses and schedules of drug administration appropriate in both arms? Are the data convincing enough to change practice? I believe we as trainees should be conditioned to ask such questions whenever we read a clinical trial publication. Similarly, after the publication of an important meta-analysis showing the prognostic impact of minimal residual disease (MRD) in multiple myeloma, an insightful critique was published on the criteria for surrogacy and why the data were not convincing enough to accept MRD as a surrogate for overall survival in clinical trials. I find it helpful to read the original article independently and come up with my own critiques before turning to editorials, correspondence, or published critiques elsewhere. This helps me reflect on my thought process and identify weaknesses on occasions when I might have missed a critical flaw in the methodology. Additionally, some excellent oncology-related podcasts, like Plenary Session and Outspoken Oncology, discuss pragmatic issues related to clinical research and patient care, including critical appraisal of clinical trial publications.
Finally, there is no substitute for an experienced and astute clinical mentor who can guide us in the art of critically appraising data from a trial or an observational study in the context of a particular patient and clinical scenario. No textbook or journal article can accomplish that. We should all actively seek out such mentors in the clinic.
I strongly believe that learning how to hit a paper hard with difficult questions, and appreciating the nuances of data interpretation, should be a goal of oncology training as much as staying updated with new information. Even if we succeed in developing improved AI platforms that tackle the problem of information overload by outsourcing our memory, it is hard to imagine they will replace the need to critically appraise the available literature and place new discoveries in the context of prior evidence and patients’ expectations.
To serve our patients well, we must assume the role of mythical swans and actively filter meaningful evidence from the vast body of literature by mastering critical thinking throughout our training and beyond.
Dr. Chakraborty is a hematology and oncology fellow at Cleveland Clinic Taussig Cancer Center. His clinical and research interests include plasma cell disorders and patient-reported outcomes research. Follow him on Twitter @rajshekharucms.