Purpose
To compare structured versus nonstructured reporting of multiphasic computed tomography (CT) for staging of pancreatic cancer and the effects of both types of reporting on subjective assessment of resectability.
Materials and Methods
This institutional review board–approved, HIPAA-compliant retrospective study with waiver of informed consent included all patients who were referred for presurgical multiphasic CT of the pancreas between December 2006 and April 2011 at one institution, before and after implementation (April 2008) of a structured reporting template. The template was created specifically for reporting multiphasic CT results for staging of pancreatic cancer and contained information relevant to surgical and oncologic planning. Multiphasic CT reports were assessed for the presence of 12 key features required for staging and surgical planning, including location, size, enhancement, node status, and vascular involvement. Three pancreatic surgeons, blinded to patient identifiers, evaluated the reports to assess resectability, surgical planning, and ease of extracting information, both before and after reviewing the multiphasic CT images. The Student t test and χ2 test were used for statistical analysis.
Results
Forty-eight (40%) structured and 72 (60%) nonstructured multiphasic CT reports were reviewed. Nonstructured reports contained a mean ± standard deviation of 7.3 key features ± 2.1 (range, 1–11) and structured reports contained 10.6 ± 0.9 (range, 9–12) features (P < .001). Information for surgical planning was deemed easily accessible in 94%, 60%, and 98% of structured and 47%, 54%, and 32% of nonstructured reports by the three surgeons, respectively (P < .001, .79, < .001). Surgeons had sufficient information for surgical planning in 96%, 69%, and 98% of structured and 31%, 43%, and 25% of nonstructured reports (P < .001, .009, and < .001). When surgeons reviewed reports in combination with multiphasic CT images, they were more likely to convert an answer of “unsure” regarding resectability to a definitive answer (ie, resectable or unresectable) when the reports were structured than when they were nonstructured.
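For readers who wish to reproduce this kind of analysis, the sketch below is our own illustration, not the authors' code: it reruns two of the reported comparisons in Python, a t test on the key-feature counts from the summary statistics above and a χ2 test on counts reconstructed from the percentages stated for the first surgeon.

```python
# Illustrative reanalysis from the published summary statistics; the
# scipy calls and the reconstructed counts are our assumptions, not the
# study's actual analysis code.
from scipy.stats import ttest_ind_from_stats, chi2_contingency

# Key features per report: structured 10.6 +/- 0.9 (n = 48) vs
# nonstructured 7.3 +/- 2.1 (n = 72).
t, p = ttest_ind_from_stats(mean1=10.6, std1=0.9, nobs1=48,
                            mean2=7.3, std2=2.1, nobs2=72)
print(f"t test on key features: t = {t:.2f}, p = {p:.3g}")

# "Sufficient information for surgical planning" for surgeon 1:
# 96% of 48 structured (~46/48) vs 31% of 72 nonstructured (~22/72).
#                  sufficient  insufficient
table = [[46, 2],     # structured reports
         [22, 50]]    # nonstructured reports
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square test: chi2 = {chi2:.2f}, p = {p:.3g}")
```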
Conclusion
Structured reporting of pancreatic multiphasic CT provided superior evaluation of pancreatic cancer and facilitated surgical planning. Surgeons were more confident regarding decisions about tumor resectability when they reviewed structured reports before review of multiphasic CT images.
Recent advances in machine learning techniques offer promising applications in medical imaging. Machine learning has the potential to improve different steps of the radiology workflow, including order scheduling and triage, clinical decision support systems, detection and interpretation of findings, postprocessing and dose estimation, examination quality control, and radiology reporting. In this article, the authors review examples of current applications of machine learning and artificial intelligence techniques in diagnostic radiology. In addition, the future impact and natural extension of these techniques in radiology practice are discussed.
OBJECTIVE. The radiology report serves as the primary method of communication about imaging findings. Traditional free-text (i.e., unstructured) radiology reporting entails dictating in a stream-of-consciousness manner. Structured reporting aims to standardize the format and lexicon used in reports. This standardization may improve the communication of findings, allowing ease of reading and comprehension. A structured reporting template may also be used as a checklist while reviewing a case, which may facilitate focused attention and analysis. The goal of this study was to compare unstructured and structured reports in terms of their completeness and effectiveness.
MATERIALS AND METHODS. Radiology trainees were given an educational lecture on the background of reporting and were provided with a structured reporting template for dictating chest radiographs. Twelve trainees completed the study. Sixty reports from before and 60 reports from after the intervention were each independently scored by four blinded physician raters for completeness and effectiveness.
RESULTS. Structured reports were found to be statistically significantly more complete and more effective than unstructured reports (mean completeness score, 4.42 vs 3.99, p < 0.001; mean effectiveness score, 4.11 vs 3.85, p < 0.001). A combined score was calculated for each report and was higher for the structured reports (mean combined score, 8.54 vs 7.83, p < 0.001).
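As a concrete illustration of the comparison behind these numbers (with simulated scores standing in for the raters' actual data), the following sketch averages four raters' scores per report and applies an independent-samples t test; only the group means are borrowed from the results above, and the distribution parameters are our assumptions.

```python
# Simulated stand-in for the rater data described above; only the group
# means (3.99 and 4.42) are taken from the reported results.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# 60 reports x 4 raters, completeness scored on a 1-5 scale
unstructured = rng.normal(3.99, 0.5, size=(60, 4)).clip(1, 5)
structured   = rng.normal(4.42, 0.5, size=(60, 4)).clip(1, 5)

# Mean completeness per report across the four raters
u_scores = unstructured.mean(axis=1)
s_scores = structured.mean(axis=1)

t, p = ttest_ind(s_scores, u_scores)
print(f"structured {s_scores.mean():.2f} vs "
      f"unstructured {u_scores.mean():.2f}, p = {p:.3g}")
```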
CONCLUSION. Structured chest radiograph reports were more complete and more effective than unstructured chest radiograph reports. Although additional studies are needed for validation, these findings support the use of structured reporting.
Background and purpose — Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs.
Methods — We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd’s Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We also compared the network’s performance with that of 2 senior orthopedic surgeons who reviewed the images at the same resolution as the network.
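As an indication of what adapting an openly available network involves, the sketch below fine-tunes a pretrained torchvision model for a binary fracture label. The choice of ResNet-50, the learning rate, and the `train_step` helper are our assumptions for illustration, not the networks or settings used in the study.

```python
# Minimal transfer-learning sketch: swap the classifier head of a
# pretrained network and retrain it for radiograph labels.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g., fracture vs no fracture

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head
model.train()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of radiograph tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```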
Results — All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best-performing network. The network performed similarly to senior orthopedic surgeons when presented with images at the same resolution as the network. Under these conditions, the Cohen’s kappa between the 2 reviewers was 0.76.
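The agreement statistic quoted here, Cohen's kappa, can be computed as in the following sketch; the reviewer labels are made-up stand-ins for the study's data.

```python
# Cohen's kappa between two reviewers' fracture calls; the labels below
# are illustrative placeholders, not the study's ratings.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["fracture", "no fracture", "fracture", "fracture", "no fracture"]
reviewer_2 = ["fracture", "no fracture", "no fracture", "fracture", "no fracture"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa = {kappa:.2f}")
```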
Interpretation — This study supports the use of artificial intelligence for orthopedic radiographs, where it can perform at a human level. Although the current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.
Rationale and Objectives
To survey North American radiologists on current practices in structured reporting and language.
Materials and Methods
An e-mail invitation was sent to the Association of University Radiologists membership (comprising 910 members) to participate in an online survey that addressed development, use, and experience of structured reporting, language, and imaging classification or reporting systems and personal dictation styles.
Results
Of the 910 members e-mailed, 265 (29.1%) responded, 90.6% of whom were from academic teaching hospitals. There were no significant differences in responses based on group size or region of practice. Of all respondents, 51.3% came from groups that had developed structured reporting for at least half of their reports, and only 10.9% from groups that had developed none. Significantly fewer respondents (13%) used rigid, unmodifiable structures or checklists rather than adaptable outlines. Of the respondents, 59.5% reported being satisfied or very satisfied with their structured reports, whereas significantly fewer (13%) reported being dissatisfied or very dissatisfied. Structured reports were reportedly significantly more likely to be required, to be appreciated, and to decrease errors in departments using many structured reports than in groups with less widespread use.
Conclusions
Most academic radiology departments are using or experimenting with structured reports. Although radiologist satisfaction with standardization is significant, there are strong opinions about the limitations and value of structured reports. Our survey suggests that North American radiologists are invested in exploring structured reporting; we hope these findings will inform future study of how we define a standard report and how much this process can be centralized.