Advances in image processing have given radiologists new tools to find the abnormalities they're looking for. But Dr. Geoffrey Rubin believes that radiologists need better tools -- and quickly -- before they drown in a flood of data being produced by the new generation of CT scanners.
That's according to a talk he gave at the 2014 International Symposium on Multidetector-Row CT (MDCT 2014). Rubin is a professor of cardiovascular research, radiology, and bioengineering at Duke University, and he is also program director of the International Society for Computed Tomography (ISCT), which hosted MDCT 2014 earlier this month in San Francisco.
It's true that image processing software keeps getting faster and more useful. It is also increasingly adaptable to a variety of routine clinical questions, helping radiologists do a better job of finding what they need to see on CT data.
There are even a handful of tools that stack up combinations of functions to perform specific tasks, such as reading CT colonography data from start to finish. Automated or semiautomated software tools are creating images that help referring physicians and patients understand what's happening in the anatomy and how problems can be remedied.
But real advances have been too slow, uncoordinated, and underpowered to help radiologists get through their workday in an era when reimbursement is falling even as aging Americans retire and seek care in growing numbers, pushing patient loads steadily higher, Rubin said.
"It's interesting to see where we're going, but I think it's also interesting to see what we don't have yet, and where I'm not seeing much progress," Rubin said. "And I'd like to communicate to the industry areas that I think are particularly important."
Fair enough. On the plus side, image processing in 2014 is providing real value in delivering a diagnosis. Postprocessing is getting better at showing anatomy and pathology, and hence the diagnosis. Images can be reformatted in different ways that make them intuitively explanatory to referring physicians and patients and, importantly, reimbursable, Rubin said.
So when eyeing a large descending thoracic aortic aneurysm in a stack of transverse sections doesn't yield a clear picture, a multiplanar reformat of the same data can reveal, for example, the complex course of an artery and the source of a problem with a stent graft, he said.
In thoracic CT, radiologists can use maximum intensity projections to get a better look at lung nodules, and minimum intensity projections to show the heterogeneity of parenchymal aeration.
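Conceptually, both projections collapse the CT volume along one axis, keeping only the brightest or darkest voxel along each ray. A minimal sketch of that idea (not from the talk; the function and synthetic data are illustrative only):

```python
# Illustrative sketch: maximum and minimum intensity projections over a
# CT volume stored as a NumPy array of Hounsfield units (slices, rows, cols).
import numpy as np

def intensity_projection(volume: np.ndarray, axis: int = 0, mode: str = "max") -> np.ndarray:
    """Project a 3D volume down to a 2D image along one axis.

    mode="max" (MIP) highlights high-attenuation structures such as
    contrast-filled vessels or solid lung nodules; mode="min" (MinIP)
    emphasizes low-attenuation regions such as airways and air trapping.
    """
    if mode == "max":
        return volume.max(axis=axis)
    return volume.min(axis=axis)

# Tiny synthetic example: a soft-tissue-density "nodule" in one slice
# survives the MIP even though it sits in air-like surroundings.
vol = np.full((4, 5, 5), -800)   # air-like background (HU)
vol[2, 2, 2] = 40                # soft-tissue-density voxel
mip = intensity_projection(vol, axis=0, mode="max")
print(mip[2, 2])                 # prints 40
```

The same call with `mode="min"` would suppress that nodule and instead preserve the lowest-attenuation voxel along each ray, which is why MinIP suits assessment of parenchymal aeration.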
When approaching a kidney transplant, surgeons need to know about the vascular supply and how to explain the problem to the patient. When vascular anomalies are present, creating a short movie clip can reveal the problem and the treatment plan in a way that engages both patients and referring physicians. And just as the tools become more useful, they are becoming easier to access, he said.
"Increasingly, we're seeing these tools becoming cloud-based -- rather than having software that resides on our computer in a reading room and images on the computer," Rubin said. "That software and those images can exist on a server remotely, and our interaction with the data is controlled through a browser and can be manipulated."
Cloud-based software also eliminates the need to update each workstation individually, because the software resides on the remote server, where it is continually maintained, he said.
These days, processing tools are increasingly integrated with PACS networks and often directed at a specific workflow process, with multistep software solutions designed to address common tasks, such as volumetric rendering of lung nodules.
"Along with this is the notion of integration of PACS and vendor-neutral archives [VNAs]," Rubin said. "Being able to use these tools on VNAs makes [them] readily available to a broad spectrum of practitioners in the healthcare enterprise."
Combinations of software tools into "purpose-filled solutions" -- processing tools combined to answer a specific clinical question that requires image processing, such as CT colonography workflow -- are also coming into their own.
"In other words, we have for a long time had the basic tools -- screwdrivers, nails, and hammers -- but we now have a conveyor belt that takes us along to put together the processing needed for a very specific task," Rubin said.
But as useful as the software has become, there is much more that it doesn't do, even when the basic technology exists to create a better, more intuitive product. The slow pace of innovation and the lack of comprehensive solutions are where vendor innovation has failed, he said.
Unavailable in a store near you
Topping Rubin's wish list is volume rendering that adjusts to variations in enhancement and attenuation characteristics across the image: not a single setting for the entire volume, but functionality that can vary throughout the image "so that regardless of the patient, I can have consistent visualization," Rubin said.
Automated lesion detection, registration of prior imaging exams, sizing, and cataloging
Next on the list is a tool that automatically detects lesions, registers them with prior images, and then sizes and catalogs the findings in one step, Rubin said. Simple cataloging software does exist, but no comprehensive tool puts it all together with priors and measurements based on the Response Evaluation Criteria in Solid Tumors (RECIST).
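The bookkeeping such a tool would automate is straightforward to state. A simplified sketch of RECIST-style response classification, comparing the sum of target-lesion longest diameters against baseline (the function and data layout are hypothetical, and the published criteria additionally track the nadir and require an absolute-growth threshold for progression):

```python
# Simplified RECIST-style classifier: compares the sum of target-lesion
# longest diameters (in mm) at follow-up against baseline. Thresholds
# follow the published criteria (>=30% shrinkage for partial response,
# >=20% growth for progression); everything else here is illustrative.

def recist_response(baseline_mm, followup_mm):
    """Classify change in the summed longest diameters of target lesions."""
    base = sum(baseline_mm)
    current = sum(followup_mm)
    if current == 0:
        return "complete response"      # all target lesions have disappeared
    change = (current - base) / base
    if change <= -0.30:
        return "partial response"
    if change >= 0.20:
        return "progressive disease"
    return "stable disease"

# Two target lesions shrink from 22 mm and 15 mm to 12 mm and 8 mm
# (a ~46% decrease in the summed diameters).
print(recist_response([22, 15], [12, 8]))   # prints "partial response"
```

Automating this step end to end -- detection, registration with priors, measurement, and classification -- is exactly the one-step workflow Rubin described as missing.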
"I'd like a normality mask -- a tool that basically recognizes normality in the volume, says it's normal, and masks it out -- so that as a radiologist I'm only looking at what is abnormal, and I can turn off the mask if I so desire," Rubin said.
Comprehensive organ segmentation
By the time the radiologist reads a CT scan, the computer should have already segmented every organ and every structure so that lesions are automatically registered to those organs of interest, he said.
Rubin would also like computer-aided detection (CAD) of a full spectrum of findings, presented in a list format or in a parametric map. In essence, the computer would report that it has analyzed the CT scan, presenting a list of abnormal areas referenced to specific structures and asking the radiologist to confirm and apply his or her interpretation.
"From my perspective, innovation in CT processing is coming a bit too slowly," Rubin said, as none of the functions on the wish list are available, even though they could be. Some 76 million CT scans were performed in 2013, in a population that's rapidly aging, driving the delivery of even more CT scans going forward.
"We have super scanners that you are seeing presented at this meeting that are generating ever more images per dataset, and we're only human," Rubin said. The radiologists responsible for reading them have only so much attention to apply during the day; at the same time, they are expected to work through more and more datasets.
"We basically need processing robots to help us maximize what we can do with these datasets," Rubin said. "Where is computer vision in the industry to take us to where we need to go, so that we can really use these tools to be more efficient?" Efficiency can and should increase by a factor of 10, he said.
By Eric Barnes, AuntMinnie.com staff writer