By Masahiro Yanagawa, MD, PhD, and
Noriyuki Tomiyama, MD, PhD
Improvements in computing capacity, the expansion of computer networks, and the emergence of big data have fueled a boom in artificial intelligence (AI). AI is built around models that imitate the human brain. Deep learning (DL) is one such AI approach, based on neural networks, and is used in thoracic imaging to detect pulmonary nodules and to reduce false positives, thereby improving diagnostic accuracy. A neural network begins by simulating individual neural cells in an attempt to simulate the human brain; this simulation model is called a perceptron. By arranging perceptrons into layers and then stacking those layers, a multilayer perceptron is constructed in which all nodes are fully connected. Such a system can therefore solve more complicated problems than the conventional computer systems generally used in standard imaging (Fig. 1).
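To illustrate the fully connected structure described above, here is a minimal sketch (not the authors' actual model) of a multilayer perceptron forward pass in Python with NumPy; the layer sizes and the ReLU activation are arbitrary assumptions for the example:

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied after each perceptron layer
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass through a multilayer perceptron.

    Every node in one layer connects to every node in the next,
    so each layer reduces to a matrix multiplication plus a bias.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    # Final layer is left linear; a softmax could follow for classification
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
# Hypothetical sizes: 4 inputs -> 8 hidden nodes -> 2 outputs
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]
biases = [np.zeros(8), np.zeros(2)]
out = mlp_forward(rng.standard_normal(4), weights, biases)
print(out.shape)  # (2,)
```

Because every node is connected to every node in the adjacent layer, the whole "fully connected" property is captured by the dense weight matrices above.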
In addition to the detection of pulmonary nodules, AI technology has been applied to and developed in other areas of thoracic imaging. Malignant disease can be distinguished from benign disease, and segmentation and measurement of disease are also possible through 3D analysis. Diffuse lung diseases can be diagnosed using a case-retrieval system that draws on an image database, as well as literature searches, to help radiologically differentiate disease. Image quality is also improved through noise-reduction algorithms, such as PixelShine, and through iterative reconstruction of images.
We, the authors, were fortunate to have the opportunity to present research on lung adenocarcinoma using a DL system at the 2017 Radiological Society of North America Scientific Assembly and Annual Meeting. Our research compared radiologic prediction of pathologic invasiveness in lung adenocarcinoma among three radiologists and a DL system. The DL system, a 3D convolutional neural network (3D-CNN), was developed in conjunction with the department of technology in our institution (Hirohiko Niioka, Institute for Datability Science) and was implemented in TensorFlow (version 0.12.1).
The 3D convolutional neural network (CNN) structure (CNN being the mainstream DL architecture in the field of image recognition) was constructed with two successive pairs of convolution (the mathematical filtering at the core of the model) and max-pooling (which provides tolerance to small image displacements) layers, followed by two fully connected perceptron layers. The output layer was composed of two nodes for the two conditions: adenocarcinoma in situ (AIS) and non-AIS. Although only a small set of CT images was used as training data, the DL system achieved an accuracy rate almost identical to that of the radiologists. In addition, the area under the curve (AUC) for the DL system was almost the same as that of the most experienced radiologist, whose AUC was significantly higher than that of the least experienced radiologist.
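To make the layer arrangement concrete, the following sketch traces how a 3D volume's spatial size shrinks through two successive convolution/max-pooling pairs before the fully connected layers. The input size (32×32×32), kernel size (3×3×3), and pooling size (2×2×2) are hypothetical choices for illustration, not the dimensions used in the study:

```python
def conv3d_output_size(size, kernel, stride=1):
    # A 'valid' 3D convolution shrinks each spatial dimension
    return tuple((s - kernel) // stride + 1 for s in size)

def maxpool3d_output_size(size, pool=2):
    # Non-overlapping max-pooling downsamples each dimension
    return tuple(s // pool for s in size)

# Hypothetical 32x32x32 CT sub-volume around a nodule
size = (32, 32, 32)
for _ in range(2):  # two successive conv + max-pool pairs
    size = conv3d_output_size(size, kernel=3)
    size = maxpool3d_output_size(size)
print(size)  # (6, 6, 6)
```

The resulting feature volume would then be flattened and passed through the two fully connected layers to the two-node (AIS vs. non-AIS) output.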
Although the process by which the DL system reaches its conclusions is unknown, future DL systems may be able to predict pathologic invasiveness in lung adenocarcinoma from CT images, differentiating AIS, minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IVA). Radiologists often diagnose pulmonary nodules by morphologically evaluating their margins and internal characteristics according to previous data. Naturally, there are limitations to this diagnostic performance. For example, localized ground-glass nodules (GGN) include all pathologic subtypes of adenocarcinoma (i.e., AIS, MIA, and IVA), yet most radiologists cannot differentiate IVA within GGN on CT images. Therefore, unlike in nodule detection, when differentiating benign from malignant disease there may be cases in which a DL system identifies malignant lesions that the radiologist cannot believe or verify. Much higher accuracy and sensitivity will therefore be needed for DL to contribute to malignancy diagnosis.
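The AUC used to compare readers and the DL system can be computed directly from case-level scores. Below is a minimal sketch using the rank-sum (Mann-Whitney) identity, where AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one; the labels and scores are made up for illustration:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney identity: the fraction of
    positive/negative pairs in which the positive case scores
    higher (ties counted as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical invasiveness scores: 1 = non-AIS case, 0 = AIS case
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # 0.875
```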
Although DL can provide useful and informative results, it does not replace the radiologist, nor does it have the ability to manage and decide treatment strategies. It is therefore important for radiologists to use the information derived from DL intelligently. We expect that future research will focus on the use of AI technology in different areas of radiology. ✦
About the Authors: Drs. Yanagawa and Tomiyama are with the Department of Radiology, Osaka University Graduate School of Medicine, Osaka, Japan.