TY - JOUR
T1 - Deep learning approach to classification of lung cytological images
T2 - Two-step training using actual and synthesized images by progressive growing of generative adversarial networks
AU - Teramoto, Atsushi
AU - Tsukamoto, Tetsuya
AU - Yamada, Ayumi
AU - Kiriyama, Yuka
AU - Imaizumi, Kazuyoshi
AU - Saito, Kuniaki
AU - Fujita, Hiroshi
N1 - Publisher Copyright:
© 2020 Teramoto et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
PY - 2020
Y1 - 2020
N2 - Cytology is the first pathological examination performed in the diagnosis of lung cancer. In our previous study, we introduced a deep convolutional neural network (DCNN) to automatically classify cytological images as showing benign or malignant features and achieved an accuracy of 81.0%. To further improve the DCNN’s performance, the network must be trained with more images. However, it is difficult to acquire cell images that cover various cytological features, because doing so requires many manual operations with a microscope. Therefore, in this study, we aim to improve the classification accuracy of a DCNN by using both actual cytological images and images synthesized with a generative adversarial network (GAN). In the proposed method, patch images are first obtained from a microscopy image, and a GAN is then used to generate many additional similar images. Here, we introduce progressive growing of GANs (PGGAN), which enables the generation of high-resolution images. These generated images were used to pretrain a DCNN, which was then fine-tuned using the actual patch images. To confirm the effectiveness of the proposed method, we first evaluated the quality of the images generated by PGGAN and by a conventional deep convolutional GAN. We then evaluated the classification performance for benign and malignant cells. The generated images had characteristics similar to those of the actual images, and the overall classification accuracy of lung cells was 85.3%, an improvement of approximately 4.3% over our previous study, which did not use pretraining with GAN-generated images. Based on these results, we confirmed that the proposed method is effective for classifying cytological images when only limited data are available.
AB - Cytology is the first pathological examination performed in the diagnosis of lung cancer. In our previous study, we introduced a deep convolutional neural network (DCNN) to automatically classify cytological images as showing benign or malignant features and achieved an accuracy of 81.0%. To further improve the DCNN’s performance, the network must be trained with more images. However, it is difficult to acquire cell images that cover various cytological features, because doing so requires many manual operations with a microscope. Therefore, in this study, we aim to improve the classification accuracy of a DCNN by using both actual cytological images and images synthesized with a generative adversarial network (GAN). In the proposed method, patch images are first obtained from a microscopy image, and a GAN is then used to generate many additional similar images. Here, we introduce progressive growing of GANs (PGGAN), which enables the generation of high-resolution images. These generated images were used to pretrain a DCNN, which was then fine-tuned using the actual patch images. To confirm the effectiveness of the proposed method, we first evaluated the quality of the images generated by PGGAN and by a conventional deep convolutional GAN. We then evaluated the classification performance for benign and malignant cells. The generated images had characteristics similar to those of the actual images, and the overall classification accuracy of lung cells was 85.3%, an improvement of approximately 4.3% over our previous study, which did not use pretraining with GAN-generated images. Based on these results, we confirmed that the proposed method is effective for classifying cytological images when only limited data are available.
UR - http://www.scopus.com/inward/record.url?scp=85081264445&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081264445&partnerID=8YFLogxK
U2 - 10.1371/journal.pone.0229951
DO - 10.1371/journal.pone.0229951
M3 - Article
C2 - 32134949
AN - SCOPUS:85081264445
SN - 1932-6203
VL - 15
JO - PLOS ONE
JF - PLOS ONE
IS - 3
M1 - e0229951
ER -