Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks
In this paper, the authors explore the use of deep convolutional neural networks (DCNNs) for classifying tuberculosis (TB) on chest radiographs. One of the advantages of deep learning is that it excels on high-dimensional data such as images, which it can represent at multiple levels of abstraction.
Dataset
Four deidentified, HIPAA-compliant datasets comprising 1007 posteroanterior chest radiographs were used in this study, which was exempted from review by the institutional review board.
DCNN Models and Training
AlexNet and GoogLeNet models were used in the study, both pretrained (on ImageNet, from the Caffe Model Zoo) and untrained. The pretrained networks achieved higher AUCs. The following solver parameters were used for training: stochastic gradient descent for 120 epochs, with a base learning rate of 0.01 for untrained models and 0.001 for pretrained models. Both DCNNs in this study used dropout or other model regularization strategies to help overcome overfitting.
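As a rough illustration of this setup, here is a minimal fine-tuning sketch in PyTorch (the original work used Caffe); the binary classification head, the momentum value, and the data loader interface are assumptions for the example, not the authors' exact configuration.

```python
# Minimal PyTorch sketch of the fine-tuning setup described above.
# The paper used Caffe; the binary head and loader here are illustrative.
import torch
import torch.nn as nn
from torchvision import models

def build_model(pretrained=True):
    # AlexNet with ImageNet weights, final layer replaced for the
    # two-class (TB vs. normal) problem. AlexNet's classifier already
    # contains dropout layers, the regularization mentioned above.
    weights = models.AlexNet_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.alexnet(weights=weights)
    model.classifier[6] = nn.Linear(4096, 2)
    return model

model = build_model(pretrained=True)

# Solver parameters from the paper: SGD, 120 epochs,
# base learning rate 0.001 for pretrained models (0.01 for untrained).
# The momentum value is an assumption.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train(model, loader, epochs=120):
    model.train()
    for epoch in range(epochs):
        for images, labels in loader:  # loader yields (batch, 3, 227, 227) tensors
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```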
Data Augmentation
The following data augmentation techniques further increased performance (a code sketch follows the list):
- Random cropping to 227 x 227 pixels
- Mean subtraction and mirror images
- Rotation by 90, 180, and 270 degrees
- Contrast Limited Adaptive Histogram Equalization (CLAHE) processing
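A minimal sketch of these augmentation steps, assuming grayscale uint8 radiographs larger than 227 x 227 pixels and using OpenCV; the CLAHE clip limit, tile size, and mean value are illustrative choices, not values from the paper.

```python
# Sketch of the augmentation steps listed above using OpenCV and NumPy.
import random
import cv2
import numpy as np

def augment(image, mean_value=128.0):
    # CLAHE preprocessing (clip limit and tile size are assumed values)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    image = clahe.apply(image.astype(np.uint8))

    # Random rotation by 0, 90, 180, or 270 degrees
    image = np.rot90(image, random.choice([0, 1, 2, 3]))

    # Random horizontal mirroring
    if random.random() < 0.5:
        image = np.fliplr(image)

    # Random 227 x 227 crop
    h, w = image.shape[:2]
    top = random.randint(0, h - 227)
    left = random.randint(0, w - 227)
    image = image[top:top + 227, left:left + 227]

    # Mean subtraction (mean_value is an illustrative constant)
    return image.astype(np.float32) - mean_value
```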
Ensembling
Ensembling was used to increase the AUC further. Ensembles were formed by taking different weighted averages of the probability scores generated by the two classifiers.
The best-performing ensemble model had an AUC of 0.99. The table below, adapted from the paper, summarizes the results.
| Model | Sensitivity | Specificity |
| --- | --- | --- |
| Pre-trained AlexNet | 92.0% | 94.7% |
| Pre-trained GoogLeNet | 92.0% | 98.7% |
| Ensemble (AlexNet + GoogLeNet) | 97.3% | 94.7% |
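A minimal sketch of this weighted-average ensembling, assuming per-image TB probability scores from each classifier are already available; the weight grid search below is an illustrative way to choose the weighting, not necessarily the authors' procedure.

```python
# Weighted-average ensembling of two classifiers' probability scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_scores(p_alexnet, p_googlenet, weight=0.5):
    # Weighted average of the two classifiers' TB probabilities
    return weight * np.asarray(p_alexnet) + (1 - weight) * np.asarray(p_googlenet)

def best_weight(p_alexnet, p_googlenet, labels, weights=np.linspace(0, 1, 101)):
    # Try different weightings and keep the one with the highest AUC
    aucs = [roc_auc_score(labels, ensemble_scores(p_alexnet, p_googlenet, w))
            for w in weights]
    i = int(np.argmax(aucs))
    return weights[i], aucs[i]
```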
Radiologist-augmented approach
This is where the paper goes beyond deep learning alone, bringing in a human expert to handle the images on which the models disagree. For cases where the AlexNet and GoogLeNet classifiers disagreed, an independent board-certified cardiothoracic radiologist (B.S., with 18 years of experience) blindly interpreted the images as either having manifestations of TB or as normal. This radiologist-augmented approach achieved a sensitivity of 97.3% and a specificity of 100%.
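A minimal sketch of this radiologist-augmented workflow, assuming thresholded probability outputs from both networks; the 0.5 decision threshold and the radiologist_reads lookup are hypothetical.

```python
# Cases on which the two classifiers disagree are deferred to a human reader.
def radiologist_augmented(p_alexnet, p_googlenet, radiologist_reads, threshold=0.5):
    predictions = []
    for i, (pa, pg) in enumerate(zip(p_alexnet, p_googlenet)):
        call_a = pa >= threshold
        call_g = pg >= threshold
        if call_a == call_g:
            predictions.append(call_a)                 # both networks agree
        else:
            predictions.append(radiologist_reads[i])   # defer to the radiologist
    return predictions
```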