Fusing learned representations from Riesz Filters and Deep CNN for lung tissue classification

Joyseeree, Ranveer (ETH Zürich, Switzerland) ; Otálora, Sebastian (University of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis); University of Geneva, Geneva, Switzerland) ; Müller, Henning (University of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis)) ; Depeursinge, Adrien (University of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis))

A novel method to detect and classify several classes of diseased and healthy lung tissue in Computed Tomography (CT), based on the fusion of Riesz and deep learning features, is presented. First, discriminative parametric lung tissue texture signatures are learned from Riesz representations using a one-versus-one approach. The signatures are generated for four diseased tissue types and a healthy tissue class, all of which frequently appear in the publicly available Interstitial Lung Diseases (ILD) dataset used in this article. Because the Riesz wavelets are steerable, they can easily be made invariant to local image rotations, a property that is desirable when analyzing lung tissue micro-architectures in CT images. Second, features from deep Convolutional Neural Networks (CNNs) are computed by fine-tuning the Inception V3 architecture using an augmented version of the same ILD dataset. Because CNN features are both deep and non-parametric, they can accurately model virtually any pattern that is useful for tissue discrimination, and they are the de facto standard for many medical imaging tasks. However, invariance to local image rotations is not explicitly implemented and can only be approximated with rotation-based data augmentation. This motivates the fusion of Riesz and deep CNN features, as the two techniques are highly complementary. The two learned representations are combined in a joint softmax model for final classification, where early and late feature fusion schemes are compared. The experimental results show that a late fusion of the independent probabilities leads to significant improvements in classification performance compared to each separate feature representation and to an ensemble of deep learning approaches.
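The abstract's best-performing scheme, late fusion of independent class probabilities, can be illustrated with a minimal sketch. The function names, probability values, and product-then-renormalize combination rule below are illustrative assumptions, not taken from the paper, which combines the representations in a joint softmax model:

```python
import numpy as np

def late_fusion(p_riesz, p_cnn):
    """Hypothetical late fusion: combine per-class probability vectors from
    two independently trained classifiers (e.g. a Riesz-based model and a
    CNN) by elementwise product, then renormalize to sum to one."""
    fused = np.asarray(p_riesz) * np.asarray(p_cnn)
    return fused / fused.sum()

# Five classes: one healthy plus four diseased tissue types, as in the ILD
# setup described in the abstract. The probability values are made up.
p_riesz = np.array([0.10, 0.50, 0.20, 0.10, 0.10])
p_cnn = np.array([0.05, 0.60, 0.25, 0.05, 0.05])
p_fused = late_fusion(p_riesz, p_cnn)
predicted_class = int(p_fused.argmax())  # index of the fused prediction
```

Because the two classifiers are complementary, classes on which both agree are reinforced while disagreements are damped; soft-voting averages of the two probability vectors are a common alternative combination rule.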


Keywords:
Article Type:
scientific
Faculty:
Economie et Services
School:
HEG-VS
Institute:
Institut Informatique de gestion
Subject(s):
Computer science
Date:
2019-08
Published in:
Medical image analysis
Numeration (vol. no.):
August 2019, vol. 56, pp. 172-183
DOI:
ISSN:
1361-8415
Appears in Collection:



 Record created 2019-10-22, last modified 2019-10-22

