• Birk Harboe posted an update 1 week, 2 days ago

    There is an urgent need for portable, low-cost, point-of-care diagnostic instruments to monitor patient health and wellbeing. This need has been elevated by the COVID-19 global pandemic, in which the availability of proper lung imaging equipment has proven pivotal in the timely treatment of patients. Electrical impedance tomography (EIT) has long been studied and used as such a critical imaging modality in hospitals, especially for lung ventilation. Despite decades of research and development, many challenges remain with EIT in terms of 1) optimal image reconstruction algorithms, 2) simulation and measurement protocols, 3) hardware imperfections, and 4) uncompensated tissue bioelectrical physiology. Because these challenges are interconnected, singular solutions to improve EIT performance continue to fall short of the desired sensitivity and accuracy. Motivated to better understand and optimize the EIT system, we report the development of a bioelectric facsimile simulator demonstrating the dynamic operations, sensitivity analysis, and reconstruction outcome prediction of the EIT sensor with stepwise visualization. By building a sandbox platform that incorporates the full anatomical and bioelectrical properties of the tissue under study into the simulation, we created a tissue-mimicking phantom with adjustable EIT parameters to interpret bioelectrical interactions and to optimize image reconstruction accuracy through improved hardware setup and sensing protocol selection.

A significant challenge for brain histological data analysis is to precisely identify anatomical regions in order to perform accurate local quantifications and evaluate therapeutic solutions. Usually, this task is performed manually and is therefore tedious and subjective. Another option is to use automatic or semi-automatic methods, such as segmentation by co-registration with digital atlases.
However, most available atlases are 3D, whereas digitized histological data are 2D. Methods to perform such 2D-3D segmentation from an atlas are therefore required. This paper proposes a strategy to automatically and accurately segment single 2D coronal slices within a 3D atlas volume using linear registration. We validated its robustness and performance using an exploratory approach at whole-brain scale.

Lung segmentation represents a fundamental step in the development of computer-aided decision systems for the investigation of interstitial lung diseases. In a holistic lung analysis, eliminating background areas from Computed Tomography (CT) images is essential to avoid the inclusion of noise and the spending of unnecessary computational resources on non-relevant data. However, the major challenge in this segmentation task lies in the ability of the models to deal with imaging manifestations associated with severe disease. Based on U-net, a general biomedical image segmentation architecture, we proposed a lightweight and faster architecture. In this 2D approach, experiments were conducted with a combination of two publicly available databases to improve the heterogeneity of the training data. Results showed that, when compared to the original U-net, the proposed architecture maintained performance levels, achieving 0.894 ± 0.060, 4.493 ± 0.633 and 4.457 ± 0.628 for the DSC, HD and HD-95 metrics, respectively, when using all patients from the ILD database for testing only, while allowing more efficient computational usage.
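Metrics such as the Dice similarity coefficient (DSC) quoted above can be computed directly from binary segmentation masks. A minimal sketch with NumPy (the array contents below are illustrative toy masks, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 4x4 masks: 3 overlapping pixels, 4 foreground pixels each.
a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
print(dice_coefficient(a, b))  # 2*3/(4+4) = 0.75
```

The Hausdorff distances (HD, HD-95) reported alongside DSC measure boundary agreement rather than overlap, which is why both families of metrics are usually quoted together.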
Quantitative and qualitative evaluations of the ability to cope with high-density lung patterns associated with severe disease were conducted, supporting the idea that more representative and diverse data are necessary to build robust and reliable segmentation tools.

Deep Neural Networks using histopathological images as input currently embody one of the gold standards in automated lung cancer diagnosis, with Deep Convolutional Neural Networks achieving state-of-the-art results for tissue type classification. One of the main reasons for such results is the increasing availability of voluminous amounts of data, acquired through the efforts of extensive projects like The Cancer Genome Atlas. Nonetheless, whole slide images remain weakly annotated, as most pathologist annotations refer to the entirety of the image and not to individual regions of interest in the patient's tissue sample. Recent works have demonstrated Multiple Instance Learning to be a successful approach to classification tasks affected by this lack of annotation, by representing each image as a bag of instances where a single label is available for the whole bag. We therefore propose a bag/embedding-level lung tissue type classifier using Multiple Instance Learning, in which the automated inspection of lung biopsy whole slide images determines the presence of cancer in a given patient. Furthermore, we use a post-model interpretability algorithm to validate our model's predictions and highlight the regions of interest behind those predictions.

Accurately estimating all strain components in quasi-static ultrasound elastography is crucial for the full analysis of biological media. In this paper, 2D strain tensor imaging is investigated using a partial differential equation (PDE)-based regularization method. More specifically, this method employs the tissue property of incompressibility to smooth the displacement fields and reduce the noise in the strain components.
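The incompressibility property used for regularization amounts to requiring that the divergence of the displacement field be close to zero (∂u/∂x + ∂v/∂y ≈ 0). A minimal NumPy illustration of such a divergence penalty, as one building block; the actual PDE-based scheme in the paper is more involved:

```python
import numpy as np

def divergence_penalty(u: np.ndarray, v: np.ndarray) -> float:
    """Mean squared divergence of a 2D displacement field (u, v).

    For an incompressible medium, du/dx + dv/dy should vanish,
    so this value can serve as a regularization term when
    smoothing noisy displacement estimates.
    """
    du_dx = np.gradient(u, axis=1)  # derivative along x (columns)
    dv_dy = np.gradient(v, axis=0)  # derivative along y (rows)
    div = du_dx + dv_dy
    return float(np.mean(div ** 2))

y, x = np.mgrid[0:32, 0:32].astype(float)
# A rigid rotation-like field is divergence-free...
u_rot, v_rot = -(y - 16), (x - 16)
# ...whereas a uniform expansion has divergence 2 everywhere.
u_exp, v_exp = (x - 16), (y - 16)
print(divergence_penalty(u_rot, v_rot))  # 0.0
print(divergence_penalty(u_exp, v_exp))  # 4.0
```

Minimizing this term jointly with a data-fidelity term is what pushes noise out of the lateral displacement estimates, where ultrasound resolution is poorest.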
The performance of the method is assessed with phantoms and in vivo breast tissues. For all the media examined, the results showed a significant improvement in both lateral displacement and strain and also, to a lesser extent, in the shear strain. Moreover, axial displacement and strain were only slightly modified by the regularization, as expected. Finally, the easier detectability of the inclusion/lesion in the final lateral strain images is associated with higher elastographic contrast-to-noise ratios (CNRs), with values in the range [0.68 – 9.40] vs. [0.09 – 0.38] before regularization.

Artifacts and defects in Cone-beam Computed Tomography (CBCT) images are a problem in radiotherapy and surgical procedures. Unsupervised learning-based image translation techniques have been studied to improve the image quality of head and neck CBCT images, but there have been few studies on improving the image quality of abdominal CBCT images, which are strongly affected by organ deformation due to posture and breathing. In this study, we propose a method for improving the image quality of abdominal CBCT images by translating their voxel values to those of corresponding paired CT images using an unsupervised CycleGAN framework. This method preserves anatomical structure through adversarial learning that translates voxel values according to corresponding regions between CBCT and CT images of the same case. The image translation model was trained on 68 CT-CBCT datasets and then applied to 8 test datasets, confirming the effectiveness of the proposed method for improving the image quality of CBCT images.

Detection of the lung contour on chest X-ray images (CXRs) is a necessary step for computer-aided medical image analysis. Because of the low intensity contrast around the lung boundary and the large inter-subject variance, it is challenging to detect the lungs accurately in structural CXR images.
To tackle this problem, we design an automatic, hybrid, two-stage detection network for lung contour detection on CXRs. In the first stage, a preprocessing step based on a deep learning model automatically extracts coarse lung contours. In the second stage, a refinement step fine-tunes the coarse segmentation results using an improved principal curve-based method coupled with an improved machine learning method. The model is evaluated on several public datasets, and experiments demonstrate that the proposed method outperforms state-of-the-art methods.

Clinical Relevance - This approach can help radiologists automatically separate the lungs, decreasing the workload of manually delineating lung contours in CXRs.

The diagnosis and treatment of eye diseases rely heavily on the availability of retinal imaging equipment. To increase accessibility, lower-cost ophthalmoscopes, such as the Arclight, have been developed. However, a common drawback of these devices is a limited field of view. The narrow-field-of-view images of the eye can be concatenated to replicate a wide field of view. However, it is likely that not all angles of the eye are captured, which creates gaps. This limits the usefulness of the images in teaching, which is why artists' impressions of retinal pathologies are often used instead. Recent research in computer vision explores the automatic completion of holes in images by leveraging the structural understanding of similar images gained by neural networks. Specifically, generative adversarial networks are explored, which consist of two neural networks playing a game against each other to facilitate learning. We demonstrate a proof of concept for generative image inpainting of retinal images using generative adversarial networks. Our work is motivated by the aim of producing more realistic images for medical teaching purposes.
We propose the use of a Wasserstein generative adversarial network with a semantic image inpainting algorithm, as it produces the most realistic images.

Clinical relevance - This research shows the use of generative adversarial networks in generating realistic training images.

Earlier studies on brain vasculature semantic segmentation used classical image analysis methods to extract the vascular tree from images. Nowadays, deep learning methods are widely exploited for various image analysis tasks. One of the strong restrictions when using neural networks for semantic segmentation is the need for a ground truth segmentation dataset on which the task will be learned, and it can be cumbersome to manually segment the arteries in 3D volumes (typically MRA-TOF). In this work, we aim to tackle vascular tree segmentation from a new perspective. Our objective is to build an image dataset from mouse vasculatures acquired with CT scans, and to enhance these vasculatures in such a way as to precisely mimic the statistical properties of the human brain. The segmentation of the mouse images is easily automated thanks to their specific acquisition modality. Such a framework thus makes it possible to generate the data necessary for training a Convolutional Neural Network, i.e. the enhanced mouse images and their corresponding ground truth segmentations, without requiring any manual segmentation procedure. However, in order to generate an image dataset with consistent properties (a strong resemblance to MRA images), we have to ensure that the statistical properties of the enhanced mouse images correctly match those of human MRA acquisitions. In this work, we evaluate at length the similarities between the human arteries as acquired on MRA-TOF and the "humanized" mouse arteries produced by our model.
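One simple way to make one modality's intensity statistics resemble another's, as a small building block of such "humanization", is histogram matching via quantile mapping. A minimal NumPy sketch under that assumption; the synthetic arrays stand in for real volumes and this is not the authors' actual enhancement pipeline:

```python
import numpy as np

def match_histogram(source: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Map source intensities so their distribution matches the template's."""
    src_flat = source.ravel()
    # Sorted unique source values with their cumulative frequencies.
    src_values, src_counts = np.unique(src_flat, return_counts=True)
    src_quantiles = np.cumsum(src_counts) / src_flat.size
    # Empirical quantile function of the template.
    tmpl_sorted = np.sort(template.ravel())
    tmpl_quantiles = np.arange(1, tmpl_sorted.size + 1) / tmpl_sorted.size
    # Map each source value to the template value at the same quantile.
    mapped_values = np.interp(src_quantiles, tmpl_quantiles, tmpl_sorted)
    return np.interp(src_flat, src_values, mapped_values).reshape(source.shape)

rng = np.random.default_rng(0)
mouse = rng.normal(50, 5, size=(64, 64))     # stand-in "mouse CT" intensities
human = rng.normal(120, 20, size=(64, 64))   # stand-in "human MRA" intensities
matched = match_histogram(mouse, human)
print(matched.mean(), human.mean())  # the two means now nearly agree
```

Matching first-order intensity statistics like this says nothing about geometric properties (vessel calibers, bifurcation statistics), which is why the abstract's validation against real MRA-TOF acquisitions is the critical step.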
Finally, once the model is duly validated, we test its applicability with a Convolutional Neural Network.

Primary Liver Cancer (PLC) is the sixth most common cancer worldwide and occurs predominantly in patients with chronic liver diseases and other risk factors such as hepatitis B and C. Treatment of PLC and malignant liver tumors depends on both the tumor characteristics and the functional status of the organ, and thus must be individualized for each patient. Liver segmentation and classification according to Couinaud's classification are essential for computer-aided diagnosis and treatment planning; however, manual segmentation of the liver volume slice by slice can be a time-consuming and challenging task, and it is highly dependent on the experience of the user. We propose an alternative automatic segmentation method that improves accuracy and reduces time consumption. The procedure uses multi-atlas-based classification for Couinaud segmentation. Our algorithm was implemented on 20 subjects from the IRCAD 3D database in order to segment and classify the liver volume into its Couinaud segments, obtaining an average DICE coefficient of 0.
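In a multi-atlas approach like the one described, each registered atlas proposes a label per voxel and the proposals are fused, commonly by majority voting. A minimal NumPy sketch of the fusion step only (the registration itself is omitted, and the toy label maps below are illustrative, not Couinaud data):

```python
import numpy as np

def majority_vote_fusion(atlas_labels: np.ndarray) -> np.ndarray:
    """Fuse per-atlas label maps of shape (n_atlases, *volume) by majority vote."""
    n_labels = int(atlas_labels.max()) + 1
    # Count votes per label at every voxel, then pick the winner.
    votes = np.stack([(atlas_labels == lab).sum(axis=0)
                      for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy "registered atlases" proposing segment labels for a 2x3 slab.
atlases = np.array([
    [[1, 1, 2], [3, 3, 2]],
    [[1, 2, 2], [3, 3, 4]],
    [[1, 1, 2], [3, 4, 2]],
])
print(majority_vote_fusion(atlases))
# [[1 1 2]
#  [3 3 2]]
```

More elaborate fusion schemes weight each atlas by its local registration quality, but plain voting already illustrates why using several atlases reduces the impact of any single misregistration.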