Comparison of Patch-Based Conditional Generative Adversarial Neural Net Models with Emphasis on Model Robustness for Use in Head and Neck Cases for MR-Only Planning

Feb 1, 2019


Authors:
Peter Klages, Ilyes Benslimane, Sadegh Riyahi, Jue Jiang, Margie Hunt, Joseph O. Deasy, Harini Veeraraghavan*, Neelam Tyagi*
*Co-Senior Authors
Acknowledgements:
This research was supported by Philips Healthcare under a Master Research Agreement and partially supported by the NIH/NCI Cancer Center Support Grant/Core Grant (P30 CA008748). The authors would like to acknowledge Dr. Reza Farjam for his assistance with his software and for access to his previous data and results while he was at Memorial Sloan Kettering.
Keywords:
Conditional Generative Adversarial Networks (cGAN), MR-guided radiotherapy, Cycle GAN, Pix2Pix, Synthetic CT Generation

Abstract
This study examines two patch-based conditional generative adversarial network models, Pix2Pix and Cycle GAN, for generating synthetic CT images in head and neck cancer cases. The focus is on how these models handle artifacts commonly encountered in MR images, such as those caused by dental implants. The models were evaluated on three major criteria: mean absolute error (MAE) between the generated synthetic CT and the original CT, mean error (ME) to identify systematic offset errors, and dosimetric evaluation of all clinically relevant structures. The results suggest that these models hold promise for MR-only planning.
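For reference, the two voxel-wise metrics named above can be computed as in the following Python sketch. The function names and the body mask are illustrative assumptions, not taken from this paper:

```python
import numpy as np

def mae(sct, ct, mask):
    """Mean absolute error (HU) between synthetic and real CT inside a mask."""
    return np.abs(sct[mask] - ct[mask]).mean()

def me(sct, ct, mask):
    """Signed mean error (HU); a nonzero value indicates a systematic offset."""
    return (sct[mask] - ct[mask]).mean()
```

Here `mask` would be a boolean array selecting the voxels of interest (e.g., the patient body contour), so that air outside the patient does not dominate the statistics.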

1 Introduction

Magnetic Resonance Imaging (MRI) offers soft-tissue contrast exceeding that of Computed Tomography (CT), making it the preferred modality for delineating tumors in head and neck cancer cases. However, electron density, a parameter essential for radiation therapy planning, cannot be derived directly from MRI, so CT scans are still required. This study explores the efficacy of Generative Adversarial Network (GAN) based models, namely Cycle GAN and Pix2Pix, for converting MRI scans into synthetic CT images.

1.1 Background: Generative Adversarial Networks (GANs)

GANs comprise two adversarial convolutional neural networks, a generator and a discriminator, trained simultaneously: the generator learns to produce images statistically similar to the training set, while the discriminator learns to distinguish generated images from real ones. Two conditional GAN models, Pix2Pix and Cycle GAN, are explored below.
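The following minimal PyTorch sketch illustrates this adversarial training scheme. The single-layer placeholder networks, optimizer settings, and function names are illustrative assumptions, not the architectures used in this study:

```python
import torch
import torch.nn as nn

# Placeholder single-layer "networks"; any image-to-image generator and
# patch discriminator with compatible shapes could be substituted.
G = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(real, source):
    fake = G(source)  # generator output from the source image

    # Discriminator update: label real patches 1 and generated patches 0.
    pred_real, pred_fake = D(real), D(fake.detach())
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    pred_fake = D(fake)
    g_loss = bce(pred_fake, torch.ones_like(pred_fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The `detach()` call when updating the discriminator prevents its gradients from flowing back into the generator, so each network is optimized only against its own objective.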

1.1.1 Image-to-Image Conditional Generative Adversarial Networks (Pix2Pix)

The Pix2Pix model trains on paired CT and MR images from the same patient: a generator produces a synthetic image from the MR input, and a discriminator judges whether the result is real or generated. The training objective combines the adversarial loss with a term that minimizes the difference between the generated image and the original paired image.
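A minimal sketch of the Pix2Pix generator objective under those assumptions is shown below. The placeholder networks are illustrative; the L1 weight of 100 follows the original Pix2Pix paper, not necessarily the configuration used in this study:

```python
import torch
import torch.nn as nn

# Placeholder nets: the conditional discriminator takes the concatenated
# (MR, sCT) pair, hence 2 input channels.
G = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 1, 3, padding=1))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # L1 weight suggested in the original Pix2Pix paper

def pix2pix_generator_loss(mr, ct):
    sct = G(mr)                             # synthetic CT conditioned on the MR patch
    pred = D(torch.cat([mr, sct], dim=1))   # discriminator sees the (MR, sCT) pair
    adv = bce(pred, torch.ones_like(pred))  # adversarial term: fool the discriminator
    return adv + lambda_l1 * l1(sct, ct)    # plus L1 distance to the true paired CT
```

The L1 term is what makes the model "paired": it directly penalizes deviation from the co-registered CT, which requires well-aligned MR/CT training pairs.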

1.1.2 Cycle-consistent Generative Adversarial Networks (Cycle GAN)

Unlike Pix2Pix, Cycle GAN does not require paired inputs. Instead, it adds a cycle-consistency loss: an image translated to the other modality and back should resemble the original. This constraint enforces consistency between the two modality transformations and removes the need for perfectly paired images during training.
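A hedged sketch of that cycle-consistency term follows. The placeholder generators and names are assumptions for illustration; the cycle weight of 10 follows the original CycleGAN paper rather than this study:

```python
import torch
import torch.nn as nn

# Placeholder generators for the two translation directions.
G_mr2ct = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))
G_ct2mr = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))
l1 = nn.L1Loss()
lambda_cyc = 10.0  # cycle weight from the original CycleGAN paper

def cycle_consistency_loss(mr, ct):
    # The MR and CT inputs need not come from the same patient (unpaired training).
    # Forward cycle: MR -> sCT -> MR should reconstruct the input MR.
    mr_rec = G_ct2mr(G_mr2ct(mr))
    # Backward cycle: CT -> sMR -> CT should reconstruct the input CT.
    ct_rec = G_mr2ct(G_ct2mr(ct))
    return lambda_cyc * (l1(mr_rec, mr) + l1(ct_rec, ct))
```

This term is added to the usual adversarial losses for both generators; because it only compares each image with its own reconstruction, no aligned MR/CT pairs are needed.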

The MR-only RT planning methods explored in this study could streamline the radiotherapy planning process for head and neck cancers, with the emphasis on model robustness aimed at consistent and accurate outcomes across patients.
