Generative Adversarial Networks (GANs) are used to generate synthetic samples that closely follow the underlying distribution of a real data set. While GANs have recently gained significant popularity, they often suffer from the mode collapse problem, where the generated samples lack diversity. Moreover, some approaches that attempt to resolve the mode collapse problem do not necessarily yield high-quality synthetic samples. In this paper, we propose two novel performance metrics. First, we propose the mode-collapse divergence (MCD), which quantifies the extent of mode collapse for a GAN architecture. Second, we propose the Generative Quality Score (GQS), which measures the quality of generated samples. We present a comprehensive study of the performance of various GAN architectures proposed in the literature through the lens of MCD and GQS, on three different data sets, namely MNIST, Fashion MNIST, and CIFAR-10.