Supporting diverse Quality of Service (QoS) requirements in 5G and beyond wireless systems often involves solving a succession of convex optimization problems, each requiring a different approach for optimal resolution. Even when the input set is specifically designed to admit a convex formulation, the resulting output set may still be nonconvex, necessitating a transformation to a convex optimization problem via relaxation techniques. This transformation may itself spawn further nonconvex optimization problems, highlighting the opportunity to employ a Robust Convex Relaxation (RCR) framework. In this paper, we explore a particular class of Convolutional Neural Networks (CNNs), namely Deep Convolutional Generative Adversarial Networks (DCGANs), not only to solve the QoS-related convex optimization problems but also to leverage the same RCR mechanism for tuning the network's own hyperparameters. This approach gives rise to several technical challenges. For example, Particle Swarm Optimization (PSO) is often used for hyperparameter reduction and tuning. When applied to a DCGAN, PSO requires converting continuous or discontinuous hyperparameters to discrete values, which may cause particles to stagnate prematurely at local optima. The associated implementation mechanics, such as increasing the inertia weight, may in turn introduce additional convex optimization problems. We introduce an RCR framework that capitalizes on the feed-forward structure of a 'You Only Look Once' (YOLO)-based DCGAN. Specifically, we use a squeezed Deep Convolutional-YOLO-Generative Adversarial Network (DC-YOLO-GAN), hereinafter referred to as a Modified Squeezed YOLO v3 Implementation (MSY3I), combined with convex-relaxation adversarial training to tighten the bounds at each successive neural network layer and to facilitate global optimization via a specific numerical-stability implementation within MSY3I.
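The PSO discretization issue noted above can be illustrated with a minimal sketch. This is not the paper's implementation: the two-dimensional hyperparameter grid, the toy objective (a stand-in for validation loss), and all parameter values below are illustrative assumptions. It shows the standard velocity update, where continuous particle positions are rounded onto the discrete grid for evaluation, and where the inertia weight `w` governs the exploration that counteracts premature stagnation at local optima.

```python
import random

def objective(lr_idx, bs_idx):
    # Hypothetical surrogate for DCGAN validation loss over two
    # discrete hyperparameter indices; optimum at (7, 3).
    return (lr_idx - 7) ** 2 + (bs_idx - 3) ** 2

def pso(num_particles=20, iters=100, w=0.9, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 9), rng.uniform(0, 9)] for _ in range(num_particles)]
    vel = [[0.0, 0.0] for _ in range(num_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(round(p[0]), round(p[1])) for p in pos]
    g = min(range(num_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(num_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                # Inertia weight w scales the previous velocity: a larger w
                # keeps particles exploring, mitigating premature stagnation
                # once positions are snapped to the discrete grid.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(9.0, max(0.0, pos[i][d] + vel[i][d]))
            # Continuous positions are rounded to discrete hyperparameter
            # indices before evaluation, as required for a discrete search.
            val = objective(round(pos[i][0]), round(pos[i][1]))
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return round(gbest[0]), round(gbest[1]), gbest_val

lr_idx, bs_idx, best_val = pso()
```

With this convex toy objective the swarm settles near the grid optimum; on a genuinely multimodal DCGAN loss surface, a too-small `w` would instead freeze the rounded positions at a local optimum, which is the failure mode the abstract refers to.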