TY - GEN
T1 - AI-based robust convex relaxations for supporting diverse QoS in next-generation wireless systems
AU - Chan, Steve
AU - Krunz, Marwan
AU - Griffin, Bob
N1 - Funding Information:
The authors would like to thank Vit Tall and the University of Arizona for the collaborative framework pertaining to this 5G/B5G/6G-related
Publisher Copyright:
© 2021 IEEE.
PY - 2021/7
Y1 - 2021/7
N2 - Supporting diverse Quality of Service (QoS) requirements in 5G and beyond wireless systems often involves solving a succession of convex optimization problems, with varied approaches to optimally resolve each problem. Even when the input set is specifically designed/architected to segue to a convex paradigm, the resultant output set may still turn out to be nonconvex, thereby necessitating a transformation to a convex optimization problem via certain relaxation techniques. This transformation may itself spawn yet other nonconvex optimization problems, highlighting the need/opportunity to utilize a Robust Convex Relaxation (RCR) framework. In this paper, we explore a particular class of Convolutional Neural Networks (CNNs), namely Deep Convolutional Generative Adversarial Networks (DCGANs), not only to solve the QoS-related convex optimization problems but also to leverage the same RCR mechanism for tuning their own hyperparameters. This approach gives rise to various technical challenges. For example, Particle Swarm Optimization (PSO) is often used for hyperparameter reduction/tuning. When implemented on a DCGAN, PSO requires converting continuous/discontinuous hyperparameters to discrete values, which may result in premature stagnation of particles at local optima. The associated implementation mechanics, such as increasing the inertial weighting, may spawn yet other convex optimization problems. We introduce an RCR framework that capitalizes upon the feed-forward structure of the 'You Only Look Once' (YOLO)-based DCGAN. Specifically, we use a squeezed Deep Convolutional-YOLO-Generative Adversarial Network (DC-YOLO-GAN), hereinafter referred to as a Modified Squeezed YOLO v3 Implementation (MSY3I), combined with convex relaxation adversarial training to improve the bound tightening for each successive neural network layer and to better facilitate global optimization via a specific numerical stability implementation within MSY3I.
AB - Supporting diverse Quality of Service (QoS) requirements in 5G and beyond wireless systems often involves solving a succession of convex optimization problems, with varied approaches to optimally resolve each problem. Even when the input set is specifically designed/architected to segue to a convex paradigm, the resultant output set may still turn out to be nonconvex, thereby necessitating a transformation to a convex optimization problem via certain relaxation techniques. This transformation may itself spawn yet other nonconvex optimization problems, highlighting the need/opportunity to utilize a Robust Convex Relaxation (RCR) framework. In this paper, we explore a particular class of Convolutional Neural Networks (CNNs), namely Deep Convolutional Generative Adversarial Networks (DCGANs), not only to solve the QoS-related convex optimization problems but also to leverage the same RCR mechanism for tuning their own hyperparameters. This approach gives rise to various technical challenges. For example, Particle Swarm Optimization (PSO) is often used for hyperparameter reduction/tuning. When implemented on a DCGAN, PSO requires converting continuous/discontinuous hyperparameters to discrete values, which may result in premature stagnation of particles at local optima. The associated implementation mechanics, such as increasing the inertial weighting, may spawn yet other convex optimization problems. We introduce an RCR framework that capitalizes upon the feed-forward structure of the 'You Only Look Once' (YOLO)-based DCGAN. Specifically, we use a squeezed Deep Convolutional-YOLO-Generative Adversarial Network (DC-YOLO-GAN), hereinafter referred to as a Modified Squeezed YOLO v3 Implementation (MSY3I), combined with convex relaxation adversarial training to improve the bound tightening for each successive neural network layer and to better facilitate global optimization via a specific numerical stability implementation within MSY3I.
KW - 5G networks
KW - Convex relaxation
KW - Deep convolutional generative adversarial networks
KW - Mixed integer non-linear programming
KW - Nonconvex optimization
KW - Numerical implementation
KW - Particle swarm optimization
KW - Quality of service
KW - Robust convex relaxation framework
KW - You only look once
UR - http://www.scopus.com/inward/record.url?scp=85116945874&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85116945874&partnerID=8YFLogxK
U2 - 10.1109/ICDCSW53096.2021.00014
DO - 10.1109/ICDCSW53096.2021.00014
M3 - Conference contribution
AN - SCOPUS:85116945874
T3 - Proceedings - 2021 IEEE 41st International Conference on Distributed Computing Systems Workshops, ICDCSW 2021
SP - 41
EP - 48
BT - Proceedings - 2021 IEEE 41st International Conference on Distributed Computing Systems Workshops, ICDCSW 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 41st IEEE International Conference on Distributed Computing Systems Workshops, ICDCSW 2021
Y2 - 7 July 2021 through 10 July 2021
ER -