TY - GEN
T1 - QoE and power efficiency tradeoff for fog computing networks with fog node cooperation
AU - Xiao, Yong
AU - Krunz, Marwan
N1 - Funding Information:
This research was supported in part by the National Science Foundation (grants # IIP-1265960, IIP-1535573, CNS-1563655, and CNS-1409172). Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the author(s) and do not necessarily reflect the views of NSF.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/10/2
Y1 - 2017/10/2
N2 - This paper studies the workload offloading problem in fog computing networks, in which a set of fog nodes can offload part or all of the workload originally targeted to cloud data centers in order to improve the quality-of-experience (QoE) of users. We investigate two performance metrics for fog computing networks: users' QoE and fog nodes' power efficiency, and observe a fundamental tradeoff between them. We then consider cooperative fog computing networks in which multiple fog nodes help each other to jointly offload workload from cloud data centers. We propose a novel cooperation strategy, referred to as offload forwarding, in which each fog node, instead of always relying on cloud data centers to process its unprocessed workload, can forward part or all of that workload to its neighboring fog nodes to further improve the QoE of its users. A distributed optimization algorithm based on the alternating direction method of multipliers (ADMM) with variable splitting is proposed to obtain the optimal workload allocation that maximizes users' QoE under a given power efficiency. As a case study, we consider a fog computing platform supported by a wireless infrastructure to verify the performance of the proposed framework. Numerical results show that the proposed approach significantly improves the performance of fog computing networks.
AB - This paper studies the workload offloading problem in fog computing networks, in which a set of fog nodes can offload part or all of the workload originally targeted to cloud data centers in order to improve the quality-of-experience (QoE) of users. We investigate two performance metrics for fog computing networks: users' QoE and fog nodes' power efficiency, and observe a fundamental tradeoff between them. We then consider cooperative fog computing networks in which multiple fog nodes help each other to jointly offload workload from cloud data centers. We propose a novel cooperation strategy, referred to as offload forwarding, in which each fog node, instead of always relying on cloud data centers to process its unprocessed workload, can forward part or all of that workload to its neighboring fog nodes to further improve the QoE of its users. A distributed optimization algorithm based on the alternating direction method of multipliers (ADMM) with variable splitting is proposed to obtain the optimal workload allocation that maximizes users' QoE under a given power efficiency. As a case study, we consider a fog computing platform supported by a wireless infrastructure to verify the performance of the proposed framework. Numerical results show that the proposed approach significantly improves the performance of fog computing networks.
KW - ADMM
KW - Fog computing
KW - Offload forwarding
KW - Power efficiency
KW - Response-time analysis
UR - http://www.scopus.com/inward/record.url?scp=85034042932&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85034042932&partnerID=8YFLogxK
U2 - 10.1109/INFOCOM.2017.8057196
DO - 10.1109/INFOCOM.2017.8057196
M3 - Conference contribution
AN - SCOPUS:85034042932
T3 - Proceedings - IEEE INFOCOM
BT - INFOCOM 2017 - IEEE Conference on Computer Communications
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE Conference on Computer Communications, INFOCOM 2017
Y2 - 1 May 2017 through 4 May 2017
ER -