TY - GEN
T1 - Non-Blocking In-Network Caching for High-Capacity Content Routers
AU - Pan, Tian
AU - Lin, Xingchen
AU - Huang, Tao
AU - Li, Hao
AU - Lv, Jianhui
AU - Zhang, Beichuan
N1 - Funding Information:
This work is supported by National Natural Science Foundation of China (61702049, 61702407) and Huawei Innovation Research Program.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/4
Y1 - 2019/4
N2 - Unlike an IP router's stateless forwarding model, a content router has a sophisticated data plane consisting of a three-stage pipeline: FIB, PIT, and CS. Generally, a pipeline runs only as fast as its slowest stage. Compared with the PIT and FIB, the CS design is more challenging because it has more data to read/write, may have more entries in its table to store and look up, and needs to organize content objects to sustain frequent cache replacement. To address the CS's performance issue, we propose a novel mechanism called 'NB-Cache' from a network-wide point of view rather than a single router's. In NB-Cache, when packets arrive at a router whose CS is fully loaded, instead of being blocked and waiting for the CS, these packets are forwarded to the next-hop router, whose CS may not be fully loaded. This approach essentially utilizes the Content Stores of all the routers along the forwarding path in parallel rather than checking each CS sequentially. Preliminary evaluation shows a significant data plane performance improvement, with a 130% increase in throughput.
AB - Unlike an IP router's stateless forwarding model, a content router has a sophisticated data plane consisting of a three-stage pipeline: FIB, PIT, and CS. Generally, a pipeline runs only as fast as its slowest stage. Compared with the PIT and FIB, the CS design is more challenging because it has more data to read/write, may have more entries in its table to store and look up, and needs to organize content objects to sustain frequent cache replacement. To address the CS's performance issue, we propose a novel mechanism called 'NB-Cache' from a network-wide point of view rather than a single router's. In NB-Cache, when packets arrive at a router whose CS is fully loaded, instead of being blocked and waiting for the CS, these packets are forwarded to the next-hop router, whose CS may not be fully loaded. This approach essentially utilizes the Content Stores of all the routers along the forwarding path in parallel rather than checking each CS sequentially. Preliminary evaluation shows a significant data plane performance improvement, with a 130% increase in throughput.
UR - http://www.scopus.com/inward/record.url?scp=85073200432&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85073200432&partnerID=8YFLogxK
U2 - 10.1109/INFCOMW.2019.8845304
DO - 10.1109/INFCOMW.2019.8845304
M3 - Conference contribution
AN - SCOPUS:85073200432
T3 - INFOCOM 2019 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2019
SP - 1013
EP - 1014
BT - INFOCOM 2019 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 INFOCOM IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2019
Y2 - 29 April 2019 through 2 May 2019
ER -