Reducing Redundant Data in SDN-based NDN using Single-State Q-learning
Abstract
Software-defined networking (SDN), a cornerstone of future-generation networks, is adopted in Named Data Networking (NDN) for large-scale deployment. The forwarding strategies proposed for SDN-based NDN primarily use the centralized controller to optimize Interest forwarding and Data delivery. The nodes direct Interests to the controller to discover the content source(s) and suppress sub-optimal responses. To support such content discovery and delivery, the controller performs frequent path calculations and exchanges excessive control messages to install paths in the nodes, owing to rapid cache admission and replacement. Moreover, typical NDN forwarding solutions are either not viable in SDN-based NDN or require considerable modification. To that end, the proposed strategy, SDN-Q, optimizes Interest forwarding and Data delivery using a Single-State Q-learning-based technique. In SDN-Q, each content source learns to suppress sub-optimal responses, with the learning task offloaded to the controller. The controller communicates the learning decision to the nodes, and each node retains only the action (decision) needed to serve an incoming Interest. When an Interest hits, the source either replies with the Data or remains silent and sends the Interest's information (meta-data) to the controller for the learning task. Thus, SDN-Q keeps the NDN nodes lightly loaded: each node can answer an Interest request immediately without redirecting it to the controller. Additionally, Interest forwarding is optimized using a hop-based scoped-flooding approach. The proof-of-concept (POC) implementation reveals that the proposed system outperforms the competing strategies, reducing traffic load, latency, and control messages in SDN-based NDN by up to 40%, 7%, and four times, respectively, without compromising the packet delivery ratio.
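To make the controller-side learning described in the abstract concrete, the sketch below shows a minimal single-state Q-learning agent of the kind a controller might maintain per content source. It is an illustration under stated assumptions, not the paper's implementation: the class and function names (SingleStateQLearner, on_interest_metadata), the action set ("reply" vs. "suppress"), and the learning-rate, exploration, and reward parameters are all hypothetical choices made here for clarity.

```python
import random

class SingleStateQLearner:
    """Single-state Q-learning: Q-values are indexed by action only."""

    ACTIONS = ("reply", "suppress")  # assumed action set for a content source

    def __init__(self, alpha=0.1, gamma=0.0, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # with a single state there is no successor state,
                                # so the discount term is typically set to 0
        self.epsilon = epsilon  # exploration rate for epsilon-greedy selection
        self.q = {a: 0.0 for a in self.ACTIONS}

    def choose_action(self):
        # Epsilon-greedy choice over the single state's action values.
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Q(a) <- Q(a) + alpha * (reward + gamma * max_a' Q(a') - Q(a))
        target = reward + self.gamma * max(self.q.values())
        self.q[action] += self.alpha * (target - self.q[action])


# Hypothetical controller-side table: one learner per (content name, source) pair.
learners = {}

def on_interest_metadata(content_name, source_id, last_action, reward):
    """Handle Interest meta-data reported by a content source.

    'reward' is assumed to encode delivery quality (e.g. negative for a
    redundant or sub-optimal response, positive for a useful one). The
    controller updates the Q-value of the node's previous action and
    returns the next action to install at that node.
    """
    key = (content_name, source_id)
    learner = learners.setdefault(key, SingleStateQLearner())
    if last_action is not None:
        learner.update(last_action, reward)
    return learner.choose_action()
```

Because there is only one state, the bootstrap term collapses and each Q-value reduces to an exponentially weighted average of the rewards observed for that action; this is consistent with the abstract's claim that a node only needs to retain the single action (decision) pushed by the controller to serve incoming Interests locally.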
Collections
- M.Sc Thesis/Project