Abstract
The enormous growth of Internet traffic has motivated the information-centric networking (ICN) architecture, which aims to better serve content providers and users. Ubiquitous in-network caching is a key ICN technology for guaranteeing user experience. However, most research has focused on the cache placement problem, while cache replacement still relies on classic algorithms inherited from the web-page caching era; in ICN scenarios these algorithms perform almost no better than random replacement. Inspired by the performance gains of ensemble learning, this paper proposes a method that fuses cache replacement models. Replacement algorithms reduce latency by retaining suitable content, so content that takes a long time to fetch should be kept in the cache preferentially. We introduce two replacement models, one latency-sensitive and one based on recent access frequency, to select content worth keeping in the cache for a long time, and fuse them into a single model by linear combination. Experiments show that the fused strategy achieves a higher cache hit ratio than classic replacement strategies and significantly reduces user-perceived latency.
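This record does not give the paper's exact scoring formulas, so the following Python sketch only illustrates the general idea of a linearly fused replacement policy: each cached item receives a score that combines a normalized fetch-latency term with a normalized recent-access-frequency term, and the lowest-scoring item is evicted. The class name FusedReplacementCache, the weights w_latency and w_freq, and the max-based normalization are illustrative assumptions, not the authors' implementation.

class FusedReplacementCache:
    """Cache whose eviction decision linearly fuses two replacement models:
    (1) latency sensitivity: content that took longer to fetch is worth keeping;
    (2) recent access frequency: content accessed more often is worth keeping.
    The weights and normalization here are illustrative assumptions."""

    def __init__(self, capacity, w_latency=0.5, w_freq=0.5):
        self.capacity = capacity
        self.w_latency = w_latency      # weight of the latency-sensitive model
        self.w_freq = w_freq            # weight of the access-frequency model
        self.store = {}                 # content name -> content object
        self.fetch_latency = {}         # content name -> observed fetch delay
        self.access_count = {}          # content name -> number of accesses

    def _score(self, name):
        # Normalize each model's output to [0, 1] before the linear
        # combination, so neither term dominates purely by its scale.
        max_lat = max(self.fetch_latency.values()) or 1.0
        max_cnt = max(self.access_count.values()) or 1
        latency_term = self.fetch_latency[name] / max_lat
        freq_term = self.access_count[name] / max_cnt
        return self.w_latency * latency_term + self.w_freq * freq_term

    def get(self, name):
        """Return cached content on a hit (updating its frequency), else None."""
        if name in self.store:
            self.access_count[name] += 1
            return self.store[name]
        return None

    def put(self, name, content, fetch_latency):
        """Insert content fetched with the given delay, evicting the item
        with the lowest fused score when the cache is full."""
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self._score)
            for table in (self.store, self.fetch_latency, self.access_count):
                del table[victim]
        self.store[name] = content
        self.fetch_latency[name] = fetch_latency
        self.access_count[name] = self.access_count.get(name, 0) + 1

In a simulation, put(name, content, latency) would be called on a cache miss with the measured fetch delay and get(name) on every request; tuning the two weights trades the latency model off against the frequency model.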
Authors
周天驰 (ZHOU Tianchi), 孙鹏 (SUN Peng), 刘春梅 (LIU Chunmei)
National Network New Media Engineering Research Center, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
Source
《网络新媒体技术》 (Network New Media Technology)
2022, No. 5, pp. 33-40 (8 pages)
Funding
Research on Transmission Technology for Information-Centric Networking (Grant No. E1551802).
Keywords
information-centric network
cache replacement
model integration
in-network caching