Abstract
Both resource efficiency and application QoS have long been major concerns of datacenter operators, yet they remain difficult to reconcile. High resource utilization increases the risk of resource contention among co-located workloads, which causes latency-critical (LC) applications to suffer unpredictable, and even unacceptable, performance. Plenty of prior work has been devoted to effective mechanisms that protect the QoS of LC applications while improving resource efficiency. In this paper, we propose MAGI, a resource management runtime that leverages neural networks to monitor and pinpoint the root cause of performance interference, and adjusts the resource shares of the corresponding applications to ensure the QoS of LC applications. MAGI is a practice in Alibaba's datacenters that provides on-demand resource adjustment for applications using neural networks. Experimental results show that MAGI reduces the performance degradation of an LC application by up to 87.3% when it is co-located with antagonist applications.
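To make the described control flow concrete, the following is a minimal, hypothetical sketch of such an interference-mitigation loop: a small neural network scores co-located batch jobs as potential antagonists, and the highest-scoring one is selected for throttling when the LC application violates its latency SLO. All names (AntagonistClassifier, adjust_shares, the feature set, and the toy MLP architecture) are illustrative assumptions, not MAGI's actual implementation or API.

```python
import torch
import torch.nn as nn


class AntagonistClassifier(nn.Module):
    """Toy MLP that scores how likely each co-located batch job is the
    root cause of interference, from its performance counters (e.g.
    cache misses, memory bandwidth). Architecture is illustrative only."""

    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one score per job


def adjust_shares(lc_latency_ms: float, slo_ms: float,
                  job_metrics: dict[str, torch.Tensor],
                  model: AntagonistClassifier) -> str | None:
    """If the LC application violates its latency SLO, pick the most
    suspicious antagonist and (hypothetically) shrink its resource share."""
    if lc_latency_ms <= slo_ms:
        return None  # QoS is satisfied, no adjustment needed
    names = list(job_metrics)
    features = torch.stack([job_metrics[n] for n in names])
    with torch.no_grad():
        scores = model(features)
    culprit = names[int(scores.argmax())]
    # A real runtime would now reduce the culprit's CPU quota or
    # memory-bandwidth allocation (e.g. via cgroups); omitted here.
    return culprit


if __name__ == "__main__":
    model = AntagonistClassifier(n_features=4)  # untrained toy model
    metrics = {"batch-job-a": torch.rand(4), "batch-job-b": torch.rand(4)}
    print(adjust_shares(lc_latency_ms=12.0, slo_ms=5.0,
                        job_metrics=metrics, model=model))
```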
Funding
This work is supported in part by the National Key Research and Development Program of China under Grant No. 2016YFB1000201; the National Natural Science Foundation of China under Grant Nos. 61420106013 and 61702480; the Youth Innovation Promotion Association of the Chinese Academy of Sciences; and the Alibaba Innovative Research (AIR) Program.