Memory Partitioning and Management in Memcached

Carra, Damiano
2019-01-01

Abstract

Memcached is a popular component of modern Web architectures, which allows fast response times, a fundamental performance index for the Quality of Experience of end-users, when serving popular objects. In this work, we study how memory partitioning in Memcached works and how it affects system performance in terms of hit ratio. We first present a cost-based memory partitioning and management mechanism for Memcached that dynamically adapts to user requests and manages the memory according to both object sizes and costs. We present a comparative analysis of the vanilla memory management scheme of Memcached and of our approach, using real traces from a major content delivery network operator. We show that our proposed memory management scheme achieves near-optimal performance, striking a good balance between the performance perceived by end-users and the pressure imposed on back-end servers. We then consider the problem known as "calcification": Memcached divides the memory into different classes in proportion to the percentage of requests for objects of different sizes; once all the available memory has been allocated, reallocation is limited or not possible at all. Using synthetic traces, we show the negative impact of calcification on the hit ratio of Memcached, while our scheme, thanks to its adaptivity, solves the calcification problem and achieves near-optimal performance.
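To make the calcification issue concrete, the following is a minimal, illustrative sketch (not the paper's scheme) of how a Memcached-style slab allocator maps object sizes to size classes; the chunk sizes and growth factor below mimic common defaults and are assumptions, not values taken from the paper.

```python
# Illustrative sketch only: Memcached-style slab classes.
# GROWTH_FACTOR, MIN_CHUNK and PAGE_SIZE are assumed default-like values.

GROWTH_FACTOR = 1.25        # each class's chunk is ~25% larger than the previous one
MIN_CHUNK = 96              # smallest chunk size in bytes (assumption)
PAGE_SIZE = 1024 * 1024     # memory is handed to classes in 1 MB slab pages

def chunk_sizes(max_item=PAGE_SIZE):
    """Chunk size of each slab class, growing geometrically."""
    size = MIN_CHUNK
    while size <= max_item:
        yield size
        size = int(size * GROWTH_FACTOR)

def slab_class(obj_size, classes):
    """An object is stored in the smallest class whose chunk can hold it."""
    return next((c for c in classes if obj_size <= c), None)

classes = list(chunk_sizes())
print(slab_class(100, classes))   # a 100-byte object goes to the 120-byte class
print(slab_class(5000, classes))  # a 5 KB object goes to a much larger class

# Once a class has claimed slab pages, vanilla Memcached rarely gives them back:
# if the request mix later shifts toward other object sizes, those classes keep
# evicting while idle memory sits elsewhere, which is the calcification effect
# discussed in the abstract.
```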
Keywords: Web architectures; performance evaluation
Files in this record:
carra_memcached_tc.pdf (authorized users only)
  Type: Pre-print
  License: Restricted access
  Size: 1.19 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1000822
Citations
  • Scopus: 2
  • Web of Science: 1