Abstract

We present a model for predictive caching in which a shared cache is used to improve performance across a grid. Unlike local caching mechanisms, shared grid- or cloud-based caches incur high costs or latency from the additional data transfer they require. Our proposed caching model, which is dynamically optimized and continually updated over time, determines the allocation of objects to the shared cache that minimizes total cost or latency. It does so by incorporating into the caching algorithm measures of grid latency, data retrieval costs, and a predictive component based on the probability that cached objects will be requested in the near future.
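
To make the allocation decision concrete, the sketch below illustrates one plausible reading of the model: each object's expected saving is its predicted request probability times the cost difference between a grid retrieval and a shared-cache hit, and the cache is filled greedily by saving per unit of capacity. All names, the cost fields, and the greedy heuristic are illustrative assumptions; the abstract does not specify the actual optimization method.

from dataclasses import dataclass

@dataclass
class CacheObject:
    # All fields are assumed inputs to the model, not taken from the source.
    name: str
    size: int                   # capacity units consumed in the shared cache
    retrieval_cost: float       # cost/latency of fetching from the grid source
    cache_latency: float        # cost/latency of serving from the shared cache
    request_probability: float  # predicted probability of a near-term request

def expected_saving(obj: CacheObject) -> float:
    # Probability-weighted cost avoided by serving from the shared cache
    # instead of retrieving from the grid.
    return obj.request_probability * (obj.retrieval_cost - obj.cache_latency)

def allocate(objects: list[CacheObject], capacity: int) -> list[CacheObject]:
    # Greedy stand-in for the optimal allocation: rank objects by expected
    # saving per unit of capacity (a common heuristic for this
    # knapsack-like problem) and admit them until the cache is full.
    ranked = sorted(objects, key=lambda o: expected_saving(o) / o.size, reverse=True)
    chosen, used = [], 0
    for obj in ranked:
        if expected_saving(obj) > 0 and used + obj.size <= capacity:
            chosen.append(obj)
            used += obj.size
    return chosen

if __name__ == "__main__":
    # Hypothetical catalog; rerunning allocate() as probabilities and costs
    # change corresponds to the "constantly updated over time" aspect.
    catalog = [
        CacheObject("A", size=4, retrieval_cost=120.0, cache_latency=5.0, request_probability=0.9),
        CacheObject("B", size=2, retrieval_cost=80.0, cache_latency=5.0, request_probability=0.3),
        CacheObject("C", size=6, retrieval_cost=200.0, cache_latency=5.0, request_probability=0.1),
    ]
    for obj in allocate(catalog, capacity=8):
        print(obj.name, round(expected_saving(obj), 1))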
