A few thoughts about memory cache

Today I started looking at different approaches and techniques used in scalable web and non-web applications. One technique used in many large systems is simply called “memory cache”. It means that data is cached in memory so it will not be queried again.

Caches and memory caches have existed for a long time; even hardware components like hard disks and CD units have some sort of memory cache.

Why does the memory cache become so important when we talk about web applications? It’s simple: web applications have to fulfill thousands of requests simultaneously, or at least sometimes they do. It’s obvious that keeping data in memory and reusing it the next time you need it will improve performance. And it looks very simple. At first view a simple hashmap would do the job, unless…

There are a few facts we need to consider in the real world:

  • What happens when the database is updated?
  • In most cases a web application creates a thread for each HTTP request. The same thing happens in Java, PHP and other languages. The creation of the threads is handled by the web server or web container, not by the code we write.
  • Scalable applications run distributed on multiple servers. If one application instance changes the database, the cache system should be informed on all the machines.

The cache logic is very simple: if the data can be retrieved from the cache, it is retrieved from the cache; if not, it is retrieved from the database and put in the cache:

if (data is in cache)
   retrieve data from cache
else
   retrieve data from database
   add data to cache
use data
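The flow above can be sketched in Java with a plain hashmap, here a ConcurrentHashMap so it is safe to share between threads. The class and method names, and loadFromDatabase as a stand-in for the real query, are all hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal read-through cache: look in the map first, fall back to the
// (simulated) database on a miss, and remember the result for next time.
public class MemoryCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String get(String key) {
        // computeIfAbsent runs the loader only when the key is missing
        return cache.computeIfAbsent(key, this::loadFromDatabase);
    }

    public void invalidate(String key) {
        cache.remove(key);
    }

    // Hypothetical stand-in for a real database query.
    private String loadFromDatabase(String key) {
        return "value-for-" + key;
    }
}
```

The second call to get with the same key never touches loadFromDatabase; that is the whole point of the cache.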

How to handle database updates?

Even if we have a simple application and use a simple hashmap as a memory cache, we still have to address this issue. When we update the database, the cache should be updated with the new data, or the old data should be removed from the cache.

Each time we update something in the database we have to remove the updated entities from the cache. We should take special care because our changes might affect other entities kept in the cache:

update database  
invalidate saved and affected entities from cache
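A hedged sketch of this write path in Java, with the database simulated by a second map and the entity keys chosen only for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Write path: update the database first, then drop the stale entries --
// both the saved entity and anything it affects (here, a cached user list).
public class UserStore {
    final Map<String, String> cache = new ConcurrentHashMap<>();
    final Map<String, String> database = new ConcurrentHashMap<>(); // simulated DB

    public void updateEmail(String userId, String email) {
        database.put(userId, email);       // 1. update the database
        cache.remove("user:" + userId);    // 2. invalidate the saved entity
        cache.remove("userlist:all");      // 3. invalidate affected entities too
    }
}
```

The next read of either key will miss the cache and reload fresh data from the database.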

Memory Cache in Web Applications

As I said, web applications are special applications. For each HTTP request a thread will be created. In order to make sure that everything works fine we must ensure that:
– All the HTTP request threads access the same data in the cache. There are many ways to achieve this; singletons can be taken into discussion.
– The way the data is accessed is synchronized – one thread must not read the data while another one is updating it.

Distributed applications

The applications where a memory cache is required are applications with many users, and they usually run in distributed environments. There are two options here:
– each server instance has its own memory cache. If an entity is changed on one instance, the affected entities should be invalidated in the cache on all the other server instances. The server instance changing the database has to trigger the propagation of the changes to the other server instances.
– the cache runs as a separate process and all the server instances use it in common, connecting remotely to read/invalidate cached data.
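The first option can be illustrated in-process, with each “server instance” as an object holding its own local cache and broadcasting invalidations to its peers; in a real deployment the broadcast would go over the network. All names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Option one, simulated: every server instance keeps a local cache, and
// whoever updates the database broadcasts the invalidation to all peers.
public class ServerInstance {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final List<ServerInstance> peers = new ArrayList<>();

    public void join(ServerInstance peer)        { peers.add(peer); }
    public void cache(String key, String value)  { localCache.put(key, value); }
    public boolean isCached(String key)          { return localCache.containsKey(key); }

    public void updateEntity(String key, String newValue) {
        // 1. update the database (elided in this sketch)
        // 2. drop the stale entry locally
        localCache.remove(key);
        // 3. tell every other instance to do the same
        for (ServerInstance p : peers) {
            p.onInvalidate(key);
        }
    }

    void onInvalidate(String key) { localCache.remove(key); }
}
```

The second option trades this fan-out for a single shared cache process (memcached is a well-known example), at the cost of a network round trip on every cache read.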

Did you enjoy this tutorial? Be sure to subscribe to my RSS feed so you don’t miss my new posts!


  1. But what about the changes that happen in the database – how are they managed and how is this taken care of in a clustering environment? There are some ORM solutions that take care of this, but what about JDBC applications, and how do we handle concurrency control in that case?
