Category: Scalability

Using Memcached with Java

August 10, 2009

Why not JBoss Cache?
If you are looking for a caching solution for your Java-based enterprise application, the default tendency is to go with a Java cache. I have been using JBoss Cache for a couple of years now. It is a very powerful, smart cache that provides clustering, synchronized replication, and transaction support. That is, given a cluster of JBoss Cache instances, each instance is aware of the others and is kept in sync. That way, if one of the instances goes down, the other instances will still be serving your data.

Having been plagued with memory problems over and over again, I finally gave up on JBoss Cache and decided to go with a simpler and dumber solution: Memcached.

Memcached is widely popular, especially in the PHP and Rails communities. My main reasons for switching from JBoss Cache to Memcached are:

1. JBoss Cache is replicated, so there is the overhead of syncing the nodes; all the nodes try to keep the same state. Memcached is distributed and each node is dumb about the other nodes. Each piece of data lives in only one of the nodes, and the nodes don't know about each other. If one node fails, only some hits are missed. While this may seem like a disadvantage, it is actually a blessing if you are willing to trade the complexity for simplicity and ease of maintenance.

2. JBoss Cache comes with a pretty complicated configuration. Memcached doesn't require any configuration.

3. JBoss Cache lives in your JVM, and you have to tune the JVM for optimal memory, which isn't always fun as the nature and amount of your data change. Memcached uses the amount of RAM you specify. When the memory becomes full, it evicts older data based on LRU.

In short, the fact that Memcached is so simple and requires almost no maintenance was a big win for me. However, if your application is such that a more sophisticated cache makes sense, you should definitely consider using one.
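Point 1 above can be sketched in a few lines. The NodePicker class below is my own illustration, not part of any client library: a distributed client hashes the key on the client side, so each key maps to exactly one server, and the servers never need to talk to each other. (Real clients such as SpyMemcached use more robust schemes like consistent hashing, but the idea is the same.)

```java
public class NodePicker {
    // Map a key to one of N servers by hashing it on the client side.
    // The servers never talk to each other; only the client knows the layout.
    public static int pickNode(String key, int numServers) {
        // Mask off the sign bit so the index is always non-negative.
        return (key.hashCode() & 0x7fffffff) % numServers;
    }
}
```

Note the downside of this naive scheme: when the server list changes, a plain modulo remaps most keys, which is why real clients prefer consistent hashing.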


The Memcached server (its protocol is defined in the protocol.txt document that ships with the source) is an in-memory cache that stores anything from binary data to text to primitives, associated with a key as a key-value pair. As with any other cache, storing data in memory saves you from going to the database, fileserver, or other backend system every time a user requests the data. That takes a lot of load off your backend systems, leading to higher scalability. And since the data is stored in memory, it is generally faster than making an expensive backend call, too.

However, Memcached is not a persistent store, and it doesn't guarantee that something will be in the cache just because you stored it. So you should never rely on the fact that Memcached is storing your data. Memcached should strictly be used for caching purposes only, not for reliable storage.

The only limitations with Memcached (that you need to be aware of) are that a key must be less than 250 characters and each value must not exceed 1 MB.
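One way to stay within the key-length limit is to hash any oversized key before using it. The SafeKey helper below is a sketch of that idea; the class name and the length check are my own, not part of any client library:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SafeKey {
    private static final int MAX_KEY_LENGTH = 250;

    // Return the key unchanged when it fits; otherwise substitute a
    // fixed-length (40 hex characters) SHA-1 digest of the key.
    public static String safeKey(String key) {
        if (key.length() <= MAX_KEY_LENGTH) {
            return key;
        }
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(key.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }
}
```

This also helps when you want to use something long, like a SQL query, as the key.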

1. Install Libevent
Memcached uses the Libevent library for network I/O.

$ cd libevent-1.4.11-stable
$ autoconf
$ ./configure --prefix=/usr/local
$ make
$ sudo make install

2. Install Memcached:
Download the latest version of Memcached from Danga Interactive, who originally developed Memcached for LiveJournal.

$ cd memcached-1.4.0
$ autoconf
$ ./configure --prefix=/usr/local
$ make
$ sudo make install

3. Run memcached:
Start memcached as a daemon with 512 MB of memory on port 11211 (the default). Then you can telnet to that host and port and use any of the available commands.

$ memcached -d -m 512 -p 11211

$ telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get joe
END
set joe 0 3600 10    (Note: flags 0, TTL 3600 seconds, 10 bytes)
0123456789
STORED
get joe
VALUE joe 0 10
0123456789
END

Spy Memcached (Memcached Java Client):
Basic Usage:

There are a few good Java clients for Memcached. I briefly looked at Whalin's Memcached client and Dustin's SpyMemcached client, and decided to go with the latter for minor reasons. You can start with the API as shown in the docs:

MemcachedClient c = new MemcachedClient(new InetSocketAddress("localhost", 11211));
c.set("someKey", 3600, someObject);
Object myObject=c.get("someKey");

The MemcachedClient is a single-threaded client to each of the Memcached servers in the pool. The set method stores an object in the cache under a given key; if a value already exists for the key, it is overwritten. It takes a timeToLive value in seconds, which is the expiration time for the object. Even though many requests come in, the client handles one at a time, while the rest wait in a queue. The get method retrieves the object for the given key, and the delete method deletes the value.

There are other methods available for storage, retrieval, and update, but most of the time you will get by with just the three methods get, set, and delete.


By design, the memcached server doesn't have any authentication around it. So it's your job to secure the memcached server and its port from outside networks. Furthermore, just to obscure the keys, you can prefix each key with some secret code or use a hash of the key as the key.

For example:

String randomCode = "aaaaaaaaaaaaaaaaaaaa";
c.set(randomCode + "someKey", 3600, someObject);
Object myObject=c.get(randomCode + "someKey");

Adding/Removing a cache server:

If you need to scale up and want to add a new memcached server, you just add the server's IP and port to the pool of existing servers, and the memcached client will take it into account. If you want to scale down and get rid of a server, just remove it from the pool. There will be cache misses for the data that lived on that server for a while, but the cache will soon recover as it starts caching the data onto the other available servers. The same thing happens if you lose connectivity to one of the servers. If you are worried about flooding the database when you lose a memcached server, you should have the data pre-fetched onto another server. The memcached servers themselves don't know anything about each other; it's all a function of the client.

MemcachedClient c =  new MemcachedClient(new BinaryConnectionFactory(),
                        AddrUtil.getAddresses("server1:11211 server2:11211"));

Connection Pooling:

The MemcachedClient keeps a TCP connection open to each memcached server. (Facebook has released a modified version of memcached that uses UDP to reduce the number of connections.) So you might want to know how many connections are in use:

$ netstat -na | grep 11211
tcp4       0      0        ESTABLISHED
tcp4       0      0        ESTABLISHED

There is really no way to explicitly close the TCP connections. However, each get or set is atomic in itself, and there is no real harm in opening as many TCP connections as you like, since Memcached is designed to work well with a large number of open connections.
Update: As Matt said in the comments below, each connection is asynchronous and non-blocking. So while there is one thread per node, requests are multiplexed, and there is very little benefit to doing explicit pooling with multiple copies of the client connections.

MyCache Singleton:

So with all the changes, here’s what my wrapper around MemcachedClient looks like:

import java.io.IOException;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.BinaryConnectionFactory;
import net.spy.memcached.MemcachedClient;

public class MyCache {
	// Namespace prefix: domain plus a random code for obscurity (see above)
	private static final String NAMESPACE = "SACHARYA:5d41402abc4b2a76b9719d91101:";
	private static final int NUM_CLIENTS = 21;
	private static MyCache instance = null;
	private static MemcachedClient[] m = null;

	private MyCache() {
		// Set up a pool of client connections
		m = new MemcachedClient[NUM_CLIENTS];
		for (int i = 0; i < NUM_CLIENTS; i++) {
			try {
				m[i] = new MemcachedClient(
						new BinaryConnectionFactory(),
						AddrUtil.getAddresses("localhost:11211"));
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}

	public static synchronized MyCache getInstance() {
		if (instance == null) {
			System.out.println("Creating a new instance");
			instance = new MyCache();
		}
		return instance;
	}

	public void set(String key, int ttl, final Object o) {
		getCache().set(NAMESPACE + key, ttl, o);
	}

	public Object get(String key) {
		Object o = getCache().get(NAMESPACE + key);
		if (o == null) {
			System.out.println("Cache MISS for KEY: " + key);
		} else {
			System.out.println("Cache HIT for KEY: " + key);
		}
		return o;
	}

	public Object delete(String key) {
		return getCache().delete(NAMESPACE + key);
	}

	public MemcachedClient getCache() {
		// Pick one of the pooled client connections at random
		return m[(int) (Math.random() * NUM_CLIENTS)];
	}
}

In the above code:
1. I am using the BinaryConnectionFactory (a new feature), which implements the new binary wire protocol and is more efficient than parsing the text protocol.

2. MyCache is a singleton, and it sets up 21 connections when it is instantiated.

3. My keys are of the format SACHARYA:5d41402abc4b2a76b9719d91101:key, where SACHARYA is my domain. That way I can use the same memcached server to store data for two different applications. The random string 5d41402abc4b2a76b9719d911017c592 is just for some security through obscurity, as discussed above. Finally, the key itself would be something like a userId, a username, a SQL query, or any string that uniquely identifies the data to be stored.

Sample Use:

Generally you can use caching wherever there is a bottleneck. I use it at the data access layer to save myself from making a database or webservice call. If there is computation-heavy business logic, I cache the output at the business layer. Or you can cache at the presentation layer. Or you can cache at every layer. It all depends on what you are trying to achieve.

public List<Product> getAllProducts() {
	List<Product> products = (List<Product>) MyCache.getInstance().get("AllProducts");
	if (products != null) {
		return products;
	}
	products = getAllProductsFromDB();
	if (products != null) {
		MyCache.getInstance().set("AllProducts", 3600, products);
	}
	return products;
}

public void updateProduct(String id) {
	updateProductInDB(id);
	// Invalidate the cached list so the next read fetches fresh data
	MyCache.getInstance().delete("AllProducts");
}

public void deleteProduct(String id) {
	deleteProductFromDB(id);
	MyCache.getInstance().delete("AllProducts");
}

Warming the Cache:

When the application is first started, there is nothing in the cache. So you might want to pre-warm the cache through a job scheduler, just to avoid a large number of backend calls at once. I generally like to keep this piece outside of the application itself; it could be a separate app in which you pre-warm the cache based on a hit-list of keys.

Measuring Cache Effectiveness:

The stats command provides important information about how your cache is performing. Among other parameters, it reports the total number of get requests and how many of them were hits and misses.

$ telnet localhost 11211
stats
STAT cmd_get 13219
STAT get_hits 12232
STAT get_misses 512

This means that of 13219 total cache requests, it came back with results for 12232, a cache hit rate of 12232/13219 = 92.5%, which isn't bad at all.
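The same arithmetic can be done in code. SpyMemcached exposes the raw server statistics as per-server maps of stat name to value; the helper below (my own, hypothetical) computes the hit rate from one such map:

```java
import java.util.Map;

public class CacheStats {
    // Compute the hit rate from one server's "stats" output, e.g. a
    // per-server map of stat names to string values.
    public static double hitRate(Map<String, String> stats) {
        long gets = Long.parseLong(stats.get("cmd_get"));
        long hits = Long.parseLong(stats.get("get_hits"));
        return gets == 0 ? 0.0 : (double) hits / gets;
    }
}
```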

Now once you have a general idea of your cache hit rate, you can improve it even further by logging which particular requests were missed and optimizing them over time.

You can get the memory stats using the command "stats slabs", and you can invalidate every item in the cache using "flush_all".


You should never rely only on your cache, though. If you somehow lose connectivity to your caching server, the application should perform exactly the same. You should use caching only for scalability and/or speed. Implementing the cache itself is pretty simple. The difficult parts are deciding which data to cache, how long to cache it, when to invalidate the cache, when to update stale data, and how to prevent the database from being flooded once the cache is invalidated. All of that depends on the nature of your data, how fresh you want it, and how you update it. You should keep measuring the stats and gradually improve the cache's effectiveness over time.
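One way to enforce that "the application performs exactly the same" rule is to treat every cache error as a miss. In the sketch below, SimpleCache is a made-up stand-in for the real client; the wrapper never lets a cache failure propagate into the request path:

```java
public class FailSafeCache {
    // Minimal stand-in for the real cache client.
    public interface SimpleCache {
        Object get(String key);
    }

    // On any cache failure, behave like a cache miss: return null and let
    // the caller fall back to the database or other backend.
    public static Object safeGet(SimpleCache cache, String key) {
        try {
            return cache.get(key);
        } catch (Exception e) {
            return null;
        }
    }
}
```

With this in place, losing the memcached server degrades performance but never correctness.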