Google App Engine Java and GWT Application Development

Using Memcache

Memcache is one of the App Engine services. It is a volatile-memory key-value store.
It operates over all your JVM instances; as long as an item remains in Memcache, it
can be accessed by any of your application’s processes.

Memcache contents remain indefinitely if you don’t set them to expire, but can be
removed (“evicted”) by App Engine at any time. So, never count on a Memcache
entry to exist in order for your application to work correctly. The service is meant
only as a cache that allows you quicker access to information that you would
otherwise obtain from the Datastore or have to generate. Memcache is often used
both for storing copies of data objects and for storing relatively static display
information, allowing web pages to be built more quickly.

Transactions over Memcache operations are not supported—any Memcache changes
you make within a transaction are not undone if the transaction is rolled back.

The basic Memcache operations are put, get, and delete: put stores an object in the
cache indexed by a key; get accesses an object based on its cache key; and delete
removes the object stored at a given cache key. A Memcache put is atomic—the entire
object will be stored properly or not at all. Memcache also has the ability to perform
increments and decrements of cache values as atomic operations.

Objects must be serializable in order to be stored in Memcache. At the time of
writing, Memcache has a 1MB size limit on a given cached value, and data transfer
to/from Memcache counts towards an app quota (we further discuss quotas in
Chapter 11).

Memcache has two features that can be particularly useful in organizing your cached
data—the ability to define cache namespaces and the ability to define when a cache
entry expires. We’ll use these features in Connectr.

App Engine supports two different ways to access Memcache—via an
implementation of the JCache API or via an App Engine Java API. JCache is a (not-yet-
official) proposed interface standard, JSR 107.

The Connectr app will use the App Engine’s Memcache API, which exposes a bit
more of its functionality.

 The App Engine Memcache documentation and its related links have more
 information on uses for Memcache; it also covers the JCache interface.

Using the App Engine Memcache Java API in Connectr

To access the Memcache service in Connectr, we will use the
com.google.appengine.api.memcache package.
To facilitate this, we’ll build a “wrapper” class, server.utils.cache.CacheSupport,
which does some management of Memcache namespaces, expiration times, and
exception handling. The code for the server.utils.cache.CacheSupport class is
as follows:

 import java.io.Serializable;
 import com.google.appengine.api.memcache.*;

 public class CacheSupport {

   private static MemcacheService cacheInit(String nameSpace) {
     MemcacheService memcache =
         MemcacheServiceFactory.getMemcacheService(nameSpace);
     return memcache;
   }

   public static Object cacheGet(String nameSpace, Object id) {
     Object r = null;
     MemcacheService memcache = cacheInit(nameSpace);
     try {
       r = memcache.get(id);
     }
     catch (MemcacheServiceException e) {
       // nothing can be done.
     }
     return r;
   }

   public static void cacheDelete(String nameSpace, Object id) {
     MemcacheService memcache = cacheInit(nameSpace);
     memcache.delete(id);
   }

   public static void cachePutExp(String nameSpace, Object id,
       Serializable o, int exp) {
     MemcacheService memcache = cacheInit(nameSpace);
     try {
       if (exp > 0) {
         memcache.put(id, o, Expiration.byDeltaSeconds(exp));
       }
       else {
         memcache.put(id, o);
       }
     }
     catch (MemcacheServiceException e) {
       // nothing can be done.
     }
   }

   public static void cachePut(String nameSpace, Object id,
       Serializable o) {
     cachePutExp(nameSpace, id, o, 0);
   }
 }

As seen in the cacheInit method, to use the cache, first obtain a handle to the
Memcache service via the MemcacheServiceFactory, optionally setting the
namespace to be used:

 MemcacheService memcache =
     MemcacheServiceFactory.getMemcacheService(nameSpace);

Memcache namespaces allow you to partition the cache. If the namespace is not
specified, or if it is reset to null, a default namespace is used.

Namespaces can be useful for organizing your cached objects. For example (to peek
ahead to the next section), when storing copies of JDO data objects, we’ll use the
classname as the namespace. In this way, we can always use an object’s app-assigned
String ID or system-assigned Long ID as the key without concern for key clashes.
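The classname-as-namespace idea can be sketched in plain Java (this is an invented illustration of namespace partitioning, not the Memcache API; the NamespacedCache class does not exist in the SDK):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Invented sketch: a namespace selects its own key space, so the same
// ID can be reused under different namespaces without clashing.
class NamespacedCache {
    private final Map<String, Map<Object, Object>> spaces =
        new ConcurrentHashMap<>();

    private Map<Object, Object> space(String ns) {
        return spaces.computeIfAbsent(ns, k -> new ConcurrentHashMap<>());
    }

    void put(String ns, Object key, Object value) { space(ns).put(key, value); }
    Object get(String ns, Object key)             { return space(ns).get(key); }
}
```

With this scheme, a Friend and a Message that both happen to have the system-assigned ID 42 are stored under different namespaces and never collide.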

You can reset the namespace accessed by the Memcache handle at any time
by calling:

 memcache.setNamespace(nameSpace);

Once set for a Memcache handle, the given namespace is used for the Memcache
API calls. Therefore, any subsequent gets, puts, or deletes via that handle will
access that namespace.

 As this book goes to press, a new Namespace API is now part of
 App Engine. The Namespace API supports multitenancy, allowing
 one app to serve multiple "tenants" or client organizations via the
 use of multiple namespaces to separate tenant data.
 A number of App Engine service APIs, including the Datastore and
 Memcache, are now namespace-aware, and a namespace may be set
 using a new Namespace Manager. The getMemcacheService()
 method used in this chapter, if set with a namespace, will override
 the more general settings of the Namespace Manager. So, for
 the most part, you do not want to use these two techniques
 together—that is, if you use the new Namespace API to implement
 multitenancy, do not additionally explicitly set Memcache
 namespaces as described in this chapter. Instead, leave it to the
 Namespace Manager to determine the broader namespace that you
 are using, and ensure that your cache keys are unique in a given
 "tenant" context.
 The App Engine documentation at docs/java/multitenancy/overview.html
 provides more information about multitenancy.

To store an object in Memcache, call:

 memcache.put(key, value);

where memcache is the handle to the Memcache service, and both the key and the
value may be objects of any type. The value object must be serializable. The put
method may take a third argument, which specifies when the cache entry expires.
See the documentation for more information on the different ways in which
expiration values can be specified.
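Delta-seconds expiration can be sketched in plain Java with a per-entry deadline (an invented illustration only; the TtlCache class and its injected clock are not part of the SDK, and App Engine's Expiration class supports additional forms):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

// Invented sketch of delta-seconds expiration: each entry records a
// deadline, and an expired entry behaves exactly like a cache miss.
class TtlCache {
    private static final class Entry {
        final Object value;
        final long deadlineMillis;
        Entry(Object v, long d) { value = v; deadlineMillis = d; }
    }

    private final Map<Object, Entry> store = new ConcurrentHashMap<>();
    private final LongSupplier clock;   // injectable so tests can control time

    TtlCache(LongSupplier clock) { this.clock = clock; }

    void put(Object key, Object value, int deltaSeconds) {
        long deadline = deltaSeconds > 0
            ? clock.getAsLong() + deltaSeconds * 1000L
            : Long.MAX_VALUE;           // 0 means "no expiration"
        store.put(key, new Entry(value, deadline));
    }

    Object get(Object key) {
        Entry e = store.get(key);
        if (e == null || clock.getAsLong() >= e.deadlineMillis) return null;
        return e.value;
    }
}
```

Injecting the clock is a test convenience; the real service decides eviction timing on the server side.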

To retrieve an object with a given key from Memcache, call:

 Object r = memcache.get(key);

where again memcache is the handle to the Memcache service. If the object is not
found, get will return null.

To delete an object with a given key from Memcache, call:

 memcache.delete(key);

If these operations encounter a Memcache service error, they may throw a
MemcacheServiceException. It is usually a good idea to just catch any Memcache-
generated exceptions so that a cache problem does not disrupt the rest of
your processing.
Thus, the cacheGet, cacheDelete, and cachePut/cachePutExp methods of
CacheSupport create a namespace-specific handler based on their namespace
argument, perform the specified operation in the context of that namespace, and
catch any MemcacheServiceExceptions thrown. The cachePutExp method takes an
expiration time, in seconds, and sets the cached object to expire accordingly.

CacheSupport requires the cache value argument to implement Serializable (if
the wrapper class had not imposed that requirement, a put error would be thrown if
the value were not Serializable).

Memcache error handlers

The default error handler for the Memcache service is the
LogAndContinueErrorHandler, which just logs service errors instead of throwing
them. The result is that service errors act like cache misses. So if you use the default
error handler, MemcacheServiceException will in fact not be thrown. However,
it is possible to set your own error handler, or to use the StrictErrorHandler,
which will throw a MemcacheServiceException for any service error. See the
package documentation (package-summary.html) for more information.
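The behavioral difference between the two handlers can be sketched as follows (an invented plain-Java illustration of the log-and-continue versus strict policies, not the App Engine classes themselves):

```java
// Invented sketch: a "log and continue" policy turns a service error into
// a cache miss, while a "strict" policy propagates it to the caller.
interface CacheErrorPolicy {
    void handleServiceError(RuntimeException e);
}

class LogAndContinuePolicy implements CacheErrorPolicy {
    public void handleServiceError(RuntimeException e) {
        // Log and swallow: the caller just sees a miss.
        System.err.println("memcache error (continuing): " + e.getMessage());
    }
}

class StrictPolicy implements CacheErrorPolicy {
    public void handleServiceError(RuntimeException e) {
        throw e;   // surface the failure, as StrictErrorHandler does
    }
}

class GuardedCache {
    private final CacheErrorPolicy policy;
    GuardedCache(CacheErrorPolicy policy) { this.policy = policy; }

    // Simulate a get whose backing service call fails.
    Object getWithFailure(Object key) {
        try {
            throw new RuntimeException("service unavailable");
        } catch (RuntimeException e) {
            policy.handleServiceError(e);
            return null;   // under log-and-continue, the error acts like a miss
        }
    }
}
```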

Memcache statistics

It is possible to access statistics on Memcache use. Using the
appengine.api.memcache API, you can get information about things such as the
number of cache hits and misses, the number and size of the items currently in the
cache, the age of the least-recently accessed cache item, and the total size of data
returned from the cache.

The statistics are gathered over the service’s current uptime (and you cannot explicitly
reset them), but they can be useful for local analysis and relative comparisons.

Atomic increment/decrement of Memcache values

Using the API, it is possible to perform
atomic increments and decrements on cache values. That is, the read of the value, its
modification, and the storage of the new value can be performed atomically, so that
no other process may update the value between the time it is read and the time it is
updated. Because Memcache operations cannot be a part of regular transactions, this
can be a useful feature. For example, it can allow the implementation of short-term
volatile-memory locks. Just remember that the items in the cache can be evicted by
the system at any time, so you should not depend upon any Memcache content for
the correct operation of your app.

The atomic increments and decrements are performed using the variants of the
increment() and incrementAll() methods of MemcacheService. You specify the
delta by which to increment and can perform a decrement by passing a negative delta.
See the documentation for more information.
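The read-modify-write guarantee can be illustrated in plain Java with ConcurrentHashMap.merge, which applies the update atomically per key (an invented sketch of the semantics, not the MemcacheService.increment implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Invented sketch of an atomic counter cache: merge() reads the old value,
// applies the delta, and stores the result as one atomic step per key, so
// no other thread can interleave between the read and the write.
class CounterCache {
    private final Map<Object, Long> counters = new ConcurrentHashMap<>();

    long increment(Object key, long delta) {
        return counters.merge(key, delta, Long::sum);
    }
}
```

As with the real service, a negative delta performs a decrement.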

Using Memcache with JDO data objects

One common use of the Memcache service is to cache copies of persistent data
objects in volatile memory so that you don’t always have to make a more time-
consuming Datastore fetch to access them: you can check the cache for the object
first, and only if you have a cache miss do you need to access the Datastore. Objects
must be serializable in order to be stored in Memcache, so any such cached data
classes must implement Serializable. When storing a JDO object in Memcache, you
are essentially storing a detached copy, so be sure to prefetch any lazily loaded fields
that you want to include in the cached object before storing it.

When using Memcache to cache data objects, be aware that there is no way to
guarantee that related Memcache and Datastore operations always happen
together—you can’t perform the Memcache operations under transactional control,
and a Memcache access might transiently fail (this is not common, but is possible),
leaving you with stale cached data.

Thus, it is not impossible for a Memcache object to get out of sync with its Datastore
counterpart. Typically, the speedup benefits of using Memcache far outweigh
such disadvantages. However, you may want to give all of your cached objects an
expiration date. This helps the cache “re-sync” after a period of time if any
inconsistencies do occur.

The pattern of cache usage for data objects is typically as follows, depending upon
whether or not an object is being accessed in a transaction.

  • Within a transaction

    When accessing an object from within a transaction, you should not use the
    cached version of that object, nor update the cache inside the transaction.
    This is because Memcache is not under transactional control. If you were to
    update the cache within a transactional block, and then the transaction failed
    to commit, the Memcache data would be inconsistent with the Datastore. So
    when you access objects inside a transaction, purge the cache of these objects.

    Post-transaction, you can cache a detached copy of such an object, once you
    have determined that the commit was successful.

  • Outside a transaction

    If a Datastore access is not under transactional control, this means that it is
    not problematic to have multiple processes accessing that object at the same
    time. In that case, you can use Memcache as follows:

    When reading an object: first check to see if the object is in the cache; if not,
    then fetch it from the Datastore and add it to the cache.

    When creating or modifying an object: save it to the Datastore first, then
    update the cache if the Datastore operation was successful.

    When deleting an object: delete from the cache first, then delete from
    the Datastore.

In all cases, be sure to catch any errors thrown by the Memcache service so that they
do not prevent you from doing your other work.
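The non-transactional read/update/delete ordering above can be sketched against a Map-backed stand-in for the Datastore (everything here is invented for illustration; in Connectr the cache side would go through CacheSupport and the store side through JDO):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Invented cache-aside sketch: cache first on reads, datastore first on
// writes, cache first on deletes.
class CacheAsideRepo {
    private final Map<Object, Object> cache = new ConcurrentHashMap<>();
    private final Map<Object, Object> datastore = new ConcurrentHashMap<>();

    Object read(Object id) {
        Object o = cache.get(id);              // 1. check the cache
        if (o == null) {
            o = datastore.get(id);             // 2. on a miss, fetch from the store
            if (o != null) cache.put(id, o);   // 3. and add it to the cache
        }
        return o;
    }

    void save(Object id, Object o) {
        datastore.put(id, o);                  // datastore first...
        cache.put(id, o);                      // ...then update the cache
    }

    void delete(Object id) {
        cache.remove(id);                      // cache first...
        datastore.remove(id);                  // ...then the datastore
    }
}
```

Writing the datastore before the cache (and deleting from the cache before the datastore) keeps a failure partway through from leaving the cache claiming data the store no longer holds.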

When using Memcache to store data objects, it can be useful to employ some form of
caching framework, so that you do not have to add object cache management code
for every individual method and access. In the next section, we will look at one way
to do this—using capabilities provided by JDO.
