Published on ONJava.com (http://www.onjava.com/)


Two Servlet Filters Every Web Application Should Have

by Jayson Falkner
11/19/2003

Almost every single web application you will ever make will seriously benefit from using servlet filters to both cache and compress content. A caching filter optimizes the time it takes to send back a response from your web server, and a compression filter optimizes the size of the content that you send from your web server to a user via the Internet. Since generating content and sending content over the World Wide Web are the bread and butter of web applications, it should be no surprise that simple components that aid in these processes are incredibly useful. This article details the process of building and using a caching filter and a compression filter that are suitable for use with just about any web application. After reading this article, you will understand caching and compressing, have code to do both, and be able to apply caching and compression to any of your future (or existing!) web applications.

Review: Servlet Filters in 10 Sentences

Servlet filters are powerful tools available to web application developers using the Servlet 2.3 specification or above. Filters are designed to manipulate a request or response (or both) sent to a web application, yet provide this functionality in a manner that won't affect the servlets and JSPs used by the web application unless that is the desired effect. A good way to think of servlet filters is as a chain of steps that a request and response must go through before reaching a servlet, JSP, or static resource such as an HTML page in a web application. Figure 1 shows the commonly used illustration of this concept.

Figure 1. The servlet filter concept

The large gray box is a web application that has some endpoints, such as JSP, and some filters applied to intercept all requests and responses. The filters are shown in a stack, three high, that each request and response must pass through before reaching an endpoint. At each filter, custom Java code would have a chance to manipulate the request or response, or anything that has to do with either of those objects.

Understand that a user's request for a web application resource can be forced to go through any number of filters, in a given order, and any of the filters may manipulate the request, including stopping it altogether, and respond in a variety of different ways. This is important to understand because later in this article, two filters will be presented that manipulate the HttpServletRequest and HttpServletResponse objects to provide some very convenient functionality. Don't worry if you don't know anything about coding a filter -- it would certainly help if you understood the code, but at the end of the article, all of the code is provided in a JAR that can easily be used without knowing a thing about how it was made.

Before moving on, if you would like to learn more about the basics of servlet filters, I suggest checking out Servlets and JavaServer Pages; the J2EE Web Tier. It is a Servlet 2.4 and JSP 2.0 book I co-authored with Kevin Jones, and the book provides complete coverage of servlets and servlet filters, including the two filters presented later in this article. It would be nice if you bought the book, but the chapters on servlets and filters will soon be available for free on the book support site -- if they're online already, read away.

Compressing Content Using a Servlet Filter

Compression is the act of removing redundant information, representing what you want in as little space as possible. It is incredibly helpful for sending information across the World Wide Web, because the speed at which people get information from a web application is almost always dependent on how much information you are trying to send. The smaller the size of your information, the faster it can all be sent. Therefore, if you compress the content your web application generates, it will get to a user faster and appear to be displayed on the user's screen faster. In practice, simply applying compression to a decent-sized web page often results in saving several seconds of time.

Now, the theory is nice, but the practice is nicer. This theoretical compression isn't something you have to labor over each time you go to code a servlet, JSP, or any other part of a web application. You can obtain very effective compression by having a servlet filter conditionally pipe whatever your web application produces through a GZIP-compressed stream. Why GZIP? Because the HTTP protocol, the protocol used to transmit web pages, allows for GZIP compression. Why conditionally? Because not every browser supports GZIP compression, but almost every single modern web browser does. If you blindly send GZIP-compressed content to an old browser, the user might get nothing but gibberish. Since checking for GZIP compression support is nearly trivial, it is no problem to have a filter send GZIP-compressed content to only those users that can handle it.
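Before wiring GZIP into a filter, it helps to see java.util.zip do its job in isolation. Here is a small stand-alone sketch (the class and method names are mine, not from the filter code below) that compresses some repetitive markup exactly the way the filter will compress a response body:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
  // Compress a byte array with GZIP, as the filter does
  // with a response body.
  static byte[] gzip(byte[] plain) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    GZIPOutputStream out = new GZIPOutputStream(baos);
    out.write(plain);
    out.close(); // must finish the stream before using the bytes
    return baos.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    // Redundant text, like most HTML, compresses very well.
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 200; i++) {
      sb.append("<tr><td>row</td><td>data</td></tr>\n");
    }
    byte[] plain = sb.toString().getBytes("UTF-8");
    byte[] packed = gzip(plain);
    System.out.println(plain.length + " bytes -> " + packed.length + " bytes");
  }
}
```

Run it and you will see the compressed size come out at a small fraction of the original, which is the entire payoff of the filter below.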

Source Code

Download jspbook.zip for all of the source code found in this article. Also download jspbook.jar for the ready-to-use JAR with compiled versions of both the cache and compression filter.

I'm saying this GZIP compression stuff is good. But how good? GZIP compression will usually get you around a 6:1 compression ratio; it depends on how much content you are sending and what the content is. In practice, this means you will send content to a user up to six times faster if you simply use GZIP compression whenever you can. The only trick is that you need to be able to convert normal content into GZIP-compressed content. Thankfully, the standard Java API provides code for doing exactly this: the java.util.zip package. The task is as easy as conditionally sending a web application's response output through the java.util.zip.GZIPOutputStream class. Here is some code for doing exactly that.

As with most filters, three classes are needed to do the job: a customized implementation of the javax.servlet.Filter interface, a customized implementation of the javax.servlet.ServletOutputStream class, and a customized wrapper of the javax.servlet.http.HttpServletResponse interface. Full source code for these three classes is provided at the end of the article; for now I will focus only on the relevant code. First, a check needs to be made to see whether a user has support for GZIP-compressed content. This check is best done in the implementation of the Filter class.

...
public class GZIPFilter implements Filter {

  // custom implementation of the doFilter method
  public void doFilter(ServletRequest req,
                       ServletResponse res,
                       FilterChain chain)
      throws IOException, ServletException {
      
    // make sure we are dealing with HTTP
    if (req instanceof HttpServletRequest) {
      HttpServletRequest request =
          (HttpServletRequest) req;
      HttpServletResponse response =
          (HttpServletResponse) res;
      // check for the HTTP header that
      // signifies GZIP support
      String ae = request.getHeader("accept-encoding");
      if (ae != null && ae.indexOf("gzip") != -1) {
        // the browser accepts GZIP, so wrap the response
        // and compress everything written to it
        GZIPResponseWrapper wrappedResponse =
            new GZIPResponseWrapper(response);
        chain.doFilter(req, wrappedResponse);
        wrappedResponse.finishResponse();
        return;
      }
      chain.doFilter(req, res);
    } else {
      // not HTTP; pass the request through untouched
      chain.doFilter(req, res);
    }
  }

Information about GZIP support is conveyed using the accept-encoding HTTP header. This header can be accessed using the HttpServletRequest object's getHeader() method. The conditional part of the code need be nothing more than an if statement that either sends the response as is or sends the response off to be GZIP compressed.

The next important part of the GZIP filter code is compressing normal content with GZIP compression. This code executes after the above filter has found that the user does know how to handle GZIP-compressed content, and the code is best placed in a customized version of the ServletOutputStream class. Normally, the ServletOutputStream class handles sending text or non-text content to a user while ensuring appropriate character encoding is used. However, we want to have the ServletOutputStream class send content through a GZIPOutputStream before sending it to a client. This can be accomplished by overriding the write() methods of the ServletOutputStream class to GZIP content before sending it off in an HTTP response.

...
  public GZIPResponseStream(HttpServletResponse response)
      throws IOException {
    super();
    closed = false;
    this.response = response;
    this.output = response.getOutputStream();
    baos = new ByteArrayOutputStream();
    gzipstream = new GZIPOutputStream(baos);
  }
...
  public void write(int b) throws IOException {
    if (closed) {
      throw new IOException(
          "Cannot write to a closed output stream");
    }
    gzipstream.write((byte)b);
  }

  public void write(byte b[]) throws IOException {
    write(b, 0, b.length);
  }

  public void write(byte b[], int off, int len)
      throws IOException {
    if (closed) {
      throw new IOException(
          "Cannot write to a closed output stream");
    }
    gzipstream.write(b, off, len);
  }
...

There are also a few loose ends to tie up, such as setting the Content-Encoding response header to gzip (the content's MIME type stays the same), and ensuring the subclass of ServletOutputStream has implementations of the flush() and close() methods that work with the changes to the write() methods. However, all of these changes are minor, and you may see code that does them by looking at the source code provided at the end of this article. The most important point to understand is that the filter has altered the response to ensure that all content is GZIP compressed.
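The close() detail matters more than it might appear: a GZIPOutputStream buffers data and appends a trailer, so until finish() (or close()) is called, the compressed stream is incomplete and a browser would see a truncated response. Here is a hedged stand-alone sketch of the round trip -- the class name is mine, and GZIPInputStream stands in for the browser doing the decompression:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class FinishDemo {
  // A browser that sent "Accept-Encoding: gzip" decompresses
  // the body; GZIPInputStream plays that role here.
  static byte[] gunzip(byte[] packed) throws IOException {
    GZIPInputStream in =
        new GZIPInputStream(new ByteArrayInputStream(packed));
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    for (int b = in.read(); b != -1; b = in.read()) {
      baos.write(b);
    }
    return baos.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    GZIPOutputStream gzip = new GZIPOutputStream(baos);
    gzip.write("<html><body>Hello</body></html>".getBytes("UTF-8"));
    // Without finish() the trailer is missing and the client
    // would see a corrupt stream; this is what the wrapper's
    // finishResponse() must guarantee before the response ends.
    gzip.finish();
    byte[] packed = baos.toByteArray();
    System.out.println(new String(gunzip(packed), "UTF-8"));
  }
}
```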

Test out the above code by grabbing a copy of jspbook.jar, which includes compiled classes of the GZIP filter, and putting the JAR in the WEB-INF/lib directory of your favorite web application. Next, deploy the com.jspbook.GZIPFilter class to intercept all requests to resources ending with ".jsp" or anything that produces HTML. Reload the web application for the changes to take effect. The GZIP filter should now be automatically compressing all responses, if the user's browser supports GZIP compression.

Unfortunately, there is no good way to tell from your browser whether the content it displays was GZIP compressed. If the filter is working correctly, a user cannot notice whether or not the content was compressed. In order to test out the compression, we need to spoof some HTTP requests and see what exactly is returned.

This can be done relatively easily by tossing some code in a JSP; here is such a JSP.

<%@ page import="java.util.*,
                 java.net.*,
                 java.io.*" %>
<%
String url = request.getParameter("url");
if (url != null) {
  URL noCompress = new URL(url);
  HttpURLConnection huc =
   (HttpURLConnection)noCompress.openConnection();
  huc.setRequestProperty("user-agent",
                         "Mozilla(MSIE)");
  huc.connect();
  ByteArrayOutputStream baos =
    new ByteArrayOutputStream();
  InputStream is = huc.getInputStream();
  // read one byte at a time; calling read() twice per
  // loop would silently drop every other byte
  for (int i = is.read(); i != -1; i = is.read()) {
    baos.write(i);
  }
  byte[] b1 = baos.toByteArray();

  URL compress = new URL(url);
  HttpURLConnection hucCompress =
   (HttpURLConnection)compress.openConnection();
  hucCompress.setRequestProperty("accept-encoding",
                                 "gzip");
  hucCompress.setRequestProperty("user-agent",
                                 "Mozilla(MSIE)");
  hucCompress.connect();
  ByteArrayOutputStream baosCompress =
    new ByteArrayOutputStream();
  InputStream isCompress =
    hucCompress.getInputStream();
  for (int i = isCompress.read(); i != -1;
       i = isCompress.read()) {
    baosCompress.write(i);
  }
  byte[] b2 = baosCompress.toByteArray();
  request.setAttribute("t1",
                       new Integer(b1.length));
  request.setAttribute("t2",
                       new Integer(b2.length));
}
request.setAttribute("url", url);
%>
<html>
<head>
  <title>Cache Test</title>
</head>
<body>
<h1>Cache Test Page</h1>
Enter a URL to test.
<form method="POST">
<input name="url" size="50">
<input type="submit" value="Check URL">
</form>
 <p><b>Testing: ${url}</b></p>
 Request 1: ${t1} bytes<br/>
 Request 2: ${t2} bytes<br/>
 Space saved: ${t1-t2} bytes
   or ${(1-t2/t1)*100}%<br/>
</body>
</html>

Save the above JSP in the same web application, and browse to the page. You will see an HTML form that requests a URL; it will resemble the following:

Figure 2. Blank compression test page

The JSP works by taking a URL and spoofing two HTTP requests to that URL. One request spoofs an HTTP request that doesn't accept GZIP content. The other request spoofs an HTTP request that does accept GZIP-compressed content. Compression is quantified by counting the number of bytes each request returned. If the given URL (i.e., web app) provides support for GZIP compression, the second request should be smaller. Try out the JSP by filling in the form; any URL will do, but something you know has GZIP support is ideal, say http://www.jspbook.com. Here are the results for supplying http://www.jspbook.com as the URL.

Figure 3. Compression test page, using the news page of http://www.jspbook.com

The compression saved just over 60% of the bytes. That means the compressed content reached my web browser in well under half the time, allowing it to start rendering the page that much sooner. The end result is that a user would probably see this page about twice as fast as a non-GZIP-compressed page. However, the benefit is not only on the user's end. The web application sending this page is sitting on a server, and that server only has so much bandwidth to work with. Reducing the number of bytes you need to transmit per page also means you can have one server sending out more pages using the same amount of bandwidth. The tradeoff is processing power (running the compression algorithm) versus the number of bytes you need to send to a user. In almost every case, your server will have a far faster processor than a network card, and it is well worth always trying to compress content -- especially if you also cache compressed content (we'll see how to do this a little later).

Compression Filtering Tips

You should almost always try to compress content. Compression is good for the bandwidth of both the end user and your server. However, like most things, it pays not to blindly compress everything your web application produces. Compression works by eliminating redundancies in content. Text content (basically anything a JSP produces) can usually be compressed quite well, but already-compressed content (e.g., a JPG image) or randomized content (e.g., anything encrypted) cannot. For already-compressed or randomized content, it usually does not pay to apply something such as a GZIP filter: you will spend a noticeable amount of processing power to achieve an unnoticeable amount of compression. For general use, only apply a compression filter to the resources in the web application that produce text, namely JSP and HTML pages. Deploying the compression code presented by this article is as easy as putting jspbook.jar in a web application's WEB-INF/lib directory and adding the following lines to web.xml.

  <filter>
    <filter-name>Compress</filter-name>
    <filter-class>com.jspbook.GZIPFilter</filter-class>
  </filter>

  <filter-mapping>
    <filter-name>Compress</filter-name>
    <url-pattern>*.jsp</url-pattern>
  </filter-mapping>
  <filter-mapping>
    <filter-name>Compress</filter-name>
    <url-pattern>*.html</url-pattern>
  </filter-mapping>

Not bad at all! Especially considering you can add the given compression filter to just about any web application you have ever made and you will reap the rewards of compression instantly. Note: the given code is completely free for use both commercially and non-commercially, and you need not give any credit or reference to where you got the code. However, if you like how it works, hopefully you will pick up a copy of Servlets and JSP; the J2EE Web Tier and encourage more code like it to be made.
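The earlier tip about skipping already-compressed or randomized content is easy to verify with java.util.zip. The following stand-alone sketch (class name is mine) compresses a block of repetitive text and an equally sized block of random bytes, which behave like a JPG or encrypted payload:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class CompressibilityDemo {
  static byte[] gzip(byte[] data) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    GZIPOutputStream out = new GZIPOutputStream(baos);
    out.write(data);
    out.close();
    return baos.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] text = new byte[10000];
    Arrays.fill(text, (byte) 'a'); // stand-in for repetitive HTML

    byte[] random = new byte[10000];
    new Random(42).nextBytes(random); // stand-in for a JPG or encrypted data

    System.out.println("text:   " + gzip(text).length + " bytes");
    System.out.println("random: " + gzip(random).length + " bytes");
  }
}
```

The repetitive block shrinks to a tiny fraction of its size, while the random block actually grows slightly, because GZIP adds header and trailer bytes to data it cannot compress. That is exactly why mapping the filter only to *.jsp and *.html is the right default.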

Another point to always remember about compression filters, especially GZIP, is that not all web browsers are able to understand the format. Be sure to always explicitly check, as the above code did, before sending back compressed content. While this may seem intuitive, it is really easy to forget this point when working with more than one filter. For example, next, a cache filter is introduced that keeps a copy of a response so that it can be reused for other users that later request the same resource. If you were to place the cache filter before the compression filter, then you might cache a GZIP-compressed response. The next time someone requested the same resource, the cached copy would be returned by the cache filter -- without performing a check to see if the user supports GZIP compression. Trouble would ensue if the cache filter ended up returning GZIP-compressed content to a browser that didn't understand the GZIP format.
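If you deploy both filters from jspbook.jar, one way to avoid that trap is to map the compression filter ahead of the cache filter: the container builds the chain in the order the filter-mapping elements appear in web.xml, so the cache filter then sees, and stores, only uncompressed content, while compression is still applied per-request on the way out. A sketch of the relevant web.xml (the filter names are just the ones used elsewhere in this article):

```xml
  <filter>
    <filter-name>Compress</filter-name>
    <filter-class>com.jspbook.GZIPFilter</filter-class>
  </filter>
  <filter>
    <filter-name>CacheFilter</filter-name>
    <filter-class>com.jspbook.CacheFilter</filter-class>
  </filter>

  <!-- Filters run in the order their mappings appear, so the
       compression filter wraps the cache filter and the cache
       only ever stores uncompressed content. -->
  <filter-mapping>
    <filter-name>Compress</filter-name>
    <url-pattern>*.jsp</url-pattern>
  </filter-mapping>
  <filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>*.jsp</url-pattern>
  </filter-mapping>
```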

The take-home message is that a GZIP compression filter (or any compression filter, for that matter) is a powerful tool that is easy to use and that you should almost always use. By compressing content, you can optimize the amount of bytes that need be sent across the World Wide Web, resulting in benefits for both your server and the user.

Caching Content Using a Servlet Filter

The second filter this article addresses is a cache filter. Caching is helpful because it saves time and processing power. The basic idea is that it takes time for a web application to generate content, and in many situations, the content won't change between different requests to a particular servlet or JSP. Therefore, if you simply save the exact output (e.g., HTML) that is produced for a given URI, you can recycle this content several times before having the web application generate it again. Assuming your cache is faster than the web application -- it almost always is -- the end result is that you save a large amount of the time and processing power required to generate a dynamic response. Currently, there is no official standard for caching web application content. However, building a simple, generic caching system is a straightforward process.

We will now begin to discuss building a simple cache filter. In general, caching at the filter level is most helpful, as it allows you to save the entire response any particular JSP or servlet generates. However, it is worth considering that you can certainly try to cache elsewhere; for instance, using a set of custom tags that auto-cache any content placed between them, or using a custom Java class to cache information retrieved from a database. Caching possibilities are endless, but for practical purposes we shall focus on implementing caching at the filter level.

Before seeing some code, let's make sure what I mean by "caching at the filter level" is clear. "Caching at the filter level" simply means using a standard servlet filter that will intercept all requests to a web application and attempt to intelligently use the cache. Should a valid cached copy of content exist in a cache, the filter will immediately respond to the request by sending a copy of the cache. However, if no cache exists, the filter will pass the request on to its intended endpoint, usually a servlet or JSP, and the response will be generated as it normally is. Once a response is successfully generated, it will also be cached, so that on future requests to the same resource, the cache may be used.

Understand that as this filter is intended to be used on an entire web application, it can cache all of the various responses from different servlets and JSPs. Think about how this is possible: each servlet or JSP will likely produce a different response. The filter will need to be able to distinguish between different responses, store the appropriate content somewhere, and correctly match a cached copy of the content to an incoming request. Doing all of this is no problem at all -- different requests can almost always be distinguished by the requested URI, and the same information can be used to identify cached resources. Cached content can be stored either in memory, on the hard disk, or via any other method your server allows for -- usually, the hard disk is a great solution. With all of that said, here is the code for a filter that caches content in the web application's temporary directory. The full code is given below, and important parts of the code are highlighted after the listing.

package com.jspbook;

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.util.Calendar;

public class CacheFilter implements Filter {
  ServletContext sc;
  FilterConfig fc;
  long cacheTimeout = Long.MAX_VALUE;

  public void doFilter(ServletRequest req,
                       ServletResponse res,
                       FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest request =
        (HttpServletRequest) req;
    HttpServletResponse response =
        (HttpServletResponse) res;

    // check if this is a resource that shouldn't be cached
    String r = sc.getRealPath("");
    String path =
        fc.getInitParameter(request.getRequestURI());
    if (path != null && path.equals("nocache")) {
      chain.doFilter(request, response);
      return;
    }
    // only prefix the real path when an init-param supplied
    // one; otherwise path stays null and is resolved below
    if (path != null) {
      path = r + path;
    }

    String id = request.getRequestURI() + 
        request.getQueryString();
    File tempDir = (File)sc.getAttribute(
      "javax.servlet.context.tempdir");

    // get possible cache
    String temp = tempDir.getAbsolutePath();
    File file = new File(temp+id);

    // get current resource
    if (path == null) {
      path = sc.getRealPath(request.getRequestURI());
    }
    File current = new File(path);

    try {
      long now =
        Calendar.getInstance().getTimeInMillis();
      //set timestamp check
      if (!file.exists() || (file.exists() &&
          current.lastModified() > file.lastModified()) ||
          cacheTimeout < now - file.lastModified()) {
        String name = file.getAbsolutePath();
        name =
            name.substring(0,name.lastIndexOf("/"));
        new File(name).mkdirs();
        ByteArrayOutputStream baos =
            new ByteArrayOutputStream();
        CacheResponseWrapper wrappedResponse =
          new CacheResponseWrapper(response, baos);
        chain.doFilter(req, wrappedResponse);

        FileOutputStream fos = new FileOutputStream(file);
        fos.write(baos.toByteArray());
        fos.flush();
        fos.close();
      }
    } catch (ServletException e) {
      if (!file.exists()) {
        throw new ServletException(e);
      }
    }
    catch (IOException e) {
      if (!file.exists()) {
        throw e;
      }
    }

    FileInputStream fis = new FileInputStream(file);
    String mt = sc.getMimeType(request.getRequestURI());
    response.setContentType(mt);
    ServletOutputStream sos = res.getOutputStream();
    for (int i = fis.read(); i != -1; i = fis.read()) {
      sos.write(i);
    }
    fis.close();
  }

  public void init(FilterConfig filterConfig) {
    this.fc = filterConfig;
    String ct =
        fc.getInitParameter("cacheTimeout");
    if (ct != null) {
      cacheTimeout = 60*1000*Long.parseLong(ct);
    }
    this.sc = filterConfig.getServletContext();
  }

  public void destroy() {
    this.sc = null;
    this.fc = null;
  }
}

First note that the code is part of the com.jspbook package. This code is the Servlet-2.4-compliant cache filter that is detailed in the book. It is tested code that is used in several web applications, and is maintained at the book's support site, http://www.jspbook.com. This is no contrived example; it is serious code.

The next thing I'd like to draw attention to is how the filter identifies caches and saves them to the local hard disk. As mentioned before the code, the filter uses the request URI and any parameters in the query string to generate a unique name for the cache.

String id = request.getRequestURI()+request.getQueryString();

Once the filter has this unique name, it uses the name to check if the resource exists in the web application's cache. If it does, the cached copy is sent and the filter does not pass the request and response down the filter chain. If no cache exists, the filter passes the request and response down the filter chain so that the desired JSP or servlet can generate a response. Once the response is made, the cache filter sends it to the client and makes a copy of the response in the web application's cache.

// use the web applications temporary work directory
File tempDir =
    (File)sc.getAttribute("javax.servlet.context.tempdir");

// look to see if a cached copy of the response exists
String temp = tempDir.getAbsolutePath();
File file = new File(temp+id);

// get a reference to the servlet/JSP
// responsible for this cache
if (path == null) {
  path = sc.getRealPath(request.getRequestURI());
}
File current = new File(path);

// check if the cache exists and is newer than the
// servlet or JSP responsible for making it. 
try {
  long now = Calendar.getInstance().getTimeInMillis();
  //set timestamp check
  if (!file.exists() || (file.exists() &&
      current.lastModified() > file.lastModified()) ||
      cacheTimeout < now - file.lastModified()) {

    // if not, invoke chain.doFilter() and
    // cache the response
    String name = file.getAbsolutePath();
    name = name.substring(0,name.lastIndexOf("/"));
    new File(name).mkdirs();
    ByteArrayOutputStream baos =
        new ByteArrayOutputStream();
    CacheResponseWrapper wrappedResponse =
      new CacheResponseWrapper(response, baos);
    chain.doFilter(req, wrappedResponse);

    FileOutputStream fos = new FileOutputStream(file);
    fos.write(baos.toByteArray());
    fos.flush();
    fos.close();
  }
} catch (ServletException e) {
  if (!file.exists()) {
    throw new ServletException(e);
  }
}
catch (IOException e) {
  if (!file.exists()) {
    throw e;
  }
}

// return the cached resource to the client.
FileInputStream fis = new FileInputStream(file);
String mt = sc.getMimeType(request.getRequestURI());
response.setContentType(mt);
ServletOutputStream sos = res.getOutputStream();
for (int i = fis.read(); i != -1; i = fis.read()) {
  sos.write(i);
}
fis.close();

And that is a basic cache filter. Two support classes are needed -- CacheResponseStream and CacheResponseWrapper -- but they are nothing more than implementations of the ServletOutputStream class and HttpServletResponseWrapper class that are appropriate for CacheFilter.java. The full source code for everything is given at the end of this article, but to keep things moving along, I'll have you use a JAR file that includes the compiled cache filter. If you didn't already grab a copy of jspbook.jar for the compression filter, do so now, put it in the WEB-INF/lib directory of your favorite web application, deploy the filter to intercept all requests going to resources ending in .jsp, and reload the web application for the changes to take effect. Next we will make a simple JSP to test the code.
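An aside before the test, for the curious: the heart of a response wrapper like CacheResponseWrapper is simply an output stream that remembers what was written to it. Here is a minimal stdlib sketch of the idea -- a stream that tees output into a buffer while still forwarding it -- with names of my own invention; the book's actual classes differ in their details:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Captures everything written to a stream so it can be replayed
// later -- the core trick behind a caching response wrapper.
public class TeeOutputStream extends FilterOutputStream {
  private final ByteArrayOutputStream copy = new ByteArrayOutputStream();

  public TeeOutputStream(OutputStream out) {
    super(out);
  }

  public void write(int b) throws IOException {
    copy.write(b); // keep a copy for the cache
    out.write(b);  // and still send it to the client
  }

  public byte[] getCopy() {
    return copy.toByteArray();
  }
}
```

In CacheFilter.java the captured bytes are written to a file in the temporary directory; on later requests that file is replayed without touching the endpoint at all.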

Here is the complete code for a simple JSP that tests the cache filter. The JSP wastes time and processing power by executing several loops. Save the following as TimeMonger.jsp somewhere in your web application.

<html>
  <head>
    <title>Cache Filter Test</title>
  </head>
  <body>
A test of the cache Filter.
<%
 // mock time-consuming code
 for (int i=0;i<100000;i++) {
   for (int j=0;j<1000;j++) {
     //noop
   }
 }
%>
  </body>
</html>

Browse to TimeMonger.jsp for the ever-so-sophisticated cache test. Notice how long it takes to generate the page; it should take several seconds due to the embedded for loops. Now browse to the page once again; notice that it appears near-instantly. Continue browsing to the page and notice it will continue to appear near-instantly. This is the cache filter in action. After the page is generated once, a copy is saved in your web application's temporary work directory (on Tomcat, this is in a subdirectory of ./work), and on subsequent requests, this cache is used instead of executing the JSP. You can test this by deleting the cache file located in your web application's temporary directory, and browsing to the page. Once again it will take several seconds to load. We can quantify the time difference by making a simple JSP that spoofs two HTTP requests and measures the time it takes for each request to be answered. To ensure that the test works, we will have to delete the cache before running the JSP. This will force the first HTTP request to execute the JSP and allow the second request to hit the cache. Here is the code for the needed JSP.

<%@ page import="java.util.*,
                 java.net.*,
                 java.io.*" %>
<%
  String url = request.getParameter("url");
  long[] times = new long[2];
  if (url != null) {
    for (int i=0;i<2;i++) {
      long start =
        Calendar.getInstance().getTimeInMillis();
      URL u = new URL(url);
      HttpURLConnection huc =
        (HttpURLConnection)u.openConnection();
      huc.setRequestProperty("user-agent",
                             "Mozilla(MSIE)");
      huc.connect();
      ByteArrayOutputStream baos =
        new ByteArrayOutputStream();
      InputStream is = huc.getInputStream();
      // read one byte at a time without skipping any
      for (int i = is.read(); i != -1; i = is.read()) {
        baos.write(i);
      }
      long stop =
        Calendar.getInstance().getTimeInMillis();
      times[i] = stop-start;
    }
  }
  request.setAttribute("t1", new Long(times[0]));
  request.setAttribute("t2", new Long(times[1]));
  request.setAttribute("url", url);

%><html>
<head>
  <title>Cache Test</title>
</head>
<body>
<h1>Cache Test Page</h1>
Enter a URL to test.
<form method="POST">
<input name="url" size="50">
<input type="submit" value="Check URL">
</form>
 <p><b>Testing: ${url}</b></p>
 Request 1: ${t1} milliseconds<br/>
 Request 2: ${t2} milliseconds<br/>
 Time saved: ${t1-t2} milliseconds<br/>
</body>
</html>

Save the above code in your web application. Next, delete the temporary directory of that web application in order to ensure that there is no cache. Now browse to the cache test page. Initially, a blank page appears with a simple HTML form, as shown here.

Figure 4. Blank cache test page

Just as with the compression test page, fill out the URL that the cache-testing JSP should check. Any value will do, but for this example let us test TimeMonger.jsp -- a JSP we know takes a relatively long amount of time to execute. Here is what the cache-testing JSP returns after testing TimeMonger.jsp.

Figure 5. Cache test page used on TimeMonger.jsp

Notice that TimeMonger.jsp normally takes about five seconds to execute, but when a cache is used, it takes a hundredth of the time. If you like, try the page again and notice that the cache will continue to be used; each response will take about 50 milliseconds. However, if you delete the cache and force the JSP to execute, you will once again see the page take about five seconds to execute before it is once again cached.

The point to see is that CacheFilter.java is saving a copy of the HTML it used in a response and reusing it instead of executing dynamic code. This results in time-consuming and processor-intensive code being skipped. In TimeMonger.jsp, the skipped code was a few for loops -- admittedly, a poor example. But understand that the dynamic code can be anything, such as a database query or an execution of any custom Java code. The time it takes to retrieve content from the cache will always be about the same; in this example, it was about 50 milliseconds. Therefore, you can reduce the response time of just about any dynamic page to roughly 50 milliseconds, no matter how time-intensive the page is.

Cache Filter Summary and Good Practice Tips

Once again you have been presented with a filter that is incredibly helpful and near-trivial to use. Caching can save enormous amounts of your server's time and processing power, and caching is as easy to implement as putting a copy of jspbook.jar in your web application's WEB-INF/lib directory and deploying the filter to intercept requests to any resource you want to cache. I suggest you use a caching filter as much as possible in order to speed your web application up to peak performance.

While caching can save a web application a lot of time and processing power, and it can make even the most complex server-side code appear to execute unbelievably fast, caching is not suitable for everything. Some pages can't be cached because the page's content must be dynamically generated each time the page is viewed -- for instance, a web site that lists stock quotes. Often, though, resources that are supposedly always dynamic can really be cached for short periods of time. For example, consider news.google.com: content is cached for a few minutes at a time to save server-side resources, but the cache is updated quickly enough to make the site appear completely dynamic. In the given cache filter code, you can configure whether the filter caches a particular resource at all, and how long the filter uses a cache before updating it. Both are set with initial configuration elements.

  <filter>
    <filter-name>CacheFilter</filter-name>
    <filter-class>com.jspbook.CacheFilter</filter-class>
    <init-param>
      <param-name>/timemonger.jsp</param-name>
      <param-value>nocache</param-value>
    </init-param>
    <init-param>
      <param-name>cacheTimeout</param-name>
      <param-value>1</param-value>
    </init-param>
  </filter>
  <filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>*.jsp</url-pattern>
  </filter-mapping>

To tell the cache filter that a resource shouldn't be cached, set an initial configuration element with the same name as the resource's request URI to the value nocache. To configure how long the filter waits before updating cached content, change the cacheTimeout initial configuration parameter to a numerical value representing the number of minutes a cache is valid. Both of these features are specific to this cache filter. Feel free to examine CacheFilter.java to see exactly how they are implemented.
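The logic for interpreting those two parameters is simple enough to sketch on its own. The following plain-Java class is a hypothetical illustration of how a filter might read them from its init-params (in a real filter, the values would come from FilterConfig.getInitParameter); CacheFilter.java's actual code may differ.

```java
import java.util.Map;

// A sketch of how the two cache filter configuration features might be
// interpreted: per-URI "nocache" entries and a cacheTimeout in minutes.
public class CacheConfig {
    private final Map<String, String> initParams;

    public CacheConfig(Map<String, String> initParams) {
        this.initParams = initParams;
    }

    // A resource is excluded from caching when an init-param whose name
    // matches the request URI has the value "nocache".
    public boolean isCacheable(String requestUri) {
        return !"nocache".equals(initParams.get(requestUri));
    }

    // cacheTimeout is given in minutes; fall back to a default
    // (10 minutes here, an arbitrary choice) when it is absent.
    public long timeoutMillis() {
        String v = initParams.get("cacheTimeout");
        long minutes = (v == null) ? 10 : Long.parseLong(v);
        return minutes * 60L * 1000L;
    }
}
```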

In general, a cache filter is a very powerful enhancement to add to a web application. Cached content can be served to users as fast as the server can read files from disk (or memory, if you keep the cache in RAM), which is almost always much faster than executing a servlet or JSP, especially a complex, database-driven page. However, caching must be done in an intelligent manner. Some pages simply can't be cached, and others can only be cached for a few minutes at a time. Make sure you cache as much of your web application's content for as long as you can, and be sure to configure the cache filter to appropriately handle pages that either shouldn't be cached or should only be cached for short periods of time.
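One common way a filter like this captures content in the first place is with a response wrapper whose output stream writes each byte both to the client and to a buffer that is later saved as the cached copy. Here is a plain-Java sketch of that "tee" stream, independent of the servlet API; the class name is hypothetical and CacheFilter.java may capture content differently.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Writes every byte both to the real destination (the client) and to a
// buffer that can later be stored as the cached copy of the page.
public class TeeOutputStream extends OutputStream {
    private final OutputStream client;
    private final ByteArrayOutputStream copy = new ByteArrayOutputStream();

    public TeeOutputStream(OutputStream client) {
        this.client = client;
    }

    @Override
    public void write(int b) throws IOException {
        client.write(b);
        copy.write(b);
    }

    // The captured bytes, ready to be written to the cache.
    public byte[] getCopy() {
        return copy.toByteArray();
    }
}
```

Because the stream tees the output as it is written, the first request both serves the page normally and populates the cache; no second rendering pass is needed.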

Conclusion

Every web application should have a caching filter and a compression filter. These two filters optimize how quickly a web application generates content and how long it takes the content to be sent across the World Wide Web, both of which are arguably the most important tasks a web application performs. The code presented in this article provides a good implementation of each of these filters. The code is both free and open source. If you don't want to build your own caching and compression support, simply deploy the jspbook.jar with your web application and reap the rewards. If you do wish to develop your own caching and compression support, you have the full code to both of these filters, and you can get any updates to the code from the book's support site, www.jspbook.com. Take the code and go!

Links

Jayson Falkner is a J2EE developer, student, and webmaster of JSP Insider.



Copyright © 2009 O'Reilly Media, Inc.