If you talk to any senior J2EE developers today, many of them will be happy to provide you with details on the different EJB types or how to use JMS to send and receive asynchronous messages. However, it may be difficult to find someone to describe a deployment architecture that can ensure scalability, reliability, and performance. One reason for the lack of understanding by many developers on deployment architecture is that the J2EE specification contains few details on application deployment, leaving most up to individual vendors. This leads to confusion, as each vendor devises its own unique way of deploying J2EE applications. In this article, we will first describe the different J2EE modules and the different packaging structures. Afterwards, we will discuss possible deployment architectures and some of the deployment issues to consider during the design and implementation of a J2EE application.
Our discussion assumes that you already have some understanding of the core J2EE technology and are comfortable with technologies such as servlets, JSP, EJB, and XML. For details of the J2EE specification, please visit the Sun J2EE web site.
J2EE 1.3 supports the packaging of J2EE applications into different deployable modules. There are two families of J2EE modules: web application archives (WAR files) and enterprise application archives (EAR files).
Basically, a Web Application Archive (WAR file) is used to deploy a web-based application. This file may contain servlets, HTML files, JSPs, and all associated images and resource files. On the other hand, an Enterprise Application Archive (EAR file) is a meta-container and may contain EJBs, resource adapters, and one or more web application modules. One area of consideration when packaging enterprise applications is the number of EAR files an application should use: should I include all of my EJBs in one EAR, or put each of them in different EAR files? This decision could have an impact on the performance, scalability, and maintainability of the application. We will discuss this further. For details on the individual module types, please refer to the J2EE 1.3 specification (PDF).
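To make the packaging units concrete, here is a sketch of how a hypothetical enterprise application (all module names invented for illustration) might be laid out inside a single EAR:

```
shop.ear
 ├── META-INF/application.xml   (lists the modules below)
 ├── shop-ejb.jar               (session and entity beans)
 ├── shop-web.war               (servlets, JSPs, images, HTML)
 └── shop-ra.rar                (resource adapter for a legacy system)
```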
One area that is often overlooked during the design of a J2EE application is the ClassLoader relationship between the different module types. The Java Virtual Machine (JVM) uses a ClassLoader to locate and load Class objects into memory. By default, the JVM will use the path information specified in the CLASSPATH environment variable to locate classes. It is also possible for an application to provide its own ClassLoader, a well-known example being the URLClassLoader used by the Servlet Engine to locate and instantiate classes from a URL.
Depending on the implementation of the J2EE application server, there are at least three levels of ClassLoaders. As illustrated in Figure 1, the application server itself uses the SystemClassLoader, which only sees resources on the system CLASSPATH. There is a separate ClassLoader within each EAR, RAR, and WAR module. The exact relationship between these ClassLoaders is not clearly defined, but typically there is a parent-child relationship between the four ClassLoaders, whereby a child ClassLoader can locate classes that are visible in its parent ClassLoader, but not vice versa. In the case of J2EE, the SystemClassLoader is the parent of all EAR ClassLoaders, and an EAR ClassLoader is in turn the parent of the ClassLoaders of its enclosed WAR and RAR files.
Figure 2. ClassLoader Parent-Child Relationship
According to the Java specification, a child ClassLoader must first ask its parent ClassLoader to locate a class before it attempts to locate the class itself. This may sound non-intuitive at first, but it is necessary to prevent ambiguity and conflict when there are multiple ClassLoaders within the same JVM. Some application servers give you the option to change the lookup behavior of the EAR or WAR ClassLoader, but this is not recommended, as it can lead to other types of problems (e.g., a ClassCastException may occur if you have two versions of the same class in different ClassLoaders).
The visibility restrictions between the different ClassLoaders will affect the choices you can make when packaging a J2EE application, especially when dealing with resources and libraries.
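The parent-first delegation described above can be observed directly. In this small sketch (the class name is ours), a child ClassLoader with no search path of its own still resolves java.lang.String by delegating up the chain:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DelegationDemo {
    public static void main(String[] args) throws Exception {
        // A child loader with an empty search path: it can only delegate upward.
        URLClassLoader child =
                new URLClassLoader(new URL[0], ClassLoader.getSystemClassLoader());

        Class<?> c = child.loadClass("java.lang.String");

        // Parent-first delegation means the class is resolved by the parent
        // chain, so both loaders see the exact same Class object.
        System.out.println(c == String.class);   // prints "true"

        // Core classes come from the bootstrap loader, reported as null.
        System.out.println(c.getClassLoader());  // prints "null"
    }
}
```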
A typical three-tier enterprise application is organized into three major tiers: the presentation tier, the business tier, and the data tier.
In a large-scale enterprise solution, each tier will be deployed in a separate domain to allow each domain to be scaled independently based on business needs. In addition, load balancers may be deployed in front of the presentation tier to improve availability and support better fail-over. The business and data tiers tend to rely on clustering technology to provide fail-over support. The following diagram outlines the basic deployment architecture:
Figure 3. Typical Three-tier Deployment Architecture
Depending on the actual business use cases, there may be variations on the above architecture.
For a J2EE application, the presentation tier is usually handled through the use of servlets and JSPs, which can be packaged as one or more WAR files. The business tier is usually handled through the use of session beans (either stateless or stateful), which may be packaged as one or more EAR files. The data tier is usually handled through the use of entity beans (which control access to database resources) or a resource adapter (which controls access to legacy or non-JDBC resources), which may be packaged as one or more EAR files. The following diagram shows the same deployment architecture with specific J2EE resources and modules:
Figure 4. Java Enterprise Application Deployment Architecture
Deploying a J2EE application based on the above structure will provide the greatest flexibility, but there are other considerations that must come into play before the deployment architecture can be finalized.
It is common understanding that remote calls are more expensive and take longer to execute than local calls. During a remote procedure call, the local proxy objects must make copies of all of the arguments and transport them over the wire, through RMI, to the remote objects, resulting in increased network traffic and slower response time.
Consider the scenario where the user issues a command through a web page, which in turn invokes the servlet, followed by the processing of the business method in the session bean and entity bean. At least four network operations may occur, two of which are remote EJB calls:
Figure 5. Remote Call Sequence in a Typical Business Operation
In the case where a session bean calls other session beans or multiple entity beans, additional remote calls will be made. If performance is a top priority, the architecture and design must be adjusted to reduce the number of remote calls or the cost of each remote call.
There are multiple ways to address this performance problem:
If the access interfaces are too fine-grained, the result is excessive network traffic and high latency. One must examine the architecture and design to ensure the right level of granularity is being used. There are several J2EE design patterns that can control granularity; the two best known are the Transfer Object and the Session Facade. Since the focus of our discussion is on deployment and packaging, please refer to the J2EE Blueprints web site to learn more about these patterns.
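As a minimal sketch of the Transfer Object pattern (the class and its fields are hypothetical), a session bean would assemble all of a customer's data on the server and return it in one remote call, instead of the client making one remote call per getter:

```java
import java.io.Serializable;

// A coarse-grained transfer object: Serializable so it can be copied
// across the wire in a single remote invocation.
public class CustomerTO implements Serializable {
    private final String id;
    private final String name;
    private final String email;

    public CustomerTO(String id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }

    public String getId()    { return id; }
    public String getName()  { return name; }
    public String getEmail() { return email; }
}
```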
J2EE 1.3 introduces the concept of local enterprise beans. This will allow an entity bean to expose a local interface, thus allowing parameters to be passed by reference rather than by value. However, in order for a session bean to access local enterprise beans, the local enterprise beans must be packaged in the same EAR file as the session bean. There are two ways to achieve this:
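For illustration, a minimal EJB 2.0 local view might look like the following sketch (the interface and method names are hypothetical, and compiling it requires the J2EE APIs on the classpath):

```java
import javax.ejb.CreateException;
import javax.ejb.EJBLocalHome;
import javax.ejb.EJBLocalObject;
import javax.ejb.FinderException;

// Local component interface: calls stay inside the same JVM, and
// parameters are passed by reference rather than copied over RMI.
interface CustomerLocal extends EJBLocalObject {
    String getName();
    void setName(String name);
}

// Local home interface, used by co-located beans to create and find
// instances without any remote plumbing.
interface CustomerLocalHome extends EJBLocalHome {
    CustomerLocal create(String id) throws CreateException;
    CustomerLocal findByPrimaryKey(String id) throws FinderException;
}
```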
Some J2EE application servers (e.g., BEA WebLogic) may optimize remote calls between beans (see the WebLogic Reference Document on Classloading) into local calls if the beans are in the same enterprise application. This is especially beneficial if one session bean may call multiple session beans or entity beans. There are drawbacks with this approach as well:
Another area we should consider is the location of common resources and libraries. One rule of thumb is that a resource should go together with the J2EE modules that use it. However, if a common resource is used by several modules, you may want to place it where it is accessible to all of them. In addition, it is bad practice to put resources on your system CLASSPATH, as they may conflict with other J2EE modules deployed in the same container. This will also limit your options in terms of hot deployment.
Let's consider for a moment that there is a .properties file (a resource bundle) containing information that is required by several web applications. Because of the inherited ClassLoader relationship and isolation level, there are three ways to place this file:
- Place a copy of the file in each WAR. The obvious drawback is that you must modify multiple copies of the same file whenever the information changes.
- Place the file on the system CLASSPATH. This solves the first problem of the need to modify multiple copies of the same file. However, this approach also limits your re-deployment options, as resources loaded by the SystemClassLoader can only be reloaded if you restart the JVM. In this case, changes you make to your resources may not be picked up until you restart the application server. For J2EE solutions where high availability is a concern, this will result in a more complicated system maintenance sequence.
- Place the file within the EAR, since a WAR ClassLoader may look into the EAR ClassLoader for resources. This option allows the greatest flexibility, as there is now only one place to update the resources. The update will take effect immediately when the enterprise application is re-deployed (provided that the application server supports hot re-deployment of EAR files). However, in order for each WAR to locate the resource, you must put the path to the resource bundles in the MANIFEST file of the WAR package.
It is not unusual for a J2EE project to include third-party software that is freely available (e.g., Apache's log4j). In this situation, it is usually better to place the JAR files for those libraries in a designated location in the EAR. The libraries should only be placed on the system CLASSPATH if they will not conflict with existing libraries used by the application server and if the need to change or update them is small. If you decide to put the common libraries inside an EAR module, please make sure that, because of the visibility restrictions between ClassLoaders, you put the path to the third-party libraries in the MANIFEST files of the J2EE modules that use them, including WAR and EJB modules.
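For example, the META-INF/MANIFEST.MF of a WAR or EJB module could reference libraries stored in a lib/ directory of the enclosing EAR like this (the jar names are illustrative; Class-Path entries are resolved relative to the referencing module's location within the EAR):

```
Manifest-Version: 1.0
Class-Path: lib/log4j.jar lib/commons-logging.jar
```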
J2EE makes it relatively easy to develop highly scalable and highly available solutions. A J2EE deployment architecture therefore cannot be considered complete until scalability and availability requirements have been taken into account.
As mentioned in the beginning of Section 2, splitting the application into multiple deployment units at different tiers will provide the most flexible scalability model. However, careful consideration must be given to ensure that you have the proper granularity and modularity to balance flexibility against performance.
Hot deployment is the ability to deploy and re-deploy an enterprise application without the need to stop and restart the application server. If hot deployment is a requirement for your application, take care to ensure that application resources, dependent third-party libraries, and other enterprise modules are packaged in a self-contained manner. Avoid building any dependency on the system CLASSPATH, and ensure that everything an enterprise application module may require is packaged within the same EAR file.
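As a sketch, the META-INF/application.xml of such a self-contained EAR (module names hypothetical) simply declares every module it carries:

```xml
<!DOCTYPE application PUBLIC
  "-//Sun Microsystems, Inc.//DTD J2EE Application 1.3//EN"
  "http://java.sun.com/dtd/application_1_3.dtd">
<application>
  <display-name>shop</display-name>
  <module>
    <web>
      <web-uri>shop-web.war</web-uri>
      <context-root>/shop</context-root>
    </web>
  </module>
  <module>
    <ejb>shop-ejb.jar</ejb>
  </module>
</application>
```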
Depending on the security requirements of the application, different security mechanisms (SSL, VPN, the J2EE declarative security model) can be implemented between tiers to provide security down to the method level. Although the J2EE specification describes the level of security support required of the application server, it does not dictate how security should be implemented; as a result, security implementation is vendor-specific. If your application requires the propagation of security identities between different security domains (as with distributed security), you should definitely consult with the vendor, as there may be restrictions that limit your choices in deployment architecture.
In an ideal world, all of the system administrators that you come across would be intimately familiar with J2EE and the application server. However, it is not uncommon to find that although your system administrator is a UNIX or Windows expert, he or she really does not know much about J2EE. So, for the benefit of those who will be deploying and managing the application on a day-to-day basis, clearly documented deployment steps are a must. It is even better to provide deployment scripts that automate the deployment of your applications. Many application server vendors today provide a way to deploy applications through a vendor-specific administration console or command line, or through the Java Management Extensions (JMX) API (e.g., the BEA WebLogic Management API). However, because of the different deployment mechanisms used by different vendors, you need to create different scripts for different platforms.
As more and more J2EE applications are written, it becomes crucial for the J2EE specification to define a common standard by which applications can be packaged and deployed. J2EE 1.3 standardizes packaging to some extent; however, there is still no standard for how to deploy an application. This is where initiatives like JSR 88 (the J2EE Deployment API) come in.
The goal of the JSR is to define a standard API set that will enable any deployment tool to deploy any J2EE modules onto a compliant J2EE application server. Specifically, the standard addresses the following areas of concern:
At this point, the JSR is slated to be included as part of the J2EE 1.4 specification. With this new API, an application vendor can create deployment tools or scripts that automatically deploy its applications onto different application servers without worrying about differences in deployment functions. However, there are still a number of areas that are not addressed by the JSR:
The configuration of application server resources, such as DataSources and JMS destinations, is still very much vendor-specific at this point.
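To give a feel for the API, a vendor-neutral deployment script under JSR 88 might look roughly like this sketch (the server URI, credentials, and file names are placeholders, and running it requires a JSR 88 provider on the classpath):

```java
import java.io.File;
import javax.enterprise.deploy.shared.factories.DeploymentFactoryManager;
import javax.enterprise.deploy.spi.DeploymentManager;
import javax.enterprise.deploy.spi.Target;
import javax.enterprise.deploy.spi.status.ProgressObject;

public class Deploy {
    public static void main(String[] args) throws Exception {
        // Obtain a DeploymentManager from whichever JSR 88 provider is registered.
        DeploymentManager dm = DeploymentFactoryManager.getInstance()
                .getDeploymentManager("deployer:example://localhost:7001",
                                      "admin", "password");
        try {
            Target[] targets = dm.getTargets();
            // Distribute the EAR, along with a server-specific deployment
            // plan, to all available targets.
            ProgressObject progress = dm.distribute(targets,
                    new File("shop.ear"), new File("shop-plan.xml"));
            System.out.println(progress.getDeploymentStatus().getMessage());
        } finally {
            dm.release();
        }
    }
}
```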
Deployment is still an area of J2EE with which many developers are unfamiliar. If deployment issues are not considered during the architecture and design phases, you can easily run into situations that force architecture and design changes. As illustrated above, there are many considerations that should be taken into account from the beginning to ensure that your application will meet your scalability, performance, and availability requirements.
Allen Chan is the Director of Product Development for Wysdom Inc.
Copyright © 2017 O'Reilly Media, Inc.