Finding which JAR contains a class

I often want to search through a large number of JAR files looking for a particular class, and every time I do this I wish I had some utility to make it easier.  So I finally made one:

#!/bin/sh

if [ $# -ne 1 ]
then
  echo "usage: $0 <class-name>" >&2
  exit 1
fi

TARGET="$1"

find . -name "*.jar" | while read -r i
do
  if jar tvf "$i" | grep -q "$TARGET"
  then
    echo "$TARGET is in $i"
  fi
done
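If you would rather have this as a small cross-platform tool, the same search can be sketched in plain Java using java.util.jar (the class name FindClass and its layout here are my own, not part of the script above):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.jar.JarFile;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FindClass {

    // Return every *.jar under 'root' that contains an entry whose
    // name includes 'target' (e.g. "QueueConnectionFactory").
    public static List<Path> find(Path root, String target) throws IOException {
        List<Path> hits = new ArrayList<>();
        try (Stream<Path> paths = Files.walk(root)) {
            List<Path> jars = paths
                .filter(p -> p.toString().endsWith(".jar"))
                .collect(Collectors.toList());
            for (Path p : jars) {
                try (JarFile jar = new JarFile(p.toFile())) {
                    if (jar.stream().anyMatch(e -> e.getName().contains(target))) {
                        hits.add(p);
                    }
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        Path root = args.length > 1 ? Paths.get(args[1]) : Paths.get(".");
        for (Path p : find(root, args[0])) {
            System.out.println(args[0] + " is in " + p);
        }
    }
}
```

Run it as, for example, `java FindClass QueueConnectionFactory .` to search the current directory tree.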

Update: My colleague Chris Johnson just posted a better version of this, with some caching over at the Fusion Security Blog.


Improving JMS Performance on WebLogic

WebLogic Server includes a feature called the ‘JMS Wrapper’ that you can take advantage of to dramatically improve the performance of JMS applications running on WebLogic Server.  Note that this is not for remote JMS clients; it is for JMS code running on WebLogic Server itself, such as Servlets or EJBs.

In this post, I will demonstrate how to use the JMS Wrapper to improve the performance of a Servlet-based JMS application.  On my MacBook Pro, I was able to process 1,000 messages in about 7 seconds on average without the JMS Wrapper, and in about 1 second on average with it.  That is about seven times faster.

To get this improvement, all that is required is to change the way we look up the QueueConnectionFactory and Queue.  This means changing two lines of code and adding some extra information to the deployment descriptors.  Not difficult at all.

So let’s walk through an example.  We will build two separate web applications, which are the same except that one will use the JMS Wrapper and the other will not.  The examples in this post were built using Maven 2, Java 1.6.0_20 and WebLogic 10.3.4 (64-bit) on Mac OS X 10.6.

You can download the completed projects from https://www.samplecode.oracle.com using the following Subversion command:

svn checkout https://www.samplecode.oracle.com/svn/jmswrapper

This will give you a new directory called jmswrapper which contains a directory called trunk which contains the two projects.  Each project has a Maven POM in its root directory so that you can build and test it.  You will need to update the POM to include your WebLogic Server details.  This will be covered below.

If you prefer to create the projects from scratch, you can follow the instructions here.  In this example, we are using Maven to build, deploy and test the project.  If you don’t want to use Maven, you will need to run the appropriate commands to compile and package your applications and test code, and to deploy and execute them manually.

First, let’s create a project that does not use the WebLogic JMS Wrapper functionality.  We will use this as our base line, to compare performance against.

Create a new web application using Maven:

mvn archetype:generate

Select the webapp-javaee6 archetype and provide the coordinates to create your project.  I used the following:

groupId:    com.redstack
artifactId: jmsNoWrapper
version:    1.0-SNAPSHOT
package:    com.redstack

Next, we need to set up the Maven POM with the necessary dependencies for JMS and with information so we can deploy to WebLogic.  We will also configure Maven to execute our performance tests automatically for us.  Here is my POM:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.redstack</groupId>
    <artifactId>jmsNoWrapper</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <name>jmsNoWrapper Web App</name>

    <properties>
        <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>6.0</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.8.1</version>
            <scope>test</scope>
        </dependency>

        <dependency>
          <groupId>com.oracle.soa</groupId>
          <artifactId>wlthinclient</artifactId>
          <version>11.1.1.4</version>
          <type>jar</type>
          <scope>compile</scope>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.1</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.1</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>6.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
              <configuration>
                <skip>true</skip>
<!-- skip the default run so we don't run tests twice -->
              </configuration>
              <executions>
                <execution>
                  <id>unit-tests</id>
                  <phase>test</phase>
                  <goals>
                    <goal>test</goal>
                  </goals>
                  <configuration>
                    <skip>false</skip>
                    <includes>
                      <include>**/*Test.java</include>
                    </includes>
                    <excludes>
                      <exclude>**/*IntegrationTest.java</exclude>
                    </excludes>
                  </configuration>
                </execution>
                <execution>
                  <id>integration-tests</id>
                  <phase>integration-test</phase>
                  <goals>
                    <goal>test</goal>
                  </goals>
                  <configuration>
                    <skip>false</skip>
                    <includes>
                      <include>**/*IntegrationTest.java</include>
                    </includes>
                  </configuration>
                </execution>
              </executions>
            </plugin>
            <plugin>
              <groupId>com.oracle.weblogic</groupId>
              <artifactId>weblogic-maven-plugin</artifactId>
              <version>10.3.4</version>
              <configuration>
                <adminurl>t3://localhost:7001</adminurl>
                <user>weblogic</user>
                <password>welcome1</password>
                <name>jmsNoWrapper</name>
                <remote>true</remote>
                <upload>true</upload>
                <targets>myserver</targets>
              </configuration>
              <executions>
                <execution>
                  <id>deploy</id>
                  <phase>pre-integration-test</phase>
                  <goals>
                    <goal>deploy</goal>
                  </goals>
                  <configuration>
                    <source>target/jmsNoWrapper.war</source>
                  </configuration>
                </execution>
              </executions>
          </plugin>
        </plugins>
        <finalName>jmsNoWrapper</finalName>
    </build>

    <distributionManagement>
      <!-- use the following if you're not using a snapshot version. -->
      <repository>
        <id>local</id>
        <name>local repository</name>
        <url>file:///Users/mark/.m2/repository</url>
      </repository>
      <!-- use the following if you ARE using a snapshot version. -->
      <snapshotRepository>
        <id>localSnapshot</id>
        <name>local snapshot repository</name>
        <url>file:///Users/mark/.m2/repository</url>
      </snapshotRepository>
    </distributionManagement>

</project>

The first thing we need to do is add the WebLogic ‘thin client’ as a dependency.  I put it into my local Maven repository so that I can add it to the POM as follows:

        <dependency>
          <groupId>com.oracle.soa</groupId>
          <artifactId>wlthinclient</artifactId>
          <version>11.1.1.4</version>
          <type>jar</type>
          <scope>compile</scope>
        </dependency>

Before this will work, we need to put the JAR file into our local Maven repository using the following commands:

# cd <WL_HOME>/server/lib
# mvn install:install-file \
    -Dfile=wlthinclient.jar \
    -DgroupId=com.oracle.soa \
    -DartifactId=wlthinclient \
    -Dversion=11.1.1.4 \
    -Dpackaging=jar \
    -DgeneratePom=true

You may want to choose different coordinates.  I put this in the same group as many of the other JAR files that I commonly use when writing code against SOA Suite or BPM Suite, so I used the same version and groupId I use for those libraries.

Next, we want to configure Maven to run our performance test for us after it has deployed the web application to WebLogic Server.  This is done using the following plugin section in the POM.

Here we are telling Maven that Java source files in our test directory that end with ‘Test’ are unit tests that should be run in the test phase, before packaging and deployment, and that Java source files that end with ‘IntegrationTest’ are integration tests that should be run in the integration-test phase, after deployment.

            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
              <configuration>
                <skip>true</skip>
<!-- skip the default run so we don't run tests twice -->
              </configuration>
              <executions>
                <execution>
                  <id>unit-tests</id>
                  <phase>test</phase>
                  <goals>
                    <goal>test</goal>
                  </goals>
                  <configuration>
                    <skip>false</skip>
                    <includes>
                      <include>**/*Test.java</include>
                    </includes>
                    <excludes>
                      <exclude>**/*IntegrationTest.java</exclude>
                    </excludes>
                  </configuration>
                </execution>
                <execution>
                  <id>integration-tests</id>
                  <phase>integration-test</phase>
                  <goals>
                    <goal>test</goal>
                  </goals>
                  <configuration>
                    <skip>false</skip>
                    <includes>
                      <include>**/*IntegrationTest.java</include>
                    </includes>
                  </configuration>
                </execution>
              </executions>
            </plugin>

Finally, we have the following plugin section that configures deployment to WebLogic.  You will need to have the WebLogic Maven plugin installed to use this.  See this post for details.  The configuration should be reasonably self-explanatory: the name element is the name of the web application on WebLogic Server, and targets is the name of the server or cluster to deploy the application to.  Note that the name of the target WAR file is given in the source element.

            <plugin>
              <groupId>com.oracle.weblogic</groupId>
              <artifactId>weblogic-maven-plugin</artifactId>
              <version>10.3.4</version>
              <configuration>
                <adminurl>t3://localhost:7001</adminurl>
                <user>weblogic</user>
                <password>welcome1</password>
                <name>jmsNoWrapper</name>
                <remote>true</remote>
                <upload>true</upload>
                <targets>myserver</targets>
              </configuration>
              <executions>
                <execution>
                  <id>deploy</id>
                  <phase>pre-integration-test</phase>
                  <goals>
                    <goal>deploy</goal>
                  </goals>
                  <configuration>
                    <source>target/jmsNoWrapper.war</source>
                  </configuration>
                </execution>
              </executions>
          </plugin>

You should also add a distributionManagement section as shown in the example above.

To check you have everything set up right, you can run a deploy:

# mvn deploy

This will compile everything, run the tests (if we had any), package up a WAR file, deploy it to WebLogic and then run the integration tests (if we had any – we will write one soon).  You should then be able to see the application deployed on WebLogic.  You can check by looking at the Deployments in the WebLogic Server console.  You should be able to access that on your machine at http://yourmachine:7001/console – you will need to substitute the correct name and port for your machine.  You should also be able to run the web application by going to http://yourmachine:7001/jmsNoWrapper – again substitute in the correct name and port for your environment.  You should see a blank page with the message ‘Hello World!’ on it.  This is the default page created by Maven.

Now that we have everything set up, let’s write a Java Servlet that will send a JMS message.  Later we will write an integration test that will run this Servlet 1,000 times and measure the performance.

Here is the code for the Servlet.  This goes in src/main/java/com/redstack/JMSServlet.java:

package com.redstack;

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import javax.naming.InitialContext;
import javax.naming.NamingException;

import javax.jms.QueueConnectionFactory;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueSession;
import javax.jms.QueueSender;
import javax.jms.TextMessage;
import javax.jms.JMSException;

public class JMSServlet extends HttpServlet {

  @Override
  public void init(ServletConfig config) throws ServletException {
    super.init(config);
  }

  public void doGet(HttpServletRequest request, HttpServletResponse response)
  throws ServletException, IOException {
    response.setContentType("text/html");
    PrintWriter out = response.getWriter();
    out.println("<html><head><title>JMSServlet</title></head><body><h1>JMS Wrapper Demo</h1>");

    QueueConnection conn = null;

    try {
      InitialContext ctx = new InitialContext();
      QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/QCF");
      Queue queue = (Queue) ctx.lookup("jms/myQueue");
      ctx.close();
      conn = qcf.createQueueConnection();
      QueueSession session = conn.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
      QueueSender sender = session.createSender(queue);
      TextMessage msg = session.createTextMessage("this is a test message");
      sender.send(msg);
    } catch (Exception ignore) {}
    try {
      if (conn != null) conn.close();
    } catch (Exception ignore) {}

    out.println("</body></html>");
    out.close();
  }

  public void destroy() {}

}

This is a simple, standard Java Servlet that implements the doGet() method, which will be executed when WebLogic receives a request at the Servlet’s URL – we will set that up shortly.

The Servlet just prints out some HTML and sends a JMS message.  You can see in there the bare essential code to send a JMS message.  This code is based on the JMS client found in this post.  In order to run this, we will need to set up some JMS resources on WebLogic Server – we will come to that shortly.

Notice that we create the QueueConnectionFactory, Queue, QueueSender, etc. every time and then close the QueueConnection when we are finished.  This is standard JMS practice, nothing specific to WebLogic.

We need to register our Servlet in the application’s standard Java deployment descriptor, web.xml.  This file is located in src/main/webapp/WEB-INF/web.xml.  Here is mine:

<?xml version="1.0" encoding="UTF-8"?>
<web-app
  xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
  id="worklist" version="2.5">
  <display-name>JMS Wrapper Example</display-name>

  <servlet>
    <servlet-name>JMSServlet</servlet-name>
    <servlet-class>com.redstack.JMSServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>

  <servlet-mapping>
    <servlet-name>JMSServlet</servlet-name>
    <url-pattern>/JMSServlet</url-pattern>
  </servlet-mapping>

  <welcome-file-list>
    <welcome-file>/index.jsp</welcome-file>
  </welcome-file-list>

</web-app>

Here we map the Servlet to the URL /JMSServlet inside the web application, i.e. http://yourmachine:7001/jmsNoWrapper/JMSServlet.

If you like, you can put a link on the web application’s home page, stored in src/main/webapp/index.jsp, so that you can manually run the Servlet from a browser.  I added the following to the HTML BODY:

<h1>JMS Wrapper Demo</h1>
<a href="JMSServlet">Send some messages</a>

Finally, we need to write our Integration Test to measure the performance of the Servlet.  Create a new file src/test/java/com/redstack/PerformanceIntegrationTest.java.  Here is the code for this class:

package com.redstack;

import junit.framework.TestCase;
import java.net.URL;
import java.net.URLConnection;
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PerformanceIntegrationTest extends TestCase {

  public void testHandleRequestView() throws Exception {

    System.out.println("+++ This is the performance test +++");

    long start = System.nanoTime();
    // hit the page 1000 times
    for (int i = 0; i < 1000; i++) {
      URL thePage = new URL("http://localhost:7001/jmsNoWrapper/JMSServlet");
      URLConnection conn = thePage.openConnection();
      BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
      String inputLine;
      while ((inputLine = in.readLine()) != null) {
        // read the whole page
      }
      in.close();
    }
    long end = System.nanoTime();

    System.out.println("Hit the page 1000 times, sending 1000 messages with full open/close\n"
      +"of JMS objects, taking " + (end - start) / 1000000 + " milliseconds.");
    assertEquals("hello", "hello");

  }

}

This class will be run in the integration-test phase of the build, after the deployment of the application to WebLogic.  It will run the Servlet 1,000 times and print out the time taken in milliseconds to process the 1,000 messages.  We will see this in the output from Maven when we do a deployment.

Now let’s set up the JMS resources we need on WebLogic Server.  I used the ZIP distribution (discussed in this post).  Depending on how you installed WebLogic, you may already have some of these resources.  If they are already there, go ahead and use the ones you have; otherwise create new ones.

Log in to the WebLogic Server console using your administrative user (probably weblogic) and go to the JMS Servers page by clicking on the link under Services or using the navigation tree on the left hand side.

Click on the New button to create a JMS Server.

Enter a name for your JMS Server.  I called mine JMSServer1.  Then click on Next.

Choose the server that you want this JMS Server deployed on.  If you are using the ZIP distribution like me, you will only have one to choose from.  Then click on Finish.

Now we need to create a JMS Module.  Go back to the Home page and click on the link for JMS Modules under Services or use the navigation tree on the left hand side.

Click on the New button to create a new JMS Module.  Enter a name for your JMS Module and then click on Next.  I called mine JMSModule1.

Select the server you want to target this JMS Module to then click on Next.

Now click on Finish.

Navigate to your new JMS Module – which will now be listed on the JMS Modules page – and click on the link to view the settings, as shown below.  Then click on the New button to create a new resource.

We need to create a Connection Factory first.  Select Connection Factory from the list and click on Next.

Enter a name and JNDI name for the Connection Factory.  These need to match the code.  You should use the same names as shown here, or update your Servlet code if you use different names.  I called mine QCF and jms/QCF as shown below.  Click on Next.

Select the target and click on Finish.  You should only have the one target listed.

We also need to create the Queue.  Return to the settings page for your JMS Module and click on New.  Choose Queue from the list and click on Next.  Enter a name and JNDI name for your queue as shown.  These also need to match the Servlet code.  I used myQueue and jms/myQueue.  Click on Next.

Click on the Create a New Subdeployment button (you only need to do this the first time – if you create additional queues later on you will not need to repeat this step).

Enter a name for your new subdeployment and click on OK.  I called mine subdeployment1.

Now choose the subdeployment and target for the queue and click on Finish.

If you return to the settings page for your JMS Module, you will now see the new resources listed as shown below.

That’s all the JMS configuration we need to do for this example.  Now we are ready to deploy our application.  From the jmsNoWrapper directory, issue the following command:

# mvn deploy

This will build and deploy our application, and run the performance test.  You will see the results of the performance test in the Maven output.  Here is an example of the relevant part of the output:

[INFO] [surefire:test {execution: integration-tests}]
[INFO] Surefire report directory: /Users/mark/src/jmsNoWrapper/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.redstack.PerformanceIntegrationTest
+++ This is the performance test +++
Hit the page 1000 times, sending 1000 messages with full open/close
of JMS objects, taking 7148 milliseconds.
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.213 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

You can run this a few times to repeat the test.  On my machine I got the following results from four consecutive runs: 7148ms, 8773ms, 5140ms, 6274ms, giving an average of 6834ms.

So now we have a base line to compare against.  Let’s create a project that does use the JMS Wrapper and compare the performance.

If you grabbed the code from our Subversion repository (see above), just take a look in the trunk/jmsWrapper directory.  If you are building your own from scratch, you can just copy your whole jmsNoWrapper directory and name the copy jmsWrapper.  There are only a few changes we need to make.

In the POM, we need to update the name of the application.  Here is the new POM:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.redstack</groupId>
    <artifactId>jmsWrapper</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <name>jmsWrapper Web App</name>

    <properties>
        <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>6.0</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.8.1</version>
            <scope>test</scope>
        </dependency>

        <dependency>
          <groupId>com.oracle.soa</groupId>
          <artifactId>wlthinclient</artifactId>
          <version>11.1.1.4</version>
          <type>jar</type>
          <scope>compile</scope>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.1</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.1</version>
                <executions>
                    <execution>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>6.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
              <configuration>
                <skip>true</skip>
<!-- skip the default run so we don't run tests twice -->
              </configuration>
              <executions>
                <execution>
                  <id>unit-tests</id>
                  <phase>test</phase>
                  <goals>
                    <goal>test</goal>
                  </goals>
                  <configuration>
                    <skip>false</skip>
                    <includes>
                      <include>**/*Test.java</include>
                    </includes>
                    <excludes>
                      <exclude>**/*IntegrationTest.java</exclude>
                    </excludes>
                  </configuration>
                </execution>
                <execution>
                  <id>integration-tests</id>
                  <phase>integration-test</phase>
                  <goals>
                    <goal>test</goal>
                  </goals>
                  <configuration>
                    <skip>false</skip>
                    <includes>
                      <include>**/*IntegrationTest.java</include>
                    </includes>
                  </configuration>
                </execution>
              </executions>
            </plugin>
            <plugin>
              <groupId>com.oracle.weblogic</groupId>
              <artifactId>weblogic-maven-plugin</artifactId>
              <version>10.3.4</version>
              <configuration>
                <adminurl>t3://localhost:7001</adminurl>
                <user>weblogic</user>
                <password>welcome1</password>
                <name>jmsWrapper</name>
                <remote>true</remote>
                <upload>true</upload>
                <targets>myserver</targets>
              </configuration>
              <executions>
                <execution>
                  <id>deploy</id>
                  <phase>pre-integration-test</phase>
                  <goals>
                    <goal>deploy</goal>
                  </goals>
                  <configuration>
                    <source>target/jmsWrapper.war</source>
                  </configuration>
                </execution>
              </executions>
          </plugin>
        </plugins>
        <finalName>jmsWrapper</finalName>
    </build>

    <distributionManagement>
      <!-- use the following if you're not using a snapshot version. -->
      <repository>
        <id>local</id>
        <name>local repository</name>
        <url>file:///Users/mark/.m2/repository</url>
      </repository>
      <!-- use the following if you ARE using a snapshot version. -->
      <snapshotRepository>
        <id>localSnapshot</id>
        <name>local snapshot repository</name>
        <url>file:///Users/mark/.m2/repository</url>
      </snapshotRepository>
    </distributionManagement>

</project>

We also need to update the src/main/webapp/WEB-INF/web.xml to include a resource-ref to point to the JMS resources we want to use.  Here is the web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app
  xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
  id="worklist" version="2.5">
  <display-name>JMS Wrapper Example</display-name>

  <servlet>
    <servlet-name>JMSServlet</servlet-name>
    <servlet-class>com.redstack.JMSServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>

  <servlet-mapping>
    <servlet-name>JMSServlet</servlet-name>
    <url-pattern>/JMSServlet</url-pattern>
  </servlet-mapping>

  <welcome-file-list>
    <welcome-file>/index.jsp</welcome-file>
  </welcome-file-list>

  <resource-ref>
    <res-ref-name>jms/wrappedQCF</res-ref-name>
    <res-type>javax.jms.QueueConnectionFactory</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
  </resource-ref>

  <resource-env-ref>
    <resource-env-ref-name>jms/wrappedQueue</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Queue</resource-env-ref-type>
  </resource-env-ref>

</web-app>

Notice that we define a resource called jms/wrappedQCF which is a javax.jms.QueueConnectionFactory, and a resource called jms/wrappedQueue which is a javax.jms.Queue.

We also need to create a WebLogic deployment descriptor that maps these to the real resources.  Create a file called src/main/webapp/WEB-INF/weblogic.xml.  Here are the contents for this file:

<!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 9.1//EN"
 "http://www.bea.com/servers/wls810/dtd/weblogic810-web-jar.dtd">
<weblogic-web-app>
  <context-root>/jmsWrapper</context-root>
  <jsp-descriptor>
    <page-check-seconds>1</page-check-seconds>
  </jsp-descriptor>
  <resource-description>
    <res-ref-name>jms/wrappedQCF</res-ref-name>
    <jndi-name>jms/QCF</jndi-name>
  </resource-description>
  <resource-env-description>
    <res-env-ref-name>jms/wrappedQueue</res-env-ref-name>
    <jndi-name>jms/myQueue</jndi-name>
  </resource-env-description>
</weblogic-web-app>

In this file we define the mappings from the names we just defined in the web.xml to the real resource names.  You can see here we define that jms/wrappedQCF is really jms/QCF and jms/wrappedQueue is really jms/myQueue.

The presence of these settings is what tells WebLogic Server to enable the JMS Wrapper functionality.  So having these settings here in the deployment descriptor, and using the names defined in the web.xml to look up the resources will give you the performance improvements.

You need to change the Servlet code to use the indirect names.  The two lines that need to be changed are the lookups:

      QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/wrappedQCF");
      Queue queue = (Queue) ctx.lookup("jms/wrappedQueue");

Finally, we need to change the URL in our Integration Test to point to the new application.  The line we need to change is as follows:

       URL thePage = new URL("http://localhost:7001/jmsWrapper/JMSServlet");

You can now deploy the new version of the application that uses the JMS Wrapper by executing the following command from the jmsWrapper directory:

# mvn deploy

Again, you will see the results of the performance test in the Maven output.  On my machine I got the following results from four consecutive tests: 1048ms, 1047ms, 1017ms, 1045ms, giving an average of 1039ms.

So by making a couple of simple changes, we improved the performance of our JMS Servlet by around seven times, from 6834ms for 1,000 messages to only 1039ms (on average).

Notice that we need to close the connection in our Servlet.  This is important.  If we don’t do that, WebLogic will not know that the connection is available to be returned to the pool for reuse.
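The close-in-finally idiom looks like this in outline.  This is just a sketch: the PooledConnection class below is a hypothetical stand-in for the javax.jms.QueueConnection you would obtain from the wrapped connection factory, so that the example is self-contained.

```java
// Hypothetical stand-in for a pooled JMS connection; in the real Servlet
// this would be the javax.jms.QueueConnection from the wrapped factory.
class PooledConnection {
    boolean closed = false;
    void send(String message) {
        if (message == null) throw new IllegalArgumentException("no message");
    }
    void close() { closed = true; }   // returns the connection to the pool
}

public class CloseInFinally {
    // close() runs whether send() succeeds or throws, so the wrapper
    // can always return the underlying connection to the pool.
    static PooledConnection process(String message) {
        PooledConnection conn = new PooledConnection();
        try {
            conn.send(message);
        } catch (IllegalArgumentException e) {
            // handle or log the failure
        } finally {
            conn.close();
        }
        return conn;
    }

    public static void main(String[] args) {
        System.out.println(process("hello").closed);
        System.out.println(process(null).closed);
    }
}
```

Because close() sits in a finally block, the connection goes back to the pool even when the send fails.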

If you want to learn more about JMS on WebLogic, you might like to start with the documentation here or take a look at Chapter 10 of Professional Oracle WebLogic Server by Robert Patrick et al.  Enjoy!

Posted in Uncategorized | Tagged , , , | Leave a comment

Recommended patch for BPM and SOA 11.1.1.3 users

If you are using SOA Suite or BPM Suite 11.1.1.3 and you are not planning to upgrade to 11.1.1.4 just yet, you are strongly encouraged to install “Bundle Patch 2.”  The patch number is 12700861 and you can find information and instructions on My Oracle Support in document number 1288864.1.

Posted in Uncategorized | Tagged , | 1 Comment

Deploying WebLogic applications with Maven

In my last post, I talked about one exciting new feature in WebLogic 11g, support for Mac OS X.  Now, I want to cover another exciting new feature – a Maven plugin which allows you to incorporate deployment of your applications to WebLogic into your Maven builds.

I love this new feature because it makes it so easy to compile all my Java source and web artefacts, build a deployment archive and deploy it to WebLogic Server in one easy action!  I hope you like it too!

A big thank you to Steve Button for bringing this to my attention, helping me to get it working on Mac OS X and sharing some helpful hints and tips.

Setting up the plugin

First, let’s set up the Maven plugin.  You will need WebLogic Server 10.3.4 installed to do this.  I used the convenient new ‘ZIP Distribution’ which is built for developers.  I used Maven 2.2.1 and JDK 1.6 on Mac OS X 10.6 to write this article.

We need to create the plugin using the wljarbuilder utility.  Execute the following commands from your WebLogic home directory:

# cd <WL_HOME>/wlserver/server/lib
# java -jar wljarbuilder.jar -profile weblogic-maven-plugin

This will create a new file called weblogic-maven-plugin.jar in that same directory.  Next, we need to extract the Maven POM from the jar file, and place it in the same directory.  You may want to extract it into a temporary directory and then move it.

# jar xvf <WL_HOME>/wlserver/server/lib/weblogic-maven-plugin.jar META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml
# mv META-INF/maven/com.oracle.weblogic/weblogic-maven-plugin/pom.xml <WL_HOME>/wlserver/server/lib

Before installing the WebLogic Maven plugin into your local repository, it is important to make sure all of the necessary dependencies have been downloaded.

# cd <WL_HOME>/wlserver/server/lib
# mvn install

Note: before you continue, if you want to use the WebLogic plugin from the command line, you might want to set up a short name so you can type shorter goal names.  You have to do this now, before you continue.  To set up a short name, weblogic, you will need to add the following entries to your ~/.m2/settings.xml Maven configuration file…

<pluginGroups>
  <pluginGroup>com.oracle.weblogic</pluginGroup>
</pluginGroups>

… and add the following to the pom.xml that you just extracted into your <WL_HOME>/wlserver/server/lib directory.

<build>
  <plugins>
    <plugin>
      <artifactId>maven-plugin-plugin</artifactId>
      <version>2.3</version>
      <configuration>
        <goalPrefix>weblogic</goalPrefix>
      </configuration>
    </plugin>
  </plugins>
</build>

Now we can install the plugin.  You must run mvn install first to make sure it downloads the dependencies and recognises the prefix we have defined.

# mvn install
# mvn install:install-file -Dfile=weblogic-maven-plugin.jar -DpomFile=pom.xml

The plugin will now be available for use.  We can execute the ‘help’ goal to test it:

# mvn com.oracle.weblogic:weblogic-maven-plugin:help

Or, if you set up the short name:

# mvn weblogic:help

Using the plugin to deploy an application

Probably the most convenient way to use the WebLogic Maven plugin is to incorporate it into your standard build.  Let’s look at how we can set up a project to compile, test, package and deploy to WebLogic by simply typing mvn deploy.

First, create a new Maven project.  I used mvn archetype:generate, selected the webapp-javaee6 template, and provided the necessary details.  I called my project wldemo.

We need to add a few items to the pom.xml to tell Maven to deploy the application to WebLogic.

<plugin> 
  <groupId>com.oracle.weblogic</groupId>
  <artifactId>weblogic-maven-plugin</artifactId>
  <version>10.3.4</version>
  <configuration>
    <adminurl>t3://localhost:7001</adminurl>
    <user>weblogic</user>
    <password>password</password>
    <name>wldemo</name>
    <remote>true</remote>
    <upload>true</upload>
    <targets>AdminServer</targets>
  </configuration>
  <executions>
    <execution>
      <id>deploy</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>deploy</goal>
      </goals>
      <configuration>
        <source>target/wldemo.war</source>
      </configuration>
    </execution>
  </executions>
</plugin>
...
<distributionManagement>
  <!-- use the following if you're not using a snapshot version. -->
  <repository>
    <id>local</id>
    <name>local repository</name>
    <url>file:///Users/mark/.m2/repository</url>
  </repository>
  <!-- use the following if you ARE using a snapshot version. -->
  <snapshotRepository>
    <id>localSnapshot</id>
    <name>local snapshot repository</name>
    <url>file:///Users/mark/.m2/repository</url>
  </snapshotRepository>
</distributionManagement>

You will have to update the settings to match your environment, as follows:

Setting      Meaning
-----------  ----------------------------------------
adminurl     WebLogic T3 URL for your Admin Server
user         WebLogic administrative user
password     WebLogic administrative user's password
name         Name of application
remote       Indicates a remote server
upload       Indicates the war file must be uploaded
targets      The server, cluster, etc. to deploy to

In addition to the WebLogic settings in the plugins section, you need to set up the distributionManagement section to point to your local repository so that the deploy phase can run successfully.

Now we are ready to build, test and deploy our application!  In real life, you might want to write some code first – but for this example, we will just use the simple JSP page that comes in the template.

# mvn deploy

After a few moments, your application will be built and deployed.  You can see it right there in the WebLogic Server console.

You can now hit the application at http://yourserver:7001/wldemo.  You should get a clean page with the message ‘Hello World!’

Now you can make changes and quickly and easily deploy them to your server for testing with a single, simple command!  The plugin will also let you start and stop applications and undeploy them.

Enjoy!

Where to find more information

You might want to read over the documentation here for more details on the goals and parameters that are available in the plugin.

Posted in Uncategorized | Tagged , | 7 Comments

Installing WebLogic Server on Mac OS X

One of the exciting new features in WebLogic Server 11g is support for Mac OS X.  The new ‘ZIP Distribution’ of WebLogic Server allows developers to run WebLogic Server on Mac OS X.  The download file is much smaller and the installation much faster than for JDeveloper, which also includes a copy of WebLogic Server that can be used on the Mac.

To install WebLogic Server on your Mac, follow these steps.

First, you need to download WebLogic Server 11g (10.3.4) ZIP Distribution from OTN.

Unzip this into a new directory, which we will call MW_HOME.  I created a directory called WebLogic/Middleware in my home directory.

# mkdir ~/WebLogic
# cd ~/WebLogic
# mkdir Middleware
# cd Middleware
# unzip ~/Downloads/wls1034_dev.zip

Run the configuration script to setup WebLogic Server.

# export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
# export MW_HOME=/Users/mark/WebLogic/Middleware
# ./configure.sh
# export USER_MEM_ARGS="-Xmx1024m -XX:MaxPermSize=256m"
# . wlserver/server/bin/setWLSenv.sh

Start WebLogic Server for the first time and create a domain.

# cd ~/WebLogic
# mkdir domain
# cd domain
# java -Xmx1024m -XX:MaxPermSize=256m weblogic.Server

WebLogic will notice this is the first time it has been started and will ask you if you want to create a domain.  Answer ‘y’ to this question, and then provide a username and password when requested.

Would you like the server to create a default configuration and boot? (y/n): y
Enter username to boot WebLogic server: weblogic
Enter password to boot WebLogic server: password
For confirmation, please re-enter password required to boot WebLogic server: password

WebLogic Server will start up.  When it is up you will see a message indicating that the server was started in RUNNING mode.  Once you see this, you can shut it down (type Ctrl-C).

You can now use the startup scripts in your newly created domain.  When you want to start WebLogic Server, enter the following commands (you might want to create a script to do this):

# export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
# export MW_HOME=/Users/mark/WebLogic/Middleware
# export USER_MEM_ARGS="-Xmx1024m -XX:MaxPermSize=256m"
# . /Users/mark/WebLogic/Middleware/wlserver/server/bin/setWLSenv.sh
# /Users/mark/WebLogic/domain/startWebLogic.sh

Note:  If you copy and paste this, you may need to make sure that characters like “ and – are correct – when I pasted into terminal I got some incorrect characters.  You may also want to adjust the memory settings to suit your machine, and you might want to consider some of the other recommended JVM settings here.

You can now log on to the WebLogic Server console at http://yourmac:7001/console

Enjoy!

Posted in Uncategorized | Tagged , | 5 Comments

Changing the default editor/view in JDeveloper

If you work a lot with complex ADF pages in JDeveloper, you may get tired of waiting for it to open your pages in the Design view, I know I do!  Fortunately, there is a simple solution.  You can change the default editor so that they will open in source view (which is very fast) and then you can switch to design view only when you actually want to.

To do this, select Preferences from the Tools menu, then select File Types in the navigation tree on the left hand side.  Open the Default Editors tab on the right, find JSP Source and change the default editor to Source using the drop down box at the bottom.  Then click on OK.

[Screenshot: the Default Editors tab in JDeveloper’s Preferences dialog]

You can also do this for other types of files.

Posted in Uncategorized | Tagged , , | 3 Comments

Using Oracle XE with BPM 11.1.1.4

If you want to use the Oracle XE database in your development/test environments with BPM 11.1.1.4, you will need to make sure to follow the advice on the download page:

If you want to use XE you can set the RCU_JDBC_TRIM_BLOCKS environment variable to TRUE *prior* to running RCU.  As a reminder as to support level, when running RCU against XE you will receive a warning stating that the database version is not supported by Oracle.

If you do not follow this advice, you will get a few invalid objects in the database.  Remember that XE will work fine for development environments, but it is not actually a supported database for the BPM repository.  In production, you should use one of the supported databases.

Posted in Uncategorized | Tagged , , | 1 Comment

What BPM adds to SOA Suite

Oracle has just released Oracle SOA Suite and Oracle BPM Suite 11.1.1.4 (often referred to as ‘Patch Set 3’), the second release that includes comprehensive support for both Business Process Modeling Notation (BPMN) and Business Process Execution Language (BPEL) for modeling and executing business processes.

Organisations who have been using Oracle SOA Suite (and BPEL) for several years now sometimes ask us what extra value Oracle BPM Suite adds to the already rich SOA platform they are used to. And process analysts and integration developers often ask about the relative strengths of BPEL and BPMN – which to use when, and how they complement each other.

It turns out there is a lot of extra value added by the addition of Oracle BPM Suite – which is basically a superset of Oracle SOA Suite, but at the same time, it fits seamlessly into an existing SOA Suite environment and uses the same development tools, deployment and build processes, management and monitoring infrastructure and the same programming model – Service Component Architecture (SCA).

Oracle BPM Suite sits right on top of the solid foundation provided by Oracle SOA Suite.  Because of this, it inherits significant integration capabilities.  It really is a ‘best of both worlds’ – providing excellent feature sets and capabilities for both business and technical people working in the business process management space.  The strong BPM capabilities really complement the SOA foundation.  It’s hard to ‘do BPM’ well without SOA, and you could argue that SOA lacks a real purpose without BPM.  A lot of people who have tried to justify an investment in SOA have found it very difficult to build a successful business case without tying SOA to business driven BPM initiatives.

Organisations with a significant investment in Oracle SOA Suite should see Oracle BPM Suite as an upgrade which provides additional value – and they won’t need to retrain staff, replace existing infrastructure or migrate existing artifacts to realise the additional value added.

So, on the occasion of the second release of Oracle BPM Suite in the 11g release stream, let’s take a deep dive into the value it brings to the table and also look at how well it is integrated with Oracle SOA Suite.

The right tool for the right job

BPEL and BPMN are both ‘languages’ or ‘notations’ for describing and executing business processes. Both are open standards. Most business process engines will support one or the other of these languages. Oracle however has chosen to support both and treat them as equals. This means that you have the freedom to choose which language to use on a process by process basis. And you can freely mix and match, even within a single composite. (A composite is the deployment unit in an SCA environment.)

So why support both? Well it turns out that BPEL is really well suited to modeling some kinds of processes and BPMN is really well suited to modeling other kinds of processes. Of course there is a pretty significant overlap where either will do a great job.

There are different ways of looking at which language is more suited for various kinds of processes.  Let’s look at two common approaches – these both provide high level guidance and are not meant to be exhaustive or mutually exclusive.  Nor do they replace the need to do your own research and possibly a small ‘proof of concept’ modeling activity to validate which is right in your environment with your people and skills.

The ‘who is the audience’ approach

This approach looks at who is going to be doing the process modeling and whether models are going to be shared with ‘business’ people.

  • If the process models are going to be shared with business people, e.g. process participants, process owners or sponsors, I would tend to use BPMN,
  • If the people who are doing the process modeling are coming from a business background, e.g. process analysts or business analysts, I would tend to use BPMN,
  • If they were coming from an IT background, e.g. developers or architects, I would tend to use BPEL,
  • If the people who are going to be doing the modeling have extensive skills and experience in one language, I would probably be inclined to use that language, unless there was a good reason to introduce the other.

The ‘type of process’ approach

This approach uses a simple rule of thumb: If the process involves ‘people’ or ‘paper,’ I would lean towards BPMN. If it involves systems or applications integration, I would lean towards BPEL. That is a pretty high level and generic rule of thumb, so there are also some other things I would consider:

  • Generally speaking, I would tend to use BPMN for higher level, more ‘business’-oriented processes and BPEL for lower level, more ‘system’-oriented processes,
  • If the ‘process’ is really an ‘integration’ or a ‘service,’ I would tend to use BPEL.

Layers of business process

The natural result of both of these approaches tends to be a pattern where the higher level processes – the ones that business users interact with – are modeled in BPMN and these in turn call other processes that are also modeled in BPMN which in turn call ‘services’ that are implemented in BPEL. In fact, if you take a look at the Oracle Application Integration Architecture Reference Process Models, you will see that they follow this same pattern (with even higher level models in Value Added Chain diagrams.)

Structure

BPEL is a ‘structured’ language – much like Java is – that means it has ‘control structures’ like sequence (one activity follows another), decisions (called switches), looping (using a ‘while’ loop) and ‘scopes’ which set boundaries for exception handling. Exceptions are handled in a ‘try/catch’ style like many modern programming languages. A scope in BPEL can ‘throw’ an exception to its parent scope where it may be handled or ‘rethrown’ to a higher scope still.

As a result of this, BPEL feels very natural to people from a programming background. It has the same kind of logic and control structures that they are used to, and lets them think about problems the way they are accustomed to thinking.
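To illustrate the analogy, here is a plain Java sketch (not BPEL) of nested scopes with handle-or-rethrow fault handling; the class and fault names are invented for the example.

```java
public class ScopeDemo {
    // The inner 'scope' handles a fault it can recover from;
    // anything else is rethrown to the parent scope.
    static String process(boolean recoverable) {
        try {                          // parent scope
            try {                      // child scope
                throw new IllegalStateException("fault");
            } catch (IllegalStateException e) {
                if (recoverable) return "handled-in-child";
                throw e;               // rethrow to the parent scope
            }
        } catch (IllegalStateException e) {
            return "handled-in-parent";
        }
    }

    public static void main(String[] args) {
        System.out.println(process(true));
        System.out.println(process(false));
    }
}
```

The child scope handles the faults it knows about and rethrows the rest, much the way nested BPEL scopes delegate unhandled faults upwards.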

BPMN on the other hand is a ‘directed graph.’ This means that it allows you to arbitrarily move around the process. We often find that real world business processes can be modeled directly as directed graphs – that is, we don’t need to do a lot of analysis to work out how to structure the process in such a way as to make it ‘fit’ into the language.

Now of course there is a healthy overlap where many of the processes that you could model in BPEL could also be modeled in BPMN and vice versa. However, there are some processes that can be modeled very simply in BPMN but are quite difficult to model in BPEL. Take for example the following hypothetical ‘flight booking’ process. For whatever reason (probably the way the ‘legacy’ system works) there are only certain points where the customer can go back to an earlier step and, depending on where they are in the process, it is a different point they can return to.

This process can be modeled very simply in BPMN, as shown below; however, it would be quite difficult to model in BPEL. It could be done, of course, but it would be necessary to sit down and work out the logic. We would probably need to introduce some kind of ‘state’ variables and use them as ‘guards’ in some large switch construct inside a while loop. In doing so, we might lose a lot of the clarity that the BPMN model (below) has – that is, it might be harder for us to just look at the model and understand the process logic.

[BPMN model of the hypothetical flight booking process]

So this is an example of the type of process that is easier to model in BPMN due to its directed graph nature. Many Oracle SOA Suite (BPEL) users may have come across processes that required a bit of work to model in BPEL, so here is one benefit we can see of ‘upgrading’ to Oracle BPM Suite.
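To make that concrete, here is a rough Java sketch of the kind of state-and-guards workaround the BPEL version would need; the step names and transitions are invented for illustration.

```java
import java.util.*;

public class FlightBookingInBpel {
    enum Step { SEARCH, SELECT, PAY, CONFIRMED }

    // 'back' answers, one per visited step, whether the customer asks to
    // go back at that point; which step they return to depends on where
    // they are, so each case encodes its own backward jump.
    static List<Step> run(Iterator<Boolean> back) {
        List<Step> visited = new ArrayList<>();
        Step state = Step.SEARCH;
        while (state != Step.CONFIRMED) {      // the big while loop
            visited.add(state);
            boolean goBack = back.hasNext() && back.next();
            switch (state) {                   // the big switch, guarded by 'state'
                case SEARCH: state = Step.SELECT; break;
                case SELECT: state = goBack ? Step.SEARCH : Step.PAY; break;
                case PAY:    state = goBack ? Step.SELECT : Step.CONFIRMED; break;
                default: break;
            }
        }
        visited.add(Step.CONFIRMED);
        return visited;
    }

    public static void main(String[] args) {
        // Customer goes back once from PAY to SELECT, then completes.
        List<Boolean> answers = Arrays.asList(false, false, true, false, false);
        System.out.println(run(answers.iterator()));
    }
}
```

The process logic is now buried inside the switch guards, which is exactly the clarity we lose compared with drawing the backward arrows directly in BPMN.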

Sub-processes

BPMN includes an ‘embedded sub-process’ activity that allows for looping, parallel execution and iterating over members of collections (like arrays.) The embedded sub-process runs in the same instance, so it does not incur the overhead of starting a new process instance.

Embedded sub-processes can be nested and you can choose to execute the iterations sequentially or in parallel. This allows for very elegant modeling of processes that involve looping through collections (and nested collections). The example below shows a BPMN process that processes a set of pathology test series in parallel, each of which may contain multiple individual tests which are processed sequentially, before consolidating the results for review and the possible repeating of some or all tests.

[BPMN model of the pathology test process, with nested embedded sub-processes]
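The shape of that example (parallel over the series, sequential within each series) is analogous to the following Java sketch; the test names are invented.

```java
import java.util.*;
import java.util.stream.*;

public class TestSeriesFanOut {
    // Each test series is processed in parallel (the outer embedded
    // sub-process); the tests inside a series run sequentially.
    static List<String> runAll(List<List<String>> seriesList) {
        return seriesList.parallelStream()
                .map(series -> series.stream()                 // sequential inner loop
                        .map(test -> test + ":done")
                        .collect(Collectors.joining(",")))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> series = Arrays.asList(
                Arrays.asList("FBC", "ESR"),
                Arrays.asList("LFT"));
        System.out.println(runAll(series));
    }
}
```

Note that the collector preserves the order of the series even though they run in parallel, just as the consolidation step gathers all series results before the review.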

Interruption

Often we have a part of a process that will take some time to execute but which may be cancelled during that time. For example, while fulfilling an order (picking, packing, shipping, etc.) we may receive an order cancellation from the customer. BPMN includes a concept called a ‘boundary event’ which can be used to model this kind of situation.

The example below demonstrates such a process. The ‘Fulfill Order’ activity is actually an embedded sub-process (shown in its ‘minimised’ form to reduce clutter.) The sub-process has a ‘message boundary event’ attached to it. If the matching message is received at any time while the sub-process is still executing, the sub-process will be interrupted and the exception path (to ‘Cancel Order’) will be followed immediately.

[BPMN model of the order process, with a message boundary event attached to the Fulfill Order sub-process]

Boundary events can also be attached to individual activities (not just sub-processes) and can handle messages, time-related events and catch errors.
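As a rough analogy only (a real boundary event can interrupt the sub-process at any time, not just between steps), the control flow resembles this hypothetical sketch.

```java
import java.util.*;

public class BoundaryEventSketch {
    // Between steps of the 'Fulfill Order' sub-process we check whether a
    // cancellation message has arrived; if so, the remaining steps are
    // skipped and the cancellation path is taken instead.
    static List<String> fulfillOrder(Queue<String> inbox) {
        List<String> log = new ArrayList<>();
        for (String step : new String[] { "pick", "pack", "ship" }) {
            if ("CancelOrder".equals(inbox.poll())) {
                log.add("Cancel Order");
                return log;
            }
            log.add(step);
        }
        log.add("completed");
        return log;
    }

    public static void main(String[] args) {
        System.out.println(fulfillOrder(new ArrayDeque<>()));
        System.out.println(fulfillOrder(new ArrayDeque<>(Arrays.asList("CancelOrder"))));
    }
}
```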

Conditional Flows

In a BPMN process each activity must have exactly one default flow coming out of it (except for the ‘end’ event) but may also have zero or more conditional flows. A conditional flow is one that will be followed if and only if the condition attached to it evaluates to true.

Conditions may be expressed using a simple visual editor/expression language, in XPath, or may even be a set of rules that are evaluated by the embedded rules engine. Conditional paths can also be named and documented. The name appears on the process model, making it easy for non-technical users to understand the process model without needing to learn how to read conditions.

BPMN also provides a rich set of ‘gateways’ that allow for modeling of different kinds of decisions in a process. These include the ability to follow exactly one path, some paths, or all paths, and then to join the paths back together when one or all are completed.
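A hypothetical inclusive-gateway evaluation, sketched in Java: follow every conditional flow whose condition is true, and fall back to the default flow when none are (all the flow names here are invented).

```java
import java.util.*;
import java.util.function.Predicate;

public class InclusiveGatewaySketch {
    // Follow every conditional flow whose condition holds; if none
    // fire, take the default flow instead.
    static List<String> outgoing(Map<String, Predicate<Integer>> conditionalFlows,
                                 String defaultFlow, int amount) {
        List<String> taken = new ArrayList<>();
        conditionalFlows.forEach((name, condition) -> {
            if (condition.test(amount)) taken.add(name);
        });
        if (taken.isEmpty()) taken.add(defaultFlow);
        return taken;
    }

    // Two invented conditional flows out of an 'approve order' gateway.
    static Map<String, Predicate<Integer>> sampleFlows() {
        Map<String, Predicate<Integer>> flows = new LinkedHashMap<>();
        flows.put("managerApproval", amount -> amount > 1000);
        flows.put("fraudCheck", amount -> amount > 5000);
        return flows;
    }

    public static void main(String[] args) {
        System.out.println(outgoing(sampleFlows(), "autoApprove", 200));
        System.out.println(outgoing(sampleFlows(), "autoApprove", 6000));
    }
}
```

A small order takes only the default flow; a large one follows both conditional paths, which would later be joined back together at the matching gateway.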

Import models from Visio and other tools

Many organisations have created some process documentation using Microsoft Visio and want to be able to reuse that investment.  With the release of Oracle BPM Suite 11.1.1.4, Oracle has added the ability to import process diagrams from Visio, or other tools that can export in XPDL format, into BPM.

Many of these tools allow you to include multiple BPMN ‘pools’ on the same diagram.  The import facility gives you the option of importing the pools as separate process models or combining them into a single process model.  Multi-tabbed diagrams can also be imported, with each tab becoming a separate process model.

The import is tested with a variety of open source and commercial modeling tools which support XPDL export.

Business Catalog

BPM includes a ‘business catalog’ which contains shared artifacts like services, data definitions, business exceptions, event definitions and rules.  The business catalog promotes reuse and collaboration between integration developers and process modelers. It allows you to easily adopt a top-down (starting with the process flow) or bottom-up (starting with the services, data, interface definitions) approach to process modeling.

Process Templates

Both BPEL and BPMN have a ‘template’ mechanism which allows you to define a base process and a number of variations. These mechanisms work slightly differently but both provide a similar kind of capability. The template mechanism for BPMN is more geared to allow ‘business users’ to participate in the definition of variations.

The BPM Project Template mechanism allows business users to customise processes in the Process Composer (web-based modeling environment, more on this later) within certain constraints.  The constraints help to promote communication, governance and control of process customisations.

Templates include selected components like human tasks, services, business objects and of course the process flow.   Business/process analysts can reuse templates to create new processes or to modify existing processes, and can even deploy their customisations directly to the runtime environment without ever touching JDeveloper.

Of course, if you want to enable this capability, you should be careful to ensure that the customisations allowed will not require any additional integration developer work to implement them – that is, you should probably be using a ‘bottom up’ approach where the process analysts create the models from a set of well tested services and other components.

Simulation

Oracle BPM Suite, specifically the ‘design time’ environment in JDeveloper (sometimes called ‘BPM Studio’), adds the ability to simulate a process before actually implementing and deploying it.

Simulation is the use of a mathematical model to predict how the process will behave in terms of time, cost and resource utilisation. Comparison of simulations with different parameters allows us to make some informed decisions about the design of the process and things like appropriate staffing levels for human tasks which are involved in the process.

When we define a simulation, we provide various parameters including the ‘arrival rate’ (or ‘creation rate’) for new instances, the ‘service time’ for each activity, which resources are required to perform each activity, the capacity of each class of resource (e.g. how many people we have in that role,) the probability that each path out of a decision point will be followed, and so on. Generally, these parameters can be provided as a ‘scalar’ value or a statistical distribution with the appropriate parameters. For example, we may define the arrival rate as a normal distribution with mean 50 and standard deviation 3.

The simulations can be animated on screen (as shown below). The animation shows each packet of work moving through the process. It also makes queuing (bottlenecks) obvious by showing queues develop before activities as instances wait for service. Queues often occur when there are not enough resources available (free) to process the amount of work arriving. Simulation animation provides a simple and effective way to clearly demonstrate bottlenecks in a process to business people like sponsors. It also provides a convenient and simple way to demonstrate the impact of changing something in the process, e.g. adding some more resources or changing the order of some activities in the process.

[Animated simulation of a process, showing queues building up in front of activities]

All of the raw data produced by the simulation engine can be saved and exported for use in other analysis tools, like Excel for example, and to make charts and tables for documents like business cases.  Simulation is another benefit of ‘upgrading’ to Oracle BPM Suite.
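The core idea behind such a simulation can be sketched in a few lines of Java.  This toy discrete-time model (nothing like the real simulation engine) draws arrivals from a normal distribution, as in the mean 50 / standard deviation 3 example above, and shows how a backlog builds when capacity is below the mean arrival rate.

```java
import java.util.Random;

public class SimulationSketch {
    // Arrivals per period are drawn from a normal distribution; whatever
    // the activity cannot serve in a period queues up in front of it.
    static int backlogAfter(int periods, double mean, double stdDev,
                            int capacityPerPeriod, long seed) {
        Random rnd = new Random(seed);
        int queue = 0;
        for (int t = 0; t < periods; t++) {
            int arrivals = (int) Math.round(mean + stdDev * rnd.nextGaussian());
            queue = Math.max(0, queue + arrivals - capacityPerPeriod);
        }
        return queue;
    }

    public static void main(String[] args) {
        System.out.println("Backlog with capacity 45: " + backlogAfter(100, 50.0, 3.0, 45, 7L));
        System.out.println("Backlog with capacity 55: " + backlogAfter(100, 50.0, 3.0, 55, 7L));
    }
}
```

With capacity below the mean arrival rate (45 vs. 50) the backlog grows steadily; with capacity above it (55) the queue stays near zero. That is the same bottleneck behaviour the animation makes visible.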

Business-friendly process modeling and discovery

Oracle BPM Suite includes a web-based process modeling capability called ‘Process Composer.’ Process Composer allows business users to easily access and review BPMN process models from a web browser without the need to install any special software. In addition to viewing the models, users with appropriate privileges are also able to change models and create new models.

[The Process Composer web-based modeling environment]

Models can be easily synchronised between the web-based business user-friendly modeling environment and the design time tools used by integration developers who complete the implementation of the process models and prepare them for deployment.

‘Process discovery’ is a vitally important aspect of a successful BPM project.  Often, people may assume that process discovery means detailed workflow modeling of a process.  While the detailed workflow is important, it is just one part of process discovery – and not even the most essential one.

The most important things you need to understand about your processes during discovery are the key activities, milestones, responsibilities, resource requirements, problems affecting performance and the key goals and measures of the process.

Every time I have sat down with a group of business people and modeled a business process it has been abundantly apparent that the stakeholders do not have a common, agreed understanding of the process.  Usually I find that people like senior management, executives and process owners have a better understanding of the goals and measures, and how the process interacts with other processes in other parts of the business.  However, it is the process participants, the people who actually carry out the process on a day to day basis, who have a much better understanding of how things are actually done, often why they are done that way, and what the problems are.

In order for a BPM project to be successful, it is essential that you consult all the relevant stakeholders and that you drive towards a consensus.  This is the essence of what we call ‘process discovery’ and the business-friendly web-based modeling capability provided in Process Composer is a key enabler of the clear communication necessary to make this a reality.

Remember that automating garbage just gives you automated garbage.  The process models that are produced through discovery and consensus are not just documentation.  They are the actual requirements that get handed over to integration developers.  They represent well considered, agreed and tested (through simulation) requirements for a business process.  Process discovery not only helps to get better quality requirements, but it also reduces rework (removing some of the dependency on interpretation) and improves communication across the board.

Activity Guides

For many human-centric processes, conventional ‘work lists’ and BPMN diagrams are not the most intuitive way to present tasks and progress through the process to business users.  The ‘work list’ metaphor can make it difficult for users to understand where they are in the overall end-to-end flow of the process.  Without this context it can be difficult for them to give customers or constituents advice about the progress, next steps and expected completion time for the process.

To address this, Oracle created the notion of ‘guided business processes,’ in which process designers define milestones in the BPMN process model and users interact with the process through an alternative user interface called an ‘activity guide’ that tracks progress against those milestones.

image

The diagram above shows an example of an activity guide for a ‘new hire’ process.  Activity guides can have quite rich user interfaces and provide a lot more context to the user.

Process-oriented collaboration

Oracle BPM Suite provides the ability for business users to easily create a team space to facilitate collaboration around either a process or even a specific instance of a process. These ‘process spaces’ can be created with the click of a button, in just a few moments, without any need for assistance from IT staff.

The self-service provisioned process spaces are built from templates which can be easily customized to suit your needs and give business users access to information about the process/instance and collaboration tools like presence awareness, instant messaging, email, shared document libraries, threaded discussion forums, lists and shared calendars.

Process spaces, like the example shown below, are a simple and cost effective way of facilitating collaboration amongst communities of interest or project teams. Because they are all stored on the central server, they are easy to manage, backup and search. And the environment will integrate easily with existing directories like LDAP and Active Directory.

clip_image006

Process Analytics

Oracle BPM Suite includes additional support for automatically generated analytics and dashboards, above and beyond the ‘Monitor Express’ dashboards you may be familiar with. Business users can easily create these dashboards. They can add various charts to display information about the performance of processes they care about.

Business/process analysts can include ‘business indicators’ in their process model.  These can be used to count how often an activity occurs, take note of the value of some instance data, or measure the time between points in the process.

From the business indicators that process modelers include in their models, BPM will automatically create ‘process cubes’ which are star schemas containing ‘real time’ data about the process performance and support OLAP-style reporting and business intelligence using the defined dimensions and measures.

BPM provides a rich set of pre-defined ‘out of the box’ dashboards that can be automatically generated with just a couple of clicks.  The diagram below highlights a business indicator on the process model and a dashboard.  Additionally, you can easily use a more comprehensive and powerful business intelligence tool, like Oracle Business Intelligence or a third party tool, against the process cubes.

image

Of course, because BPMN processes are part of the SCA composite, just like BPEL processes, you can also send data out to Oracle Business Activity Monitoring if you have a need to monitor larger numbers of processes, include data from other sources as well, and/or create more complex dashboards for larger user communities.  Pre-defined dashboards are also available in BAM.  We will look at BAM in more detail later in this article.

Organisation modeling

BPMN processes are modeled in ‘swim lanes.’ Swim lanes represent participants’ roles in the process. They provide a clear visual representation of who carries out each activity in the process. The roles you define in your process model can then be easily mapped to users or groups in your corporate directory using either static or dynamic membership rules.

‘Business calendars’ can also be defined so that the process engine can understand when people in various roles will be unavailable due to holidays and operating hours. This allows expiration and escalation times specified on activities to be measured in ‘business hours’ rather than arbitrary ‘wall clock’ time, which may produce incorrect results around holidays and weekends. It also allows process participants in different time zones, and shift workers, to be handled correctly.
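
To make the difference between ‘business hours’ and ‘wall clock’ time concrete, here is a toy sketch (my own illustration, not BPM Suite code) that adds business days to a date while skipping weekends – a real business calendar would also model holidays, operating hours and time zones:

```java
import java.util.Calendar;

public class BusinessCalendar {

  // Toy sketch: add whole business days, skipping Saturdays and Sundays.
  // A real business calendar also models holidays, operating hours and
  // time zones.
  static Calendar addBusinessDays(Calendar start, int days) {
    Calendar c = (Calendar) start.clone();
    int added = 0;
    while (added < days) {
      c.add(Calendar.DAY_OF_MONTH, 1);
      int dow = c.get(Calendar.DAY_OF_WEEK);
      if (dow != Calendar.SATURDAY && dow != Calendar.SUNDAY) {
        added++;
      }
    }
    return c;
  }

  public static void main(String[] args) {
    Calendar friday = Calendar.getInstance();
    friday.set(2011, Calendar.JANUARY, 7); // a Friday
    Calendar due = addBusinessDays(friday, 2);
    // two business days after Friday the 7th is Tuesday the 11th
    System.out.println("Due on day " + due.get(Calendar.DAY_OF_MONTH)); // 11
  }
}
```

Starting from Friday 7 January 2011, adding two business days lands on Tuesday the 11th, whereas adding two calendar days would land on a Sunday.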

Built on a solid foundation

Oracle BPM Suite can be thought of as a layer on top of Oracle SOA Suite – it adds new capabilities, including those discussed already, but it also makes extensive use of the same core components that you would use when building a BPEL process. In fact, there is only actually one process engine which can run both BPEL and BPMN processes.

A key strength of Oracle BPM Suite is the extensive integration capabilities that it inherits from the very solid Oracle SOA Suite foundation it is built on. Let’s take a tour through some of the other similarities to discover the depth of integration.

Oracle SOA Suite, and by extension Oracle BPM Suite, is based on the Service Component Architecture (SCA) standard which provides a language independent way of assembling ‘service components’ to create a ‘composite application.’ The composite is the unit which can be built, deployed, tested and managed. It is built using an assembly diagram like the one shown in the diagram below.

clip_image012

The ‘service components’ can have various implementation styles. They may be BPEL processes, BPMN processes, rules, mediators, human tasks and so on. You can also see references to external components on the right hand side of the composite diagram. These are the various services that are used (consumed) by this composite. They are often provided by JCA adapters or are web services. The lines (called ‘wires’) between the service components indicate usage, not sequence.

Test suites can be defined at the composite level. Test suites are made up of test cases. Test cases can provide simulated inputs and check for the outcome of the composite’s processing. Services can be simulated if necessary, for example if they do not exist yet or if they do not have dedicated accounts or instances available for testing.

Within a composite you can freely mix and match processes that are modeled in BPMN and BPEL. Each can call the other as a sub-process or service.

BPEL and BPMN processes are both first-class citizens in a composite. Both can be exposed as a web service or using other binding styles, both can create and consume human tasks, both can call (consume) business rules, both can use JCA adapters to integrate with external systems, both have synchronous and asynchronous invocation styles.

Both are monitored and managed in exactly the same way. Both use the ‘execution context ID (ECID)’ for instance tracking. This allows you to view details of an instance of a composite and drill down through the instance to see all of the service components involved, regardless of implementation style. You can even drill right down to view the messages sent between them and variables updated in each activity in a process. The diagram below shows an example of drilling down into an instance of a composite and then into a service component in that composite that happens to be a BPMN process. You can see the green highlighting on the process model that tells us where the execution of the process instance is currently.

clip_image014

Both BPEL and BPMN processes can be secured and have logging and auditing policies applied to them using Oracle Web Services Manager, the component of Oracle SOA Suite that is responsible for policy-based management and security of composites.

Oracle Business Activity Monitoring (BAM) is a component of Oracle SOA Suite (and therefore Oracle BPM Suite also) that allows you to create comprehensive dashboards for reporting which are updated in ‘real time.’

BAM is different to the process analytics mentioned earlier in a few key aspects:

  • BAM dashboards can take input from many sources, not just performance metrics attached to processes,
  • They are automatically updated in ‘real time’ by a ‘push’-based update mechanism, i.e. the user does not need to ‘refresh’ them,
  • They can show consolidated metrics across a number of processes, services or other data sources,
  • You can define thresholds and alerts, and
  • You can display data using time series.

clip_image016

Oracle SOA Suite also includes a business to business engine called Oracle B2B that supports many common B2B protocols like AS2, EDI and RosettaNet for example. It handles issues like authentication, guaranteed delivery and non-repudiation in the business to business messaging context. B2B integrations manifest as adapter references in a composite and can be wired to BPEL and BPMN processes equally.

JDeveloper provides more and less technical views of process diagrams for both BPEL and BPMN. The less technical view is called a ‘blueprint.’ These can be used to facilitate exchange of models with other process modeling tools.

Footnote

If you are reading this in early 2011, just after the release of 11.1.1.4, then there are a couple of things that may currently be slightly easier to model in BPEL. If you have a process that needs these kinds of capabilities, you might want to consider modeling it in BPEL.

The first is compensation. Compensation is the issuing of ‘reversal’ transactions to undo work that was previously done and committed. A business process can run for a long time (hours, days, even weeks) – far too long to hold a transaction open. BPEL has excellent support for compensation built in to the language and it is easy to model compensation in your processes. This also means that the process engine will know when it is running forwards through a process and when it is compensating.

It is of course possible to build compensation logic into a BPMN process, though the directed graph nature of BPMN can make compensation a little more complicated to define because there are potentially many more cases that you need to cater for.  It is perhaps better to model those parts of your overall business process that may need to be compensated in BPEL and call those BPEL processes from your overall BPMN business level process.

Correlation is another consideration. Correlation is the ability for a process which calls a service asynchronously to identify the corresponding response (‘callback’) from that service. This is especially important in loops or parallel execution, or when many instances of a process will be running concurrently. BPEL provides native correlation set support in the language, which allows you to define the keys to use to identify the correct response. BPMN provides correlation through its support of WS-Addressing correlation.

Update: These considerations are moot now that the BPM 11.1.1.5 ‘Feature Pack’ has been released.  See this post for more information.

Acknowledgements

A big thank you to Manoj Das, Robert Patrick, Dave Shaffer and Meera Srinivasan for their time and their most helpful input and suggestions.

Claudio Ivaldi created a presentation using information in this post, and it is available from here.

Posted in Uncategorized | Tagged , , , , , | 8 Comments

Improving XQuery performance

XQuery transformations are often used in pipelines in Oracle Service Bus to perform data transformation.  Performance of XQuery transformations is a key area to focus on when performance tuning an Oracle Service Bus environment.

During performance testing of your project, you should use the activity timing metrics to identify any XQuery performance issues and optimize those queries.  The recommended approach is to measure the average time taken to run each query, then multiply this by the number of times you expect that query to be used over some defined time period, e.g. one hour.  Sort the results from largest to smallest, then work down the list, optimizing as many queries as necessary to reach the required performance – but only where a significant gain is likely.  Optimization usually means looking for opportunities to reorder or modify statements so that they execute faster.
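
The prioritisation step described above is just simple arithmetic; here is a sketch in Java (the query names, timings and call counts below are invented for illustration):

```java
import java.util.*;

public class QueryCostRanking {

  // Rank queries by total cost per hour = average time x expected invocations
  static List<String> rank(Map<String, Double> avgMillis,
                           Map<String, Integer> callsPerHour) {
    final Map<String, Double> cost = new HashMap<String, Double>();
    for (Map.Entry<String, Double> e : avgMillis.entrySet()) {
      cost.put(e.getKey(), e.getValue() * callsPerHour.get(e.getKey()));
    }
    List<String> ranked = new ArrayList<String>(cost.keySet());
    Collections.sort(ranked, new Comparator<String>() {
      public int compare(String a, String b) {
        return Double.compare(cost.get(b), cost.get(a)); // largest first
      }
    });
    return ranked;
  }

  public static void main(String[] args) {
    // Hypothetical measurements for illustration only
    Map<String, Double> avgMillis = new HashMap<String, Double>();
    avgMillis.put("mapOrder.xq", 12.0);
    avgMillis.put("mapInvoice.xq", 3.5);
    avgMillis.put("mapCustomer.xq", 0.8);
    Map<String, Integer> callsPerHour = new HashMap<String, Integer>();
    callsPerHour.put("mapOrder.xq", 1000);
    callsPerHour.put("mapInvoice.xq", 20000);
    callsPerHour.put("mapCustomer.xq", 50000);
    // mapInvoice (70,000 ms/hour) outranks mapCustomer (40,000)
    // and mapOrder (12,000), even though mapOrder is slowest per call
    System.out.println(rank(avgMillis, callsPerHour));
  }
}
```

Note that the slowest individual query is not necessarily the one worth optimizing first – total time across all expected invocations is what matters.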

XQuery performance should be tested with large payloads whenever possible, or at least with many invocations of the same transformation and results averaged.

Some general guidelines for improving the performance of XQuery transformations are as follows:

  • Avoid the use of double slashes (“//”) at the beginning of XPath expressions.  They should only be used if the exact location of a node is not known at design time.  Use of “//” will force the entire payload to be read and parsed.
  • Index XPath expressions where applicable.  For example, if you know that there is only one “Order” and only one “Address” then using an XPath expression like “$body/Order[1]/Address[1]” instead of “$body/Order/Address” will minimize the amount of the payload that needs to be parsed.  Do not use this approach if the expected return value is a list of nodes.
  • Extract frequently used parts of a large XML document as intermediate variables.  This will consume more memory, but will reduce redundant XPath processing.  For example:
    let $customer := $body/Order[1]/CustomerInfo[1]
    return ($customer/ID, $customer/Status)
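
The same ‘extract the common subexpression’ idea can be demonstrated outside Oracle Service Bus using the JDK’s standard javax.xml.xpath API (this illustrates the principle only – it is not OSB’s XQuery engine, and the Order document below is made up):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class XPathDemo {

  // Evaluate the shared subexpression once, then run the short relative
  // paths against the extracted node instead of the whole document.
  static String customerSummary(String xml) throws Exception {
    Document doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder()
        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    XPath xp = XPathFactory.newInstance().newXPath();
    Node customer = (Node) xp.evaluate(
        "/Order[1]/CustomerInfo[1]", doc, XPathConstants.NODE);
    return xp.evaluate("ID", customer) + " " + xp.evaluate("Status", customer);
  }

  public static void main(String[] args) throws Exception {
    String xml = "<Order><CustomerInfo><ID>42</ID>"
               + "<Status>GOLD</Status></CustomerInfo></Order>";
    System.out.println(customerSummary(xml)); // 42 GOLD
  }
}
```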
Posted in Uncategorized | Tagged , , , | Leave a comment

Visualising Garbage Collection in the JVM

Recently, I have been working with a number of customers on JVM tuning exercises.  It seems that there is not widespread knowledge amongst developers and administrators about how garbage collection works, and how the JVM uses memory.  So, I decided to write a very basic introduction and an example that will let you see it happening in real time!  This post does not try to cover everything about garbage collection or JVM tuning – that is a huge area, and there are some great resources on the web already, only a Google away.

This post is about the HotSpot JVM – that’s the ‘normal’ JVM from Oracle (previously Sun).  It is the one you would most likely use on Windows.  If you are using a Linux variant that errs on the side of free software (like Ubuntu), you might have an open source JVM.  Or if your JVM came with another product, like WebLogic, you may even have the JRockit JVM from Oracle (formerly BEA).  And then there are other JVMs from IBM, Apple and others.  Most of these other JVMs work in a similar way to HotSpot, with the notable exception of JRockit, which handles memory differently, and does not have a separate Permanent Generation (see below) for example.

First, let’s take a look at the way the JVM uses memory.  There are two main areas of memory in the JVM – the ‘Heap’ and the ‘Permanent Generation.’  In the diagram below, the permanent generation is shown in green.  The remainder (to the left) is the heap.

The Permanent Generation

The permanent generation is used only by the JVM itself, to keep data that it requires.  You cannot place your own data in the permanent generation.  The main thing the JVM uses this space for is metadata about the classes your application uses.  So every time a new class is loaded, the JVM will store some information in the permanent generation – and the more classes your application loads, the more room you need there.

The size of the permanent generation is controlled by two JVM parameters.  -XX:PermSize sets the minimum, or initial, size of the permanent generation, and -XX:MaxPermSize sets the maximum size.  When running large Java applications, we often set these two to the same value, so that the permanent generation will be created at its maximum size initially.  This can improve performance because resizing the permanent generation is an expensive (time consuming) operation.  If you set these two parameters to the same size, the JVM never has to check whether the permanent generation needs resizing, or actually perform a resize.

The Heap

The heap is the main area of memory.  This is where all of your objects will be stored.  The heap is further divided into the ‘Old Generation’ and the ‘New Generation.’  The new generation in turn is divided into ‘Eden’ and two ‘Survivor’ spaces.

The size of the heap is also controlled by JVM parameters.  You can see on the diagram above that the heap size is -Xms at minimum and -Xmx at maximum.  Additional parameters control the sizes of the various parts of the heap.  We will see one of those later on; the others are beyond the scope of this post.

When you create an object, e.g. when you say byte[] data = new byte[1024], that object is created in the area called Eden.  In addition to the data for the byte array itself, there will also be a reference (pointer) called ‘data’ that points to it.

The following explanation has been simplified for the purposes of this post.  When you want to create a new object, and there is not enough room left in eden, the JVM will perform ‘garbage collection.’  This means that it will look for any objects in memory that are no longer needed and get rid of them.

Garbage collection is great!  If you have ever programmed in a language like C or Objective-C, you will know that managing memory yourself is somewhat tedious and error prone.  Having the JVM automatically find unused objects and get rid of them for you makes writing code much simpler and saves a lot of time debugging.  If you have never used a language that does not have garbage collection – you might want to go write a C program – it will certainly help you to appreciate what you are getting from your language for free!

There are in fact a number of different algorithms that the JVM may use to do garbage collection.  You can control which algorithms are used by changing the JVM parameters.

Let’s take a look at an example.  Suppose we do the following:

String a = "hello";
String b = "apple";
String c = "banana";
String d = "apricot";
String e = "pear";
//
// do some other things
//
a = null;
b = null;
c = null;
e = null;

This will cause five objects to be created, or ‘allocated,’ in eden, as shown by the five yellow boxes in the diagram below.  After we have done ‘some other things,’ we free a, b, c and e – by setting the references to null.  Assuming there are no other references to these objects, they will now be unused.  They are shown in red in the second diagram.  We are still using String d, it is shown in green.

If we try to allocate another object, the JVM will find that eden is full, and that it needs to perform garbage collection.  The simplest garbage collection algorithm is called ‘Copy Collection.’  It works as shown in the diagram above.  In the first phase (‘Mark’) it will mark (illustrated by the red colour) the unused objects.  In the second phase (‘Copy’) it will copy the objects we still need (i.e. d) into a ‘survivor’ space – the little box on the right.  There are two survivor spaces and they are smaller than eden.  Now that all the objects we want to keep are safe in the survivor space, it can simply delete everything in eden, and it is done.
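
As a toy model of what a copy collection does (my own sketch – the real collector works on raw memory, not arrays), imagine eden and a survivor space as arrays in which null slots are garbage:

```java
import java.util.Arrays;

public class CopyCollectDemo {

  // Toy illustration of a copy collection: live objects (non-null slots)
  // are copied out of 'eden' into a smaller 'survivor' space, then eden
  // is wiped in one go.
  static void copyCollect(Object[] eden, Object[] survivor) {
    int next = 0;
    for (Object o : eden) {
      if (o != null) {
        survivor[next++] = o; // copy phase: keep only live objects
      }
    }
    Arrays.fill(eden, null); // delete everything in eden
  }

  public static void main(String[] args) {
    // a, b, c and e already collected conceptually; only d is live
    Object[] eden = {null, null, null, "d", null};
    Object[] survivor = new Object[2]; // survivor spaces are smaller than eden
    copyCollect(eden, survivor);
    System.out.println(survivor[0]); // d
  }
}
```

This is why copy collection is cheap when most of eden is garbage: the cost is proportional to the live objects copied, not to the garbage discarded.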

This kind of garbage collection creates something known as a ‘stop the world’ pause.  While the garbage collection is running, all other threads in the JVM are paused.  This is necessary so that no thread tries to change memory after we have copied it, which would cause us to lose the change.  This is not a big problem in a small application, but if we have a large application, say with an 8GB heap for example, then it could actually take a significant amount of time to run this algorithm – seconds or even minutes.  Having your application stop for a few minutes every now and then is not suitable for many applications.  That is why other garbage collection algorithms exist and are often used.  Copy Collection works well when there is a relatively large amount of garbage and a small amount of used objects.

In this post, we will just discuss two of the commonly used algorithms.  For those who are interested, there is plenty of information available online and several good books if you want to know more!

The second garbage collection algorithm we will look at is called ‘Mark-Sweep-Compact Collection.’  This algorithm uses three phases.  In the first phase (‘Mark’), it marks the unused objects, shown below in red.  In the second phase (‘Sweep’), it deletes those objects from memory.  Notice the empty slots in the diagram below.  Then in the final phase (‘Compact’), it moves objects to ‘fill up the gaps,’ thus leaving the largest amount of contiguous memory available in case a large object is created.
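
The compact phase can be pictured with the same kind of toy array model (again my own sketch, not the real collector): slide the surviving objects towards the front so that the free space at the end is one contiguous block.

```java
import java.util.Arrays;

public class CompactDemo {

  // Toy illustration of the 'compact' phase: after the sweep has left
  // gaps (null slots), live objects are slid towards the front so the
  // remaining free space is contiguous at the end.
  static void compact(Object[] heap) {
    int next = 0;
    for (int i = 0; i < heap.length; i++) {
      if (heap[i] != null) {
        heap[next++] = heap[i];
      }
    }
    // clear everything after the last live object
    while (next < heap.length) {
      heap[next++] = null;
    }
  }

  public static void main(String[] args) {
    Object[] heap = {"d", null, "x", null, "y"}; // nulls left by the sweep
    compact(heap);
    System.out.println(Arrays.toString(heap)); // [d, x, y, null, null]
  }
}
```

After compaction a large new object can be allocated even though, before it, no single gap was big enough.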

So far this is all theoretical – let’s take a look at how this actually works with a real application.  Fortunately, the JDK includes a nice visual tool for watching the behaviour of the JVM in ‘real time.’  This tool is called jvisualvm.  You should find it right there in the bin directory of your JDK installation.  We will use that a little later, but first, let’s create an application to test.

I used Maven to create the application and manage the builds and dependencies and so on.  You don’t need to use Maven to follow this example.  You can go ahead and type in the commands to compile and run the application if you prefer.

I created a new project using the Maven archetype generate goal:

mvn archetype:generate
  -DarchetypeGroupId=org.apache.maven.archetypes
  -DgroupId=com.redstack
  -DartifactId=memoryTool

I took type 98 – for a simple JAR – and the defaults for everything else.  Next, I changed into my memoryTool directory and edited my pom.xml as shown below.  I just added the exec-maven-plugin entry.  That will allow me to run my application directly from Maven, passing in some memory configuration and garbage collection logging parameters.

<project xmlns="http://maven.apache.org/POM/4.0.0" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.redstack</groupId>
  <artifactId>memoryTool</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>memoryTool</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <configuration>
          <executable>java</executable>
          <arguments>
            <argument>-Xms512m</argument>
            <argument>-Xmx512m</argument>
            <argument>-XX:NewRatio=3</argument>
            <argument>-XX:+PrintGCTimeStamps</argument>
            <argument>-XX:+PrintGCDetails</argument>
            <argument>-Xloggc:gc.log</argument>
            <argument>-classpath</argument>
            <classpath/>
            <argument>com.redstack.App</argument>
          </arguments>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

If you prefer not to use Maven, you can start the application using the following command:

java -Xms512m -Xmx512m -XX:NewRatio=3 
  -XX:+PrintGCTimeStamps -XX:+PrintGCDetails
  -Xloggc:gc.log -classpath <whatever>
  com.redstack.App

The switches are telling the JVM the following:

  • -Xms sets the initial/minimum heap size to 512 MB
  • -Xmx sets the maximum heap size to 512 MB
  • -XX:NewRatio sets the size of the old generation to three times the size of the new generation
  • -XX:+PrintGCTimeStamps, -XX:+PrintGCDetails and -Xloggc:gc.log cause the JVM to print out additional information about garbage collection into a file called gc.log
  • -classpath tells the JVM where to look for your program
  • com.redstack.App is the name of the main class to execute
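
To see what -XX:NewRatio=3 means for the actual sizes, note that the heap is divided into one part new generation and three parts old generation; the arithmetic is easy to sketch:

```java
public class HeapSizing {

  // With -XX:NewRatio=N, the old generation is N times the size of the
  // new generation, so the heap splits into (N + 1) equal parts:
  // one for the new generation and N for the old.
  static long newGenMB(long heapMB, int newRatio) {
    return heapMB / (newRatio + 1);
  }

  static long oldGenMB(long heapMB, int newRatio) {
    return heapMB - newGenMB(heapMB, newRatio);
  }

  public static void main(String[] args) {
    System.out.println("New gen: " + newGenMB(512, 3) + " MB"); // 128 MB
    System.out.println("Old gen: " + oldGenMB(512, 3) + " MB"); // 384 MB
  }
}
```

With our 512 MB heap this gives a 128 MB new generation and a 384 MB old generation, which is consistent with the ParNew capacity of roughly 115 MB (eden plus one survivor space) that appears in the garbage collection log later in this post.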

I have chosen these options so that you can see pretty clearly what is going on and you won’t need to spend all day creating objects to make something happen!

Here is the code in that main class.  This is a simple program that will allow us to create objects and throw them away easily, so we can understand how much memory we are using, and watch what the JVM does with it.

package com.redstack;

import java.io.*;
import java.util.*;

public class App {

  private static List<byte[]> objects = new ArrayList<byte[]>();
  private static boolean cont = true;
  private static String input;
  private static BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

  public static void main(String[] args) throws Exception {
    System.out.println("Welcome to Memory Tool!");

    while (cont) {
      System.out.println(
        "\n\nI have " + objects.size() + " objects in use, about " +
        (objects.size() * 10) + " MB." +
        "\nWhat would you like me to do?\n" +
        "1. Create some objects\n" +
        "2. Remove some objects\n" +
        "0. Quit");
      input = in.readLine();
      if ((input != null) && (input.length() >= 1)) {
        if (input.startsWith("0")) cont = false;
        if (input.startsWith("1")) createObjects();
        if (input.startsWith("2")) removeObjects();
      }
    }

    System.out.println("Bye!");
  }

  private static void createObjects() {
    System.out.println("Creating objects...");
    for (int i = 0; i < 2; i++) {
      objects.add(new byte[10*1024*1024]);
    }
  }

  private static void removeObjects() {
    System.out.println("Removing objects...");
    int start = objects.size() - 1;
    int end = start - 2;
    for (int i = start; ((i >= 0) && (i > end)); i--) {
      objects.remove(i);
    }
  }
}

If you are using Maven, you can build, package and execute this code using the following command:

mvn package exec:exec

Once you have this compiled and ready to go, start it up, and fire up jvisualvm as well.  You might like to arrange your screen so you can see both, as shown in the image below.  If you have never used JVisualVM before, you will need to install the VisualGC plugin.  Select Plugins from the Tools menu.  Open the Available Plugins tab.  Place a tick next to the entry for Visual GC.  Then click on the Install button.  You may need to restart JVisualVM afterwards.

Back in the main panel, you should see a list of JVM processes.  Double click on the one running your application, com.redstack.App in this example, and then open the Visual GC tab.  You should see something like what is shown below.

Notice that you can visually see the permanent generation, the old generation and eden and the two survivor spaces (S0 and S1).  The coloured bars indicate memory in use.  On the right hand side, you can also see a historical view that shows you when the JVM spent time performing garbage collections, and the amount of memory used in each space over time.

In your application window, start creating some objects (by selecting option 1).  Watch what happens in Visual GC.  Notice how the new objects always get created in eden.  Now throw away some objects (option 2).  You will probably not see anything happen in Visual GC.  That is because the JVM will not clean up that space until a garbage collection is performed.

To make it do a garbage collection, create some more objects until eden is full.  Notice what happens when you do this.  If there is a lot of garbage in eden, you should see the objects in eden move to a survivor space.  However, if eden had little garbage, you will see the objects in eden move to the old generation.  This happens when the objects you need to keep are bigger than the survivor space.

Notice as well that the permanent generation grows slowly as the application runs and loads additional classes.

Try almost filling eden, don’t fill it completely, then throw away almost all of your objects – just keep 20MB.  This will mean that eden is mostly full of garbage.  Then create some more objects.  This time you should see the objects in eden move into the survivor space.

Now, let’s see what happens when we run out of memory.  Keep creating objects until you have around 460MB.  Notice that both eden and the old generation are nearly full.  Create a few more objects.  When there is no more space left, your application will crash with a java.lang.OutOfMemoryError.  You might have got those before and wondered what causes them – especially if you have a lot more physical memory on your machine, you may have wondered how you could possibly be ‘out of memory’ – now you know!  If you happen to fill up your permanent generation (which will be pretty difficult to do in this example) you would get a different message, telling you ‘PermGen space’ was full.
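
If you would like to watch memory from inside your own code as well as in Visual GC, the Runtime class exposes the same basic numbers (a rough sketch – the exact figures depend on when garbage collections happen):

```java
public class HeapWatcher {

  public static void main(String[] args) {
    Runtime rt = Runtime.getRuntime();
    // maxMemory corresponds to -Xmx; totalMemory is the heap currently
    // committed; freeMemory is the unused part of the committed heap
    System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
    System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");

    long usedBefore = rt.totalMemory() - rt.freeMemory();
    byte[] block = new byte[10 * 1024 * 1024]; // 10 MB, allocated in eden
    long usedAfter = rt.totalMemory() - rt.freeMemory();

    System.out.println("used grew by roughly "
        + (usedAfter - usedBefore) / (1024 * 1024) + " MB");
    // keep a live reference so the array cannot be collected yet
    System.out.println("still holding " + block.length + " bytes");
  }
}
```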

Finally, another way to look at this data is in that garbage collection log we asked for.  Here are the first few lines from one run on my machine:

13.373: [GC 13.373: [ParNew: 96871K->11646K(118016K), 0.1215535 secs] 96871K->73088K(511232K), 0.1216535 secs] [Times: user=0.11 sys=0.07, real=0.12 secs]
16.267: [GC 16.267: [ParNew: 111290K->11461K(118016K), 0.1581621 secs] 172732K->166597K(511232K), 0.1582428 secs] [Times: user=0.16 sys=0.08, real=0.16 secs]
19.177: [GC 19.177: [ParNew: 107162K->10546K(118016K), 0.1494799 secs] 262297K->257845K(511232K), 0.1495659 secs] [Times: user=0.15 sys=0.07, real=0.15 secs]
19.331: [GC [1 CMS-initial-mark: 247299K(393216K)] 268085K(511232K), 0.0007000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
19.332: [CMS-concurrent-mark-start]
19.355: [CMS-concurrent-mark: 0.023/0.023 secs] [Times: user=0.01 sys=0.01, real=0.02 secs]
19.355: [CMS-concurrent-preclean-start]
19.356: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
19.356: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 24.417: [CMS-concurrent-abortable-preclean: 0.050/5.061 secs] [Times: user=0.10 sys=0.01, real=5.06 secs]
24.417: [GC[YG occupancy: 23579 K (118016 K)]24.417: [Rescan (parallel) , 0.0015049 secs]24.419: [weak refs processing, 0.0000064 secs] [1 CMS-remark: 247299K(393216K)] 270878K(511232K), 0.0016149 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.419: [CMS-concurrent-sweep-start]
24.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.420: [CMS-concurrent-reset-start]
24.422: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.711: [GC [1 CMS-initial-mark: 247298K(393216K)] 291358K(511232K), 0.0017944 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
24.713: [CMS-concurrent-mark-start]
24.755: [CMS-concurrent-mark: 0.040/0.043 secs] [Times: user=0.08 sys=0.00, real=0.04 secs]
24.755: [CMS-concurrent-preclean-start]
24.756: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.756: [CMS-concurrent-abortable-preclean-start]
25.882: [GC 25.882: [ParNew: 105499K->10319K(118016K), 0.1209086 secs] 352798K->329314K(511232K), 0.1209842 secs] [Times: user=0.12 sys=0.06, real=0.12 secs]
26.711: [CMS-concurrent-abortable-preclean: 0.018/1.955 secs] [Times: user=0.22 sys=0.06, real=1.95 secs]
26.711: [GC[YG occupancy: 72983 K (118016 K)]26.711: [Rescan (parallel) , 0.0008802 secs]26.712: [weak refs processing, 0.0000046 secs] [1 CMS-remark: 318994K(393216K)] 391978K(511232K), 0.0009480 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]

You can see from this log what was happening in the JVM.  Notice it shows that the Concurrent Mark Sweep (CMS) collector was being used for the old generation, together with the parallel ‘ParNew’ collector for the young generation.  You can see when the different phases ran.  Also, near the bottom, notice it is showing us the ‘YG’ (young generation) occupancy.

You can leave those same three settings on in production environments to produce this log.  There are even some tools available that will read these logs and show you what was happening visually.

Well, that was a short, and by no means exhaustive, introduction to some of the basic theory and practice of JVM garbage collection.  Hopefully the example application helped you to clearly visualise what happens inside the JVM as your applications run.

Thanks to Rupesh Ramachandran who taught me many of the things I know about JVM tuning and garbage collection.

Posted in Uncategorized | Tagged , , , , | 16 Comments