Correlation in BPM

Arun Pareek has a very detailed post about the new correlation capabilities introduced in the BPM 11.1.1.5 ‘Feature Pack’ patch – I recommend taking a look.


Building a job dispatcher in BPM -or- Iterating over collections in BPM

Recently I was working with a customer who wanted to create a kind of ‘job dispatcher’ in BPM – basically a process that could take an array of things as input and do something different to each thing, depending on what type of thing it is.

This seems like a fairly common pattern, so I thought I would make it generic and share it here.

This example is built with, and requires, JDeveloper 11.1.1.5 and BPM 11.1.1.5 with patch 12413651 (also known as the ‘Feature Pack’) applied to both.  It will not work with earlier versions or without that patch applied.

Let’s start by creating a new BPM Application by selecting New from the Application menu.  Give your application a name, I called mine JobDispatcher, and select the BPM Application template as shown below.  Then click on Next.

Give your project a name, I used the same name, then click on Finish.

After a moment, the Create BPMN Process wizard will open.  Give your process a name (I called mine DispatchJobs) and choose the Asynchronous Service type as shown below.  Then click on Finish.

Before we fill out the details of the process, let’s create some data types we will need.  Select New from the File menu to open the New Gallery.  Click on the tab for All Technologies and select the XML category on the left hand side, then XML Schema on the right hand side, and click on OK.

Enter a name for your schema, I called mine jobs.xsd.  Store it in the xsd directory inside your project directory, as shown below, and add a meaningful suffix to the Target Namespace, I added jobs.  It is a good idea to always customise the target namespace to prevent namespace clashes later on.  Click on OK.

When the editor opens, you want to set up your schema like the image below.  A fast way to do this is to switch to the source view using the tab at the bottom of the editor and paste in the XSD code (below).  You can then switch back to design view.

Here is the XSD code:

<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://www.example.org/jobs"
            targetNamespace="http://www.example.org/jobs"
            elementFormDefault="qualified">
  <xsd:element name="data" type="jobsType">
    <xsd:annotation>
      <xsd:documentation>
        A sample element
      </xsd:documentation>
    </xsd:annotation>
  </xsd:element>
  <xsd:complexType name="jobType">
    <xsd:sequence>
      <xsd:element name="jobType" type="xsd:string"/>
      <xsd:element name="otherData" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
  <xsd:complexType name="jobsType">
    <xsd:sequence>
      <xsd:element name="job" type="jobType" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

This allows us to specify any number of jobs as the input to our process – we will see later how we use this data type to create the inputs for the process.  Notice that we have an arbitrary ‘otherData‘ element in our job.  In real life, you would obviously have something a bit more interesting.
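For reference, an instance document conforming to this schema would look something like the following (the values are just illustrative):

```xml
<data xmlns="http://www.example.org/jobs">
  <job>
    <jobType>A</jobType>
    <otherData>first job</otherData>
  </job>
  <job>
    <jobType>B</jobType>
    <otherData>second job</otherData>
  </job>
</data>
```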

Now let’s set up our process data objects.  Open the BPM Project Navigator from the tab or the View menu, then expand out your project and the Business Catalog as shown below.  Right click on the Business Catalog and select Module from the New menu.  Create a new Module named Data to hold your definitions.

Now right click on that Data folder that you just created and select Business Object from the New menu.

Create a new business object called CollectionOfJobs in the Data folder.  Select the checkbox next to Based on External Schema and then click on the ellipses (…) button to find the schema.  Locate the JobsType that we defined in our XSD earlier and press OK and then OK again.

Create another business object called SingleJob of type JobType  in the Data folder using the same steps.

Now we want to create a data object in our process itself.  Open your process in the main editor window, or return to it.  Click somewhere in the process editor to make it active.  The Structure view should be visible in the lower left corner.  If it is not, you can open it from the View menu.  Right click on Process Data Objects – that’s Process not Project, be careful.  Then select New from the popup menu.

Create a new data object called theJobs with type <Component> as shown below.  Click on the little magnifying glass icon and select the CollectionOfJobs business object in the Data folder that we created a moment ago.  Then click on OK.

We will use this data object to store the actual data inside the process instance.

Now, let’s get to work on the process.  Open the Component Palette by clicking on its tab, or from the View menu.  Expand out the Activities section, drag a Subprocess activity into your process model and hold it over the line between the Start and End activities.  Notice that the line turns blue when you hold another activity over the centre of it.  Drop the Subprocess activity while the line is blue.  This will add it in the middle of that line.  If you were to drop it when the line was not blue, it would just be placed on the editor canvas but not connected into the process.  Of course you can do this and add the arrows yourself if you prefer.

Make sure you get the Subprocess activity, not the Event Subprocess as these do different things.

Double click on the gray subprocess area, or right click on it and select Properties.  This will open the properties editor for the subprocess.  The subprocess activity is pretty versatile and it can be used in a variety of ways.  Here we want to kick off some unknown number of subprocesses based on the data type, in parallel, so we want to go to the Loop Characteristics tab and make the following selections.

First, set the Loop Characteristics to MultiInstance.  Next, set the Mode to Parallel.

In the Creation Type, select Collection.  This lets us iterate over a collection.  You could of course find the size of the collection and then use Cardinality and array indexes if you are that way inclined.

Side note:  The difference between cardinality and collection is a bit subtle.  Here are some Java examples that are conceptually equivalent:

List<Object> someData = getTheDataFromSomewhere();

// cardinality works like this
for (int i = 0; i < someData.size(); i++) {
  doSomethingTo(someData.get(i));
}

// collection works like this
for (Object data : someData) {
  doSomethingTo(data);
}

Now click on the little pencil icon for Loop Data Input and create an argument of type <Array> as shown below.  Select <Component> as the Element Type and then click on the little magnifying glass icon and select the SingleJob type in your Data folder.  This will create an argument which is an array of SingleJobs.

In the Expression field for the Loop Data Input click on the little calculator icon and then pick XSLT Expression and add theJobs.job from the lower left pane into the expression (or just go ahead and type it in).

Then repeat these steps to create the Loop Data Output as well.  When you are done you should see something like the following image.  Go ahead and reward yourself by clicking on OK 🙂

Now, stretch out your subprocess so we have a bit of room to put some more activities inside there.

Open up the Gateways section in the Component Palette, drag an Exclusive Gateway out into the process, and drop it on the line between the Start1 and End1 activities in the subprocess.  Also drag out two Activity activities from the Activities section and drop them into the subprocess as shown.  Call them JobA and JobB as shown below.

Go ahead and connect up your process as shown in the two diagrams below.  You can create the connections by right clicking on the first of the two activities you want to join and selecting Add default sequence flow or Add conditional sequence flow as appropriate.  Then click on the activity you want to connect it to.

Right click on each of the two Activity activities and select the option to Mark Node as Draft.  This will allow us to go ahead and deploy and execute our process before we define what these two activities will actually do.  That is fine for this example.  In real life, you would probably want to have them call another process to handle the job, a different process for each type of job.  Look for our upcoming article on correlation for some details on how to do this 🙂 I will add a link here when I post it.

You should now see something like the following image.  Right click on the line leading into JobA and select Properties.  Go to the Properties tab and in the Expression field, enter the following expression, as shown below:

inputDataItem.jobType == "A"

Do the same for the line leading to JobB but change it to look for jobType B.

The third line (marked with the little cross at the start) is the unconditional path.  It will be followed if none of the conditions evaluate to true.
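Conceptually, what we have built so far behaves like the following Java sketch. To be clear, this is just an illustration of the control flow, not code that BPM generates: the Job record and the method names are made up for this example, the parallel stream stands in for the parallel multi-instance subprocess, and the if/else chain stands in for the exclusive gateway with its two conditional flows and one unconditional (default) flow.

```java
import java.util.List;

public class JobDispatcher {

    // Mirrors the jobType complex type from our XSD (illustrative only)
    record Job(String jobType, String otherData) {}

    // The exclusive gateway: conditional flows for "A" and "B",
    // plus the unconditional path straight through to the end
    static String dispatch(Job job) {
        if ("A".equals(job.jobType())) {
            return "processed by JobA";
        } else if ("B".equals(job.jobType())) {
            return "processed by JobB";
        }
        return "passed through unprocessed";
    }

    // The parallel multi-instance subprocess: one iteration per
    // element of the collection, all running concurrently
    static List<String> dispatchAll(List<Job> jobs) {
        return jobs.parallelStream()
                   .map(JobDispatcher::dispatch)
                   .toList();
    }
}
```

Note that, as with the Collection creation type in BPM, each iteration receives one element of the collection rather than an index into it.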

Finally, let’s add our input data for the process.  Right click on the Start activity at the beginning of the process (not the one in the subprocess) and select Properties.  Go to the Implementation tab and click on the green plus icon to add an argument.  You can leave the name as argument1.  Set the Type to <Component> and select your CollectionOfJobs component as shown below.  Then click on OK in the Create Argument window.

Click on the Data Associations link and add a line to copy the data in argument1 into our process data object, theJobs, as shown below.  Then click on OK and then OK again.

Now we are ready to deploy and test our job dispatcher.  Go back to the Application navigator.  You can open it from the View menu if you don’t see it.  Right click on your project root folder and select JobDispatcher… from the Deploy option in the popup menu, as shown below.

Go ahead and deploy it on your test BPM server.  If you don’t know how to do this, it has been covered many times in earlier posts, or you should be able to work it out, it’s not too hard.

Now log on to Enterprise Manager (again how to do this is covered in earlier posts) and navigate to your newly deployed composite inside soa-infra as shown below.  Open the composite and click on the Test button.

The test page will open.   Notice that you can specify how large you want the array to be.  Type in 3 and click on the little refresh icon highlighted below.

The page will refresh and you now have room to enter three jobs.  Enter some test data as shown below.  One should have jobType  set to A, one to B, and the other to something else.  The otherData can be anything you like.  When you have all of your test data ready, click on the Test Web Service button (top right).

This will start an instance of your process.  Click on the Launch Flow Trace button to see what happened 🙂  Note: make sure your browser popup blocker does not prevent the flow trace window from opening.

In the flow trace window, click on the DispatchJobs process as shown below.

Expand out the audit trail to see what happened.  You can see here (look at the arrows) that the first job went through and was ‘processed’ by JobA as we expected, the second went to JobB and the third just went straight through and was not processed.  This is exactly what we expected.

So there you have a simple job dispatcher implemented in BPM.  Look for some follow up articles on correlation and scheduling which can be combined with this job scheduler to do some really interesting stuff.  Enjoy!


API documentation for BPM APIs is now available

You can now download the documentation for the BPM APIs from here.  This includes both the Human Workflow APIs and the BPM APIs.  The BPM APIs were previously undocumented.  I have shown how to use some of them in the custom worklist sample, but now you can read about all of them at your leisure!


See you at OpenWorld

Oracle OpenWorld 2011 is almost upon us, and we will be there.  We hope to meet some of you there!

Here are the sessions that we will be participating in, come along and say hello!

Session ID: 02523
Session Title: Continuous Integration for SOA and Business Process Management Projects
Venue / Room: Marriott Marquis – Salon 7
Date and Time: 10/5/11, 10:15 – 11:15
This session will feature Jim Clark, the architect of the build system for Fusion Applications, who will share some details on how the build and test of Fusion Applications was automated using Continuous Integration techniques.

Session ID: 12382
Session Title: Extending Oracle E-Business Suite with Oracle ADF and Oracle SOA Suite
Venue / Room: Moscone West – 2016
Date and Time: 10/6/11, 10:30 – 11:30

Session ID: 30500
Session Title: Extending Oracle E-Business Suite Processes with Oracle Business Process Management Suite
Venue / Room: Marriott Marquis – Salon 1/2
Date and Time: 10/3/11, 11:00 – 12:00

Session ID: 30520
Session Title: Build Mobile Applications for Oracle E-Business Suite
Venue / Room: Marriott Marquis – Salon 1/2
Date and Time: 10/3/11, 12:30 – 13:30

We will also be lurking in the demogrounds.  We hope to see you there!


Great new WebCenter book available from Packt

Recently, I had the opportunity to read Yannick Ongena‘s new book, Oracle WebCenter 11g PS3 Administration Cookbook, from Packt Publishing.  The book takes the form of a collection of ‘recipes’ – instructions on how to carry out common tasks and explanations of how they work.

Those of you who read RedStack regularly will know that I am a big believer in leveraging the experience of those who have gone before you, and not reinventing the wheel.  Yannick’s book is a great example of this.  He has captured a whole bunch of knowledge and experience into an easily digestible collection, and it is presented in a way that I really like – clear examples of exactly how to do common everyday tasks.

The book covers a lot of ground, from the basics of creating a portal and consuming portlets, through navigation, look and feel customisation, and integration with the various WebCenter services and external applications, to management and security of a WebCenter site.

I would highly recommend this book to all WebCenter users.  If you are new to WebCenter, or you are joining a WebCenter project for the first time, Yannick’s book will help you to get over the all important initial learning curve – answering all of those ‘how do I …?’ questions that you are sure to have, and providing a solid basis of understanding of the various capabilities of the product and how to take advantage of them.

This is also a book that you will want to keep on hand as a reference.  You can easily look up clear, concise instructions on how to carry out over a hundred common tasks.  This is not a book to read once and discard.  You will want to keep it in your library to refer to often.

Even if you are familiar with WebCenter, PS3 introduced a lot of new functionality, including most notably the WebCenter Portal.  This book is a great way to keep up to date on the product and understand the new capabilities.

Well done Yannick and thanks for taking the time to share your experience with us all.

Disclosure: Packt Publishing provided me with a free copy of the e-book.


BPM PS4 Feature Pack released

The long awaited BPM 11g PS4 Feature Pack has been released.  Jan Kettenis has done a good job of providing the details here and some helpful install notes here.

The Feature Pack adds a lot of exciting new functionality and also includes bug fixes for issues reported since the last release.  Let’s take a quick look at the new features.

Composer – Versioning, Collaboration, Usability

Process Composer gets a major overhaul in this release and adds a lot of new features for collaboration and versioning.  It is also much easier to use and nicer to look at!

Alter Flow (Grab) and Instance Migration

The ‘Grab’ feature in 10g reappears as ‘Alter Flow’ in this release.  This feature allows an administrator or process owner to stop a process instance and arbitrarily move to another point in the process.  They can also modify data in the process, and then restart from the new location.  Many 10g customers have been asking for this feature.

This release also allows you to make changes to a running version of a process.  This addresses a problem where it was hard to fix small errors in your processes – you could not redeploy the same version without causing all the running instances to become stale, and if you deployed a new version, then you had no way of migrating those running instances to the new version of the process.  Now, you can redeploy a version of the process (as the same version) and keep the running instances, which will ‘migrate’ to the newly deployed version.  If any cannot be migrated automatically, they will be ‘grabbed’ and stopped, and you will need to manually release them.

Correlations and Conversations

The Feature Pack adds full correlation and conversation support to BPMN processes, meaning that you can easily implement use cases like calling n instances of a subprocess – one for each element in an array – and consolidating the results.

Parametric Roles – OU based and Org Roles

Another 10g feature that is now appearing in 11g is parametric roles.  This allows you to define process roles based on organisation structure and other information stored in the directory, giving a lot more flexibility in role mapping and supporting use cases like ‘a manager in the Sales department.’

Sticky User and Exclude Users (4-eye)

Two new workflow patterns allow you to allocate human (interactive) tasks to the user who processed previous tasks in the same process instance (sticky user) or to exclude that user to ensure that two different people process two tasks.  This allows you to easily support use cases like ensuring a different person requests and approves purchase orders.

Comments, Attachments, UCM Integration

This release allows you to (optionally) store comments and attachments in Oracle WebCenter Content (formerly known as Oracle Universal Content Management), meaning that you can apply all of the content management practices and capabilities to attachments in processes, like versioning, automatic creation of renditions, records management and so on.

Activity Guide

We now provide an out of the box user interface (in the BPM Workspace) for Activity Guides, which allow process participants to see where they are in a process (in terms of the milestones and tasks, not the process diagram) and what remains to be done.  This makes it a lot easier for people to understand their progress through a process.

Rule Testing and Audit Trail

We have added more capabilities to test Business Rules and also a much more detailed audit trail for rule execution.

Notification and Task Update Activities

New activities have been added to the palette to allow you to update a task status, or send a notification directly from a BPMN process.  This means that you do not need to call an API or hand off this work to a BPEL process as we saw people doing in previous releases.

New Data Association Editor

The Data Association Editor has been greatly improved to make it easier to define data mappings.  It is also almost the same as the one used in BPEL, reducing the complexity of learning two slightly different ways of doing essentially the same thing.

Pre-defined Variables

A number of predefined variables are now available, which make it easy to do a number of common process instance manipulation use cases we have been seeing like getting the process instance ID, updating the title of the process instance, etc.

Easier Development – Draft Mode and Log Messages

We have added the ability to include log messages in your process which should make it easier to debug your processes.  Again, this is a 10g feature appearing in 11g for the first time.

We have also added a ‘draft’ capability which allows you to mark an activity as being draft.  This allows you to model it using the correct activity type, but the system will not complain that you have not implemented it yet.  This will make it a lot easier to deploy and test processes when you have only implemented part of them.

Round trip Simulation and BI integration

At long last, you can now pull data from BAM back to JDeveloper and use it to run simulations of process models.

We have also added capabilities to automatically generate views and collect data to make integration with external (Oracle or third party) BI tools much simpler.

Process documentation generation

You can now generate process documentation right from the process model and output it in various formats.

Oracle Workflow Import

This release includes some capabilities to import Oracle Workflow processes into Oracle BPM.  Just the activities/flow though, not the whole thing!  Look for more in the future.


Well, that’s the highlights!  Look out for more detailed posts on many of these coming soon.


Dave Shaffer joins the RedStack team

I am very happy to welcome Dave Shaffer into our little family of bloggers.  Until recently, Dave has run Product Management for the Integration products at Oracle, i.e. SOA Suite, BPM Suite, etc., and has now established his own consultancy to provide strategic and practical advice to organizations using these products.  Dave has an absolute wealth of knowledge and experience, and we are very happy to enjoy his collaboration on a series of upcoming posts on SOA architecture and best practices.  I wanted to ask Dave some questions about his future, his journey to this point, and his thoughts on SOA and BPM.  I am sure a lot of our readers who know Dave would also be interested to hear his views, so we are publishing our ‘interview’ here.  As Dave reiterates below, we welcome you to join in the conversation, through commenting on the post for example, or by getting in touch using the details provided below.

Mark: As you have been closely connected with the Oracle SOA and BPM space, I’m sure a lot of people are curious what you are doing now that you have left Oracle. Can you share a quick update?

Dave: Sure, first let me provide just a little background for those readers who may not know me well. I came into Oracle in 2004 when Oracle acquired Collaxa, which was where the BPEL Process Manager came from. In the intervening time at Oracle, I led product management for the integration products, including SOA Suite, BPM Suite and the Governance tools. Over the past year or so, we also merged the SOA team with the product management team for UPK (User Productivity Kit), a tool for creating user adoption content for both custom apps and packaged app implementations, so that also came under my area of responsibility.

It has been great to participate in the incredible growth and success for the Oracle SOA and BPM products that we have seen over the past 7 years, but I decided in 2011 to head out on my own. The main driver was to take some time off and travel with my family (my wife and I have two kids, ages 7 and 9) but also by my nature I am drawn to change and the time seemed right to do some consulting and explore some ideas that I have.

To that end, I set up Middleworks as a small company affiliated with other services companies in the Oracle eco-system to help customers be more successful with middleware. Services that Middleworks offers now include strategy consulting for customers and partners in the Oracle eco-system and helping customers execute on their implementation plans and goals. In particular, I think that many customers could use some strategic advice, guidance and coaching from someone like myself who has a lot of knowledge of the Oracle product portfolio and organization but is independent of Oracle itself. Over time, we will see whether this develops into a short-term lifestyle change for me or a more significant entity.

Mark: OK, great. Things have certainly changed a lot since Oracle acquired Collaxa. What was it like in the early days and how has BPEL and SOA matured since then?

Dave: I think the last 10 years have been super interesting for BPM and SOA. Collaxa was founded near the end of 2000 in the midst of the dot-com bust. While it was originally intended to bring long-running and parallel processes and asynchronous service support to languages like Java, the emergence of the BPEL standard in early 2002 provided a unique opportunity to evolve the general purpose Collaxa process engine into the first BPEL execution engine. At that time, it wasn’t obvious whether BPEL was going to supplant the many other process execution languages, like XPDL, BPML, and dozens of others, which made things exciting for a small startup. Collaxa was founded by Edwin Khodabakchian and Albert Tam and though it never grew larger than 12 employees, it became the de facto standard for BPEL execution and was widely adopted in the developer community. One of the most interesting things for such a small company that was trying to land large enterprise customers was that we needed to make Collaxa feel bigger than it was. For example, we made it easy for anyone to get technical support for downloading, installing or using the BPEL engine. Everyone handled these inquiries, including Edwin, a founder and CEO of Collaxa, and myself (at that time I was responsible for all the developer use of our tools, docs and training, etc). Since a company that wants to feel larger doesn’t want its CEO answering first line technical support, and also because we wanted to give a personal feel to our support experience, we created a support persona (“Mark Ghodsian”) which Edwin and I would both respond to support threads as. Given that Edwin would work nearly all night and I would get to work fairly early in the morning, “Mark” was available nearly 24×7 and became incredibly beloved to many customers.
I remember showing up at several customers who had a badge printed for Mark and were very disappointed that he wasn’t able to make it to visit them – and even more disappointed to find out that he didn’t exist at all…

Even with the level of support that Mark Ghodsian was able to provide, enterprise customers were hard to come by at that time and the real value of such an engine was much greater to a large enterprise so partnership discussions with Oracle turned into an acquisition that closed in mid-2004. An interesting observation in hindsight is that the other large Collaxa partners at that time included Siebel and Sun, both of whom are now part of Oracle anyway.

One of the other things I recall most clearly from the “early days” was the competition between models such as BPEL that were very developer friendly and those which were more business-friendly, often including BPMN design-time support and execution based on XPDL or something similar. While Business Process Management (BPM) was not new even then, there were questions of how much of the business process modeling, deployment and execution could be done by business alone and how much required IT involvement. At that time, there were many BPM pure-plays who offered the promise of enabling business to own their processes, though the dream of business being able to go it alone without IT was never truly realized. However, pitching that business and IT needed to collaborate was more difficult than selling the dream. We often described the alternative approach as “the magic pill,” comparing it with weight loss schemes. While the tried and true approach to weight loss involves improvements to diet and increased exercise, people are always looking for the “magic pill” that doesn’t require any effort to achieve weight loss. It turns out to be much easier to sell a weight-loss pill which doesn’t work than a diet+exercise program which actually does. So what we found was that we had to tone down the “diet and exercise” style message of BPM to be successful in the market. Personally, I still think this is the case (that people don’t always want to hear the unvarnished truth) but at the same time, I believe that the emergence of standards like BPMN 2.0 and the ability to have a single engine that executes both BPEL and BPMN, such as Oracle has done and other vendors say they are doing, makes it easier for IT and Business to collaborate and maybe lowers the pain and effort required to achieve the end goal – in this case, better business processes, which while not as personally satisfying as weight loss, are just as important to the health of a business.

Mark: A lot of technologies have a bit of a learning curve and sometimes knowing how to use a technology and how to use it well are two different things.  Can you tell us, how important is it for new adopters of SOA Suite to spend some time learning best practices for things like application and environment design, and how to tune for best performance?

Dave: This is certainly something that I have seen repeatedly over the years with initial implementations of any new technology or standards. There are always lots of different ways to apply a technology and they don’t come with “contraindications” as medicine does. Come to think of it, if they did, that might be a good thing. For example, I remember seeing people using BPEL in ways that it shouldn’t have been used, such as for simple looping constructs like iterating over a large data set, or as a scheduler to run indefinitely, say starting a task every night. The issue is that there is nothing in the language or product docs that says you shouldn’t do these things with it, but there are side effects which make it a problematic way to approach these requirements. For example, in the cases above, the audit trail will grow infinitely large and processes become hard to manage when they run for months or years. And of course changing the architecture or implementation approach late in an implementation is very hard to do and costly. Starting down an implementation path without being highly familiar with all the short-cuts, dead-ends and potholes along the way is a recipe for a failed or late project.

Performance tuning is indeed another similar area where it is hard to be prescriptive – the best approach for optimizing performance depends on many factors and often requires subtle knowledge about the product stack and its internals. So clearly this kind of knowledge is super important and really, the only way to learn these best practices is by experience.

However, the good news is that while the only way to learn these lessons is from hands-on experience, it doesn’t have to be your personal experience. This is where sites like yours, Mark, come in as well as content from other A-team members, partners and customers.  In my time at Oracle, I found that the engineering and documentation team was always best at describing what something does and how. But it is much more difficult to release with the product the subtle best practices and anti-patterns that emerge from real-world implementations. So with my hat on as a former Oracle engineering product manager, I would like to thank you for the service that you provide to the community for taking the time to share these hidden secrets. And to encourage the community to also invest that extra time to share tidbits as they are uncovered. Finally, if there are things that the community thinks that could be done better, either to generate this information or share and index it better, please let us know and we’ll discuss it more widely.

Mark: Now that BPEL has matured, what about BPMN? Is it a competitor to BPEL and, if not, how do people decide when to use BPEL and when to use BPMN?

Dave: This is for sure an interesting question that has come up a lot, especially since, as mentioned above, Oracle now provides pretty much the same basic capabilities in the BPMN engine released with BPM Suite 11g as are available in the BPEL engine. Historically, people would say that BPMN and related workflow languages were good for human-centric processes and BPEL was good for integration or system-centric processes. However, I think this was just a historical artifact of how the companies and products around these standards evolved.  BPM pure-play style engines tended to be strong at modeling and human workflow capabilities, but were less scalable, less flexible and had fewer built-in integration capabilities like adapters and data transformation. The Integration/SOA engines, on the other hand, tended to address these weaknesses well, but had fewer capabilities around business-user modeling and human workflow requirements. In reality, though, the underlying standards, BPMN and BPEL in this case, didn’t impose these limitations for the most part. So now that Oracle has a single engine underneath both BPEL and BPMN, sharing the same integration capabilities and scalability across them, this question becomes much more relevant.

And even though the Oracle platform is common to both BPEL and BPMN, within that world BPEL and BPMN do become “competitors” in the sense that customers must decide, for a given process, whether to implement it in BPEL or in BPMN. Having ruled out the legacy notion that BPEL is for system-centric and BPMN for human-centric processes, I would say that the more important question is who will be modeling and looking at the processes. BPEL as a language is very developer friendly: structurally it looks very much like Java, being a block-structured, flow-based language with try-catch style exception handling and many of the same kinds of coding and looping constructs as Java. But BPEL is just going to be gobbledygook to a business analyst or business person. BPMN, on the other hand, is a much more business-friendly language. Though even here, I would suggest that true business users are not going to model with BPMN very often, even though they could understand a BPMN process when looking at it, something that is not true for BPEL processes.
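As an illustration of that block-structured, try-catch flavor, a WS-BPEL 2.0 scope with fault handlers is structurally very close to a Java try/catch block. The sketch below uses standard BPEL 2.0 elements, but the service, fault and variable names are invented for illustration:

```xml
<!-- The <sequence> plays the role of the "try" block;
     each <catch> is a "catch" clause, and <catchAll> resembles catch (Exception e). -->
<scope name="CheckCredit">
  <faultHandlers>
    <catch faultName="svc:creditCheckFault" faultVariable="faultInfo">
      <!-- handle this specific fault, e.g. assign a default response -->
      <empty/>
    </catch>
    <catchAll>
      <!-- anything unexpected is rethrown to the enclosing scope -->
      <rethrow/>
    </catchAll>
  </faultHandlers>
  <sequence>
    <invoke name="CallCreditService" partnerLink="CreditService"
            operation="check" inputVariable="request" outputVariable="response"/>
  </sequence>
</scope>
```

A developer reads this naturally; a business analyst, faced with nested XML scopes and fault variables, generally will not, which is the distinction being drawn here.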

So, the most important question is whether a process is going to be viewed, modeled or edited by business people or analysts. These kinds of processes should be modeled and implemented in BPMN. Where a process is modeled and implemented by a developer, however, BPEL is a great choice. And of course, the strength of the Oracle BPM platform is that the two can be mixed and matched, for example with developers implementing some low-level building blocks as BPEL processes, which can then be leveraged in higher-level business processes as reusable components.

Finally, there are a few practical matters to add to the discussion above. These include the fact that BPMN is a less structured language than BPEL and is therefore better suited to less structured or ad-hoc processes, which can be difficult to implement in BPEL. Also, while the engine underneath is the same, the Oracle BPMN tools are newer than the BPEL tools, so there will be a few features or capabilities that are available in BPEL but have not yet materialized in BPMN. So this is yet another area where a little architectural guidance and some real-world experience can make a big difference in making projects more successful.

Mark: Based on all the customers and projects you have seen during the past 10 years, do you have any tips for new customers or those just starting on SOA initiatives?

Dave: Sure, I have certainly seen a large number of projects that were highly successful and exceeded goals and expectations, and many that did not. I think one big difference between the two really does come down to this area of getting the right expertise, training and support at the right time. For example, there are some small Integration/SOA/BPM partners of Oracle who are very focused on the Oracle Integration products, where you know exactly what you will get if you engage them. There are also very effective medium and large partners, though the particular personnel put on a given project will vary more widely. In this respect, the Oracle Specialized program can be very useful, as it both certifies a partner as a whole to be “specialized” in a particular product and includes a certification test for individuals. And while this is not a foolproof litmus test for whether you are getting exactly the right consultant for your particular needs, it certainly can be a key piece of the puzzle in ascertaining whether a particular consultant being proposed for a project has knowledge of the products.

Beyond leveraging partners, having the right relationship with the Oracle field and product management team can be useful as well. Also, there is a great deal of training and documentation out there, so creating the right training plan for your organization can make a big difference.

Finally, looking less at skills and more at the architecture and product side, one thing that I have discovered in my many years working for software vendors is that customers can be unexpectedly rigid in the options they are willing to consider. In some cases, a successful project seems not to be the primary goal, or the architects and developers have preconceived notions of the best way to approach a problem and may not be open to advice from the vendor on the best approach. There is a fine line here, of course, since the vendor may have a vested interest: some approaches may use more of their technology and others may use less. Clearly the vendor should not be the sole advisor in such decisions. But once the products have been purchased and particular requirements are identified, asking the vendor for the best approach or architecture for the particular products and versions in play can make a big difference, especially for the more complex problems. Not only is this a good way to avoid product-specific issues that may make particular approaches difficult, it also increases the vendor’s skin in the game to ensure success. As a vendor, if you ask me how to approach a problem and you follow the path I suggest, I really have to make a super-human effort to support you through any problems you encounter. On the other hand, if you forsake my advice, you may be more on your own.

These are just some thoughts I have on this matter, but if readers of this post are interested in my recommendations for partners or would like my assistance in helping with a staffing or training plan, I would be happy to discuss.

Mark: Great, thanks. And finally, if people do indeed want to get in touch with you outside of Oracle, how can they do it?

Dave: Sure. Besides any business activities I have, I really hope to stay in touch with as many of the Oracle Integration partners and customers as possible. People can always email me at dave@middleworks.com, but also I welcome any LinkedIn connections and I can be followed on Twitter at @middleworks. Drop me an email if you want me to keep you updated on Middleworks’ activities or for any other reason…

Thanks, Mark, for the great opportunity to collaborate with you!


Worklist moving to java.net

A quick update for those who are interested in the custom BPM worklist sample.  As our previous Subversion host is not going to be able to continue to provide that service, we have moved the sample to java.net.  All download links and Subversion details have been updated in the main worklist page.


Setting up Process Portal

My colleague George posted an article on configuring the BPM Process Portal on WebCenter 11.1.1.5 here.


BPM 10g Performance Tuning Whitepaper published

My colleague Sushil Shukla and I have just published our BPM 10g Performance Tuning whitepaper on the Oracle BPM website; you can access the PDF directly here.

The whitepaper covers topics like:

  • Environment considerations
  • Application design considerations
  • Tuning the BPM Engine
  • Tuning the Directory Service
  • Tuning the PAPI Instance Cache
  • Testing
  • JVM Tuning
  • WebLogic Server Considerations
  • Database Tuning

We would like to acknowledge the assistance of many people from the BPM development team, and from a number of our customers who reviewed this document and gave valuable feedback.  Thank you all, you know who you are.

P.S.:  We have an 11g version under development for our 11g readers.
