Monday, September 22, 2008

Deep-dive into our OOW 2008 demos

On Sunday, Lonneke and I gave the Oracle vs. BEA shootout presentation. We compared products from the Oracle Fusion Middleware stack (BPA Suite, BPEL PM, and OESB) with their BEA counterparts (ALBPM, WLI, and ALSB). If you attended the presentation and want some more info on the demos, or couldn’t make it at all, we’ll be at the Oracle ACE Office Hours in the OTN Lounge this Wednesday from 4.00 to 5.30 pm. The demos include closed-loop integration between different components, creating custom adapters in Oracle ESB, performing data enrichment in ALSB, and more. The OTN Lounge is a really cool place to hang out. It’s located on the 3rd floor of Moscone West. See you there!

P.S. Visit the Oracle Wiki for a complete listing of the Oracle ACE Office Hours.

Wednesday, September 10, 2008

SCA to eliminate “over-servicing”

When Oracle acquired Collaxa and its BPEL product, there was no Oracle ESB yet. All adapters -such as the file, FTP, JMS and database adapters- and complementary routing and transformation functionality were used directly from BPEL processes. Later on, Oracle introduced its ESB product and advocated placing adapters in the ESB instead of in BPEL PM. That made sense, since it promotes a cleaner separation of concerns. After all, adapters are more a technical, infrastructural concern than an orchestration concern. Mostly for backwards compatibility, adapter functionality remained part of the BPEL PM product. This caused much confusion, since it was not always clear from which layer to use these adapters.

When you follow the above design guideline of placing adapter functionality in the ESB layer, you can end up with masses of ESB projects in a real-life SOA implementation. Think of it: there can be tens, or maybe hundreds, of BPEL processes and composite services, several of which will invoke partner links that are in fact ESB projects wrapping adapter functionality. Examples are retrieving data from a database, publishing an event, FTP-ing an order file to a supplier, and so on. This leads to “over-servicing”, since some of these ESB projects are not reusable (low-level) services, but local to a single composite service or process.

The number of non-reusable services depends on the approach taken. Losing overview is one of the risks of implementing SOA in a bottom-up fashion only. This can result in too many fine-grained, and possibly non-reusable, services. It can be prevented by applying a more top-down or meet-in-the-middle approach, which leads to fewer non-reusable services. However, you’ll always end up with some services that are ‘private’ or ‘local’, meaning that they are only used by one other process or composite service. In such cases you would like the ESB service to be local -or encapsulated- to the composite service or process without moving the adapter functionality into BPEL.

The great thing about the Service Component Architecture (SCA) standard -on which Oracle SOA Suite 11g is based- is that you do not need to expose low-level mediator services and adapter functionality as separate external services. They can be local components of a composite and therefore not visible to other service composites. This promotes much better integration and encapsulation of infrastructure and low-level services: you only need to expose those artifacts that are -or will be- reusable. This resembles OO principles such as encapsulation. However, if a project ends up with lots of non-reusable or ‘private’ services, it might be a good idea to analyze the services and processes in a more top-down fashion in order to create more reusable services.
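As a hypothetical sketch of this idea (names invented, attributes abbreviated; check the SCA assembly documentation of your release for exact syntax), a composite.xml could wire an internal mediator and adapter reference without exposing them. Only the BPEL process has a service entry, so the adapter plumbing stays invisible outside the composite:

```xml
<composite name="OrderComposite" xmlns="http://xmlns.oracle.com/sca/1.0">
  <!-- Only this service is visible to other composites -->
  <service name="orderEntry">
    <interface.wsdl interface="OrderProcess.wsdl#wsdl.interface(OrderPT)"/>
    <binding.ws/>
  </service>

  <component name="OrderProcess">
    <implementation.bpel src="OrderProcess.bpel"/>
  </component>

  <!-- Local mediator plus file adapter: no <service> entry of their own,
       so they are encapsulated inside this composite -->
  <component name="OrderFileRouting">
    <implementation.mediator src="OrderFileRouting.mplan"/>
  </component>
  <reference name="OrderFileOut">
    <binding.jca config="OrderFileOut_file.jca"/>
  </reference>

  <wire>
    <source.uri>OrderProcess/OrderFileRouting</source.uri>
    <target.uri>OrderFileRouting/request</target.uri>
  </wire>
  <wire>
    <source.uri>OrderFileRouting/OrderFileOut</source.uri>
    <target.uri>OrderFileOut</target.uri>
  </wire>
</composite>
```

Deleting the internal wiring or promoting the mediator to a top-level service is then a local decision per composite, which is exactly the encapsulation the paragraph above argues for.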

Friday, August 29, 2008

BPEL, Beehive and Service Repository at OOW

This recap of some interesting OOW2008 sessions is posted a bit later than expected, since my baggage -notes included- was stuck at the airport for a few days. Coincidentally, my baggage was stranded at the very airport for which I co-designed the new baggage handling system. Maybe software can hold a grudge against its creator after all? Luckily it was another terminal than the one I transferred through.

There were several interesting sessions on BPEL PM by Clemens Utschig and Robin Zimmermann, covering its new and upcoming features and some useful tips and tricks for troubleshooting BPEL projects. A summary of the new and improved features in the Oracle BPEL PM patch can be found here. The main objective of this patch is to make the BPEL Console a one-stop shop, so it mainly introduces administrative improvements. The most interesting of these are:

Lost BPEL instances
Actually these instances are not lost, they just don’t show up in the BPEL Console. This is caused by rollbacks in asynchronous process instances that have not yet been dehydrated. This can happen, for example, when a time-out occurs and the global BPEL transaction is rolled back. The problem is solved by using a separate transaction for dehydration, so the instance’s actual work no longer runs in the same transaction as the dehydration.

Deployment plans
This looks very much like the deployment plans already available in Oracle ESB. These plans are used to extract most of the configurable process information that differs per environment, such as URLs and ports of invoked services, adapter-specific information like inbound file names, JNDI locations of JMS queues, database adapter names, and so on. Ant tasks can be used to deploy BPEL processes to a target environment with the configuration of that specific environment. This information is wrapped in a BPEL suitcase. Part of the environment-specific information -though not all of it, especially adapter-related information- could already be externalized using customized Ant builds. When an ESB is used to wrap adapter functionality, the need for deployment plans is less urgent.
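Before deployment plans, a customized Ant build of the kind mentioned above could look roughly like this hypothetical target (file names, tokens and property names are invented for illustration), which stamps environment-specific values into a deployment descriptor using Ant’s standard filterset mechanism:

```xml
<target name="configure-for-env">
  <!-- Replace @PARTNER_ENDPOINT@ and @INBOUND_DIR@ tokens in the template
       with values from an environment-specific property file loaded earlier -->
  <copy file="bpel.xml.template" tofile="bpel.xml" overwrite="true">
    <filterset>
      <filter token="PARTNER_ENDPOINT" value="${env.partner.endpoint}"/>
      <filter token="INBOUND_DIR" value="${env.inbound.dir}"/>
    </filterset>
  </copy>
</target>
```

Deployment plans standardize this pattern so you no longer have to maintain such substitution logic yourself.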

Other improvements in the patch include better visibility of the engine threading model, improved statistics collection, minimization of XML coding errors through compliance testing and enhanced debugging of XML payloads, an improved automated recovery agent (this feature was disabled in previous releases), and collection of support information when creating service requests.

Note that some of the latest MLRs are not included in the patch. You’ll need to apply the patch followed by some additional MLRs to update to the newest version. A preview of the new features is also available in the PDF.

Some other cool stuff presented at OOW2008:

Oracle Beehive was launched. Beehive is an integrated, open, and secure collaboration platform: sort of a new and improved OCS, but built from scratch. Through the notion of team and personal workspaces, it provides seamless integration with -and abstraction of- all kinds of collaboration tools and technologies, such as mail, file systems, content management, feeds, calendars, mobile devices, chat, and various protocols. Beehive also includes a Web-based interface. Integration with existing user interfaces, or building more advanced ones, can be achieved through its Java API and/or WebCenter Suite; that way you would have a WebCenter frontend communicating with a Beehive backend. See the Beehive website and the Beehive forum.

Enterprise Repository
Some products that were lacking from the Oracle stack prior to the BEA acquisition were related to governance. In the beginning of smaller, integration-focused, not enterprise-wide SOA projects, technology usually poses a bigger risk than governance. In the course of SOA projects, however, lack of governance quickly becomes the main risk. Next to the runtime Service Registry product from Systinet, Oracle Web Services Manager, and the Enterprise Manager SOA Management Pack, Oracle’s governance support now also includes the former BEA product Enterprise Repository. This product supports and enables governance at design time. With it you can -among other things- “harvest” BPEL projects to retrieve artifacts such as processes, WSDLs, XSDs, and so on. Enterprise Repository builds a taxonomy out of this and presents it graphically. This way one can see, for example, which XSD is used by which processes, which policies are attached to which processes, and whether these policies are met. Later versions will automatically retrieve runtime information to determine whether policies such as service response times are met. Publishing repository information to the development environment, instead of the other way around, should also be possible in future releases.

And this was just a small portion of all the OOW2008 news! See OTN for more information!

Friday, August 22, 2008

The feared “demo-effect” – bluescreen just before our OOW session

Yesterday I arrived in San Francisco to attend and present at Oracle OpenWorld 2008. After a really nice dinner with most of the ODTUG presenters, I quickly went to sleep. The day after -today as I’m writing this blog- Lonneke and I were going to present the Oracle versus BEA shootout session in Moscone West. Exciting, especially since the session was fully booked, with more than fifty people on the waiting list. So naturally we wanted to prepare, test, and fine-tune our demos this morning.

The day started well. I had slept for more than 10 hours, and my jetlag -which I really felt during the ODTUG dinner- was reduced to minor disorientation. So far so good. I met with Lonneke to go over the demos. The first thing that happened when I started my laptop: the feared blue screen of death, telling me that a fatal core dump had occurred (sounds just like Star Trek). Meanwhile the presentation was only a couple of hours away. Aarrrgghhh! To stay in Star Trek terms, I only needed to realign the dilithium matrix to stabilize the warp field to fix this “core dump”. My laptop had caught the demo-effect in its worst form: half our presentation is demo, and my laptop was dead! That’s when my heart rate doubled and my blood pressure hit a new all-time high. Lonneke -who normally makes jokes to cheer you up when something goes wrong- kept quiet this time and looked at me in alarm. Luckily, restarting computers when they’re broken or won’t do what you want can work miracles: no bluescreen this time. Guess my computer had a jetlag too. After fixing our BEA demos and a quick rehearsal we left for Moscone. Luckily, the demo-virus stayed in the hotel and all demos worked perfectly during our presentation!

Later this week we’ll post more on our Oracle OpenWorld experiences. We’ll also post the demos from our presentation.

Lonneke at OOW 2008.

Collect all the ribbons and become a four-star general!

Thursday, August 21, 2008

Remotely invoking clustered BPEL Worklist application

Oracle BPEL PM provides a Java API -the BPEL Worklist API- to connect to its Worklist application. There are several blogs that provide sample code on how to use it.

In production-like environments, components invoking the Worklist application can run on different servers for reasons of scalability and failover. This is also the case in one of our projects, where we have one SOA Suite cluster running ESB and BPEL, and another cluster running the ADF and WebCenter front-end. A custom front-end component presents task-related information to users through a portlet. Tasks are queried and refreshed frequently, and a user can have access to several thousand tasks: not only user-assigned tasks are displayed, but also tasks assigned to groups they belong to.

For remotely connecting to the BPEL Worklist application from the front-end components we considered the SOAP and RMI clients that are provided with the Worklist application. The type of client can be specified in the Java code accessing the Worklist application:




Note that when using a SOAP client you have to add the statement Predicate.enableXMLSerialization(true) to make sure that predicate information for querying tasks is marshalled and sent to the Worklist application.
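For reference, obtaining a client of a given type looks roughly like the following. This is a sketch from memory of the 10g Worklist API; verify the package names and factory constants against the Javadoc shipped with your release:

```java
import oracle.bpel.services.workflow.client.IWorkflowServiceClient;
import oracle.bpel.services.workflow.client.WorkflowServiceClientFactory;

// Pick the transport when creating the client:
// SOAP_CLIENT uses the Worklist web service endpoints,
// REMOTE_CLIENT uses RMI against the Worklist Java application.
IWorkflowServiceClient client =
    WorkflowServiceClientFactory.getWorkflowServiceClient(
        WorkflowServiceClientFactory.SOAP_CLIENT); // or REMOTE_CLIENT
```

The rest of the code (authenticating, building predicates, querying tasks) is identical for both client types, which makes it easy to switch transports during testing.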

The actual client configuration, containing the Worklist web service endpoints and the location of the Worklist Java application, is in the wf_client_config.xml file. That file is located in %ORACLE_HOME%/bpel/system/services/config.

One of the advantages of the SOAP client is that we can use WebCache as a software load balancer for HTTP requests. WebCache is already used in the project to load-balance and fail over HTTP calls to external web services, ESB services, BPEL composite services and processes, and so on. During stress testing we found that using a SOAP client instead of RMI roughly results in a 1.5-second response-time penalty when querying 1500 tasks. That does not include other operations such as rendering the tasks. The delay is most likely caused by XML marshalling and unmarshalling: RMI serializes object variables, but does not use the verbose XML format, nor does it have to build a DOM tree in memory.

One of the questions that arises is whether failover is achievable with an RMI client. We don’t want a front-end component to be “tied” to a single SOA Suite runtime instance. It turns out this is possible by correctly configuring the wf_client_config.xml file. The serverUrl element in the configuration file defines the location of the Worklist application. You can use a comma-separated list of locations, just like when building an RMI InitialContext: java.naming.provider.url = "server1,server2".
The following snippet from the wf_client_config.xml file achieves failover when using an RMI client against a clustered Worklist application:

<serverUrl>opmn:ormi://server01:6003:oc4j_soa/hw_services, opmn:ormi://server02:6003:oc4j_soa/hw_services</serverUrl>

For reasons of standardization a SOAP client is preferable. However, low response times and general performance are important for good user interaction, and an additional delay of 1.5 seconds every time a user retrieves his or her tasks isn’t acceptable. Instead of using RMI -a less standardized protocol- to improve response times, you can also use the Worklist API to query not the entire collection of tasks, but only a certain range. This can be combined with a SOAP client while maintaining acceptable response times. The front-end component does need some modification, though, to handle navigation and iteration through these task subsets.
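The range-based querying can be wrapped in a small paging helper on the front-end side. The sketch below is generic and hypothetical: fetchPage stands in for a Worklist API query restricted to a [start, end) range, so the helper itself contains no Oracle-specific code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Generic paging helper: fetch tasks page by page instead of in one big call.
public class TaskPager {

    // fetchPage(start, end) returns the items in the half-open range [start, end).
    public static <T> List<T> fetchAll(BiFunction<Integer, Integer, List<T>> fetchPage,
                                       int pageSize) {
        List<T> all = new ArrayList<>();
        int start = 0;
        while (true) {
            List<T> page = fetchPage.apply(start, start + pageSize);
            all.addAll(page);
            if (page.size() < pageSize) break; // last (possibly partial) page
            start += pageSize;
        }
        return all;
    }

    public static void main(String[] args) {
        // Simulated backlog of 5 tasks, fetched 2 at a time.
        List<String> tasks = List.of("t1", "t2", "t3", "t4", "t5");
        List<String> result = fetchAll(
            (s, e) -> tasks.subList(s, Math.min(e, tasks.size())), 2);
        System.out.println(result); // prints [t1, t2, t3, t4, t5]
    }
}
```

In the portlet you would of course not fetch all pages eagerly like main does here, but fetch one page per user navigation action; the range arithmetic stays the same.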

Monday, May 19, 2008

Experiences with Vista and Oracle software (2)

A few months ago I wrote this blog about installing (Oracle) software on my new Vista laptop. I had already installed these components several times on Windows XP, so I thought it would be a walk in the park. In the end, it took me a couple of days instead of hours … :-(
There are lots of threads on the OTN forums -like this one- dealing with Vista. It turned out I wasn’t the only one struggling with it.

This blog is about installing Oracle SOA Suite and patch on Vista. Common errors thrown at me -some gave me nightmares :-)- were:

  • OWSM configuration assistant fails.
  • Error: Missing ormi[s]://host:port.
  • This OC4J is configured to make JMX connections via RMIS and fall back to RMI if the RMIS port is not configured. The RMIS connection on the OC4J instance null on Application Server null is configured but a connection could not be established. The JMX protocol is specified by the oracle.oc4j.jmx.internal.connection.protocol property in opmn.xml.

I did three “batches” of steps to get it running:

  • Perform the pre-installation steps as documented in the Oracle SOA Suite and patch installation guides. These contain the usual steps, like configuring a loopback adapter, not using whitespace characters in directory names, and having the right JDK. The guides are bundled with the SOA Suite installation files when downloaded from OTN. Performing these steps is “business as usual”; however, these steps alone didn’t do the trick.
  • Perform some steps as documented in Metalink note 444112.1.
  • Browse through the OTN forums for the remaining errors and stacktraces.

These last two “batches” consist of the following additional actions:

  • Do not use underscores in your computer name. I had an underscore in mine -I love them since you can’t safely use whitespace :-)- and I couldn’t run SOA Suite. See this forum post.
  • IPv6 causes problems: there are compatibility issues between Sun’s JDK and the IPv6 protocol that Vista enables by default. I explicitly disabled IPv6 by removing “::1 localhost” from my hosts file, disabling Internet Protocol version 6 (TCP/IPv6) for all my network adapters in the Windows network configuration, and adding a registry value (DWORD type) named DisabledComponents, set to 0xFF, under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters. This is documented in the Metalink note. You can also add the Java option “” to use IPv4 instead of IPv6.
  • Install, start, and stop Oracle SOA Suite as administrator (right-click and choose “Run as administrator”). Somehow this causes the relative paths in the Start Menu shortcuts to no longer resolve to the correct locations, so I changed them to absolute file paths. To do this, edit the [SOA Suite home]/bin/runstartupconsole.bat file and change the line containing “set ORACLE_HOME”.
  • Firewalls and virus scanners are great, as long as they are not hogging your system and/or blocking wanted messages. The latter happened on my laptop: I disabled the pre-installed security software since it blocked certain requests and replies to and from my local Oracle SOA Suite.
  • I did an advanced install and used Oracle 10g XE as dehydration store since the pre-installed Oracle Lite database (Oracle SOA Suite basic install) crashed every now and then.

These errors can be really frustrating, especially when you successfully install the exact same software on another OS within the hour and all your “Linux” and “XP” colleagues have Hello World running on Oracle SOA Suite :-) Hope this helps.

Saturday, May 17, 2008

Vote for sessions at Oracle OpenWorld 2008 (2)

Lonneke Dikmans already blogged about the possibility to submit and nominate sessions for Oracle OpenWorld 2008. This is one of the ways to get involved in the Oracle community through Oracle Mix.

I submitted an idea called "Putting SOA to Use". This session is based on experiences from a customer-case in which a Service-Oriented Architecture is implemented using Oracle technology such as BPEL, ESB and WebCenter. Read it and if you think it’s interesting vote for it!

Thursday, March 13, 2008

Master/detail inserts and native sequencing in Oracle ESB

The SOA Suite forum on Oracle Technology Network (OTN) contains several posts on the use and configuration of the Database Adapter. This adapter can be used to perform database-related operations such as inserts, polling, invocation of PL/SQL, and so on in your SOA environment. Some of these forum questions relate to inserting XML data into master/detail tables and how to use native database sequences to populate primary keys and automatically update the associated foreign keys. I wrote an article, Invoice Processing in a Service-Oriented Environment, that contains a step-by-step tutorial on how to achieve this.

Monday, February 11, 2008

Database adapters, TopLink and Transformations

In one of our current customer SOA projects we’re using Oracle Enterprise Service Bus (ESB) to implement and expose services. One of these services involves transformation of inbound data and persisting this data in a database. We’re using XSL to transform the inbound data and Oracle Application Server’s database adapter to persist data into a relational database. JDeveloper, Oracle’s IDE, provides wizards to configure database adapters. TopLink (an object-relational mapping framework) mappings are generated as a result of this configuration. These mappings are XML files containing metadata such as the structure and format of the database tables. In our ESB flow data is transformed into nested XML format, which is persisted in multiple master-detail tables.

This week -after a modification to the database adapter configuration- data was still persisted, but lots of database records that were previously populated were suddenly empty. The ESB console indicated all instances were valid, and debugging the ESB flows showed that the input XML consumed by the database adapter was unchanged. After some more investigation it turned out that the order of tables in some of the generated TopLink mapping files (OurService_table.xsd, OurService_toplink_mappings.xml and OurService.RootTable.ClassDescriptor.xml) had changed; e.g. instead of table 1, table 2, table 3, the order was table 1, table 3, table 2. The transformation activity still generated input XML in the order table 1, table 2, table 3. Syncing the order of tables between the transformation and the mapping files resolved the issue. This means that at runtime, the database adapter strictly follows the order of tables in the TopLink mappings. You can argue whether this is too strict, especially since the same tools are very “tolerant” of other faults, omissions and validation issues. At the very least you would like to see some kind of error when the XML input format does not conform to the TopLink mapping definition.

Anyway, this was a tricky issue, since the input XML seemed valid, the ESB console indicated all instances were valid, no error was logged, and some data (but not all!) was persisted in the database.

Friday, February 8, 2008

Events and SOA

In my opinion events are just as important in a SOA as services. I think it’s not possible to achieve a “real” SOA without addressing events separately, just as we do for services. It can be somewhat confusing that events have their own acronym emphasizing their importance, namely Event-Driven Architecture (EDA). Some coin this as the upgraded version of SOA, or SOA 2.0. How we love three (or four or five) letter acronyms :-) Most of these different acronyms only add to the confusion, so I prefer sticking to SOA and emphasizing events as an integral part of it.

So why are events important?
A service-oriented approach should result in business value (ROI); otherwise SOA only introduces technical complexity through additional middleware. One of the ways to achieve this is through business-IT alignment. Events play a major role in both of these worlds (business and IT).

Businesses deal with (and cause) events all the time: a customer moving to a different address, a new purchase order, receiving an invoice from a supplier, sending a bill to a partner, and so on. Entities in a business’s ecosystem, such as employees, partners, suppliers and customers, all react to these events: they initiate new processes, perform activities, propagate events, and so on. It is therefore logical to incorporate events when modeling business processes, which are at the heart of SOA. Just as in all modeling practices, one wants to model and focus on the important aspects. In the case of SOA this includes processes, services and events.

From a technical point of view, events enable asynchronicity and decoupling. By using publish/subscribe and queueing mechanisms, software components are not required to know of each other’s existence. They simply subscribe to a topic and act on received (and subscribed-to) events. The other way around, if components have some (intermediate) result or state, they can share it with the rest of the world by publishing an event and then forgetting about it. These components don’t need to know which other components are interested in this information. Of course you’ll need some glue (i.e. middleware) to implement this.
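The publish/subscribe idea can be boiled down to a few lines. The sketch below is a deliberately minimal in-memory illustration (real SOA middleware adds queueing, durability and transactions): two hypothetical components subscribe to the same business event, and the publisher knows neither of them.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal topic-based publish/subscribe: components register a handler for a
// topic and react to events without knowing who published them.
public class EventBusSketch {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // Publish-and-forget: the publisher does not know or care who listens.
    public void publish(String topic, String event) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
    }

    // Demo: two independent components react to the same business event.
    public static List<String> demo() {
        EventBusSketch bus = new EventBusSketch();
        List<String> log = new ArrayList<>();
        bus.subscribe("order.created", e -> log.add("billing handles " + e));
        bus.subscribe("order.created", e -> log.add("shipping handles " + e));
        bus.publish("order.created", "order-42");
        return log;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [billing handles order-42, shipping handles order-42]
    }
}
```

Swap the in-memory map for a JMS topic and you have the middleware-backed version of the same pattern.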

So what does this mean concretely in a SOA project? For one thing, besides identifying what types of services you have (business services, composite technical services, etc.), also investigate what types of events can be identified. There should be an event registry besides the service registry. This does not need to be a full-blown tool at first; it can simply be documented in Excel in the beginning. Also define the relationships between processes, services and events. Which services publish which types of events? Which processes are initiated by which events? Which running processes are influenced by which events?

Wednesday, January 30, 2008

Experiences with Vista and Oracle software

This is a quick overview of my attempts to install and run (Oracle) software on Microsoft Vista. A few months ago I got this new Dell laptop, pre-installed with Vista Business. The last couple of years I mostly worked with Windows 2000, XP and Unix/Linux, so Vista took some getting used to in the beginning; especially the enormous amount of “Are you sure?”, “Do you really want this?” and “You’re not allowed to do this” pop-ups (even more than XP had). Although I must admit that some of the new UI stuff is real eye candy.

Some general (and sometimes really frustrating) issues I ran into when trying to get Vista up and running were the new IPv6 network protocol and updated WPA support (my existing router and wifi configuration really had a hard time connecting to my laptop, or vice versa; I’m not a network guru, so maybe that’s the problem after all :-), the new User Account Control (UAC) policy in Vista (I couldn’t stop some Windows services in the beginning, even when logged on as admin), and the pre-installed firewall, virus scanner, and so on, which made my (finally up-and-running) Internet connection and network adapter crash. So after some reboots and resetting my laptop to factory settings, I thought I had Vista “under control”.

Time to install some real business software.

Up to now I have the following software up and running (I haven’t really tested all of it extensively yet):

  • Sparx Systems Enterprise Architect
  • JDK 5 and 6
  • Eclipse SDK
  • Eclipse WTP
  • Oracle BPA Suite
  • JDeveloper 10g
  • JDeveloper 11g – Technology Preview including SOA Suite and WebCenter
  • Oracle XE 10g
  • Oracle SQL Developer
  • Oracle OC4J 10g

Some tips:

  • Don’t use the standard Vista (un)zipper when unzipping software with long directory and file names. For example, I couldn’t run Eclipse WTP since the Vista unzipper didn’t extract all the extension files. This can be tricky, since some installers won’t even indicate that not all files were properly unzipped. You can start Eclipse WTP and develop some code, but when trying to add a server, there’s nothing there to add?! Use another tool such as 7-Zip.
  • Run all installations and configurations as administrator; right-click and choose “Run as administrator”.
  • I ran into an error when installing a standalone OC4J, caused by compatibility issues between Sun’s JDK and the new IPv6 protocol. Trying to shut down OC4J returned an error indicating “Error: Missing ormi[s]://:”. See this forum post for a solution; I simply added the parameter to the OC4J startup command in oc4j.cmd.
  • Read the installation manual. OK, I know this sounds a bit daft, but there are numerous forum questions where it turns out people didn’t configure a loopback adapter, didn’t have the right JDK installed, etc. So it really is worth reading the manual.
  • Make sure the directory structures for your Java and Oracle installations don’t contain whitespace or symbols such as # or @. I only use letters, digits, dots and underscores; not all software can handle other characters.

Ok, in the next couple of days/weeks I’m going to (or at least try to :-) install SOA Suite 10g (there are lots of forum posts on installing this on Vista) and Oracle Database 10g Enterprise Edition, and play around with Coherence.