Wednesday, November 4, 2009

Oracle Service Bus article on OTN

The Oracle Service Bus article Eric Elzinga and I wrote is published on Oracle Technology Network (OTN).

The article is aimed at developers and architects who are familiar with Oracle Enterprise Service Bus (OESB) and are (fairly) new to Oracle Service Bus (OSB). The tutorials in this article highlight differences between these two products. The tutorials are based on a workshop in the WAAI community, a collaboration of Dutch consultancies (Whitehorses, Approach, AMIS, and IT-Eye). The goal of the WAAI collaboration is to share, bundle, and expand knowledge on the recent Fusion Middleware 11g release.

Monday, November 2, 2009

Governing events and architect anti-patterns

As the name suggests, SOA is all about services. But what about events? In the past, several SOA efforts tended to neglect events, ultimately causing SOA not to deliver on its full potential or to fail altogether. So SOA practitioners evangelized the use of events. And of course we as an IT industry came up with new terminology to emphasize this: EDA, SOA 2.0, and event-driven SOA, to name a few.

This blog is not about promoting events, since their importance is (hopefully!) recognized and events are mainstream in today's SOA initiatives. If not, I encourage you to read this blog, which explains why events are important from both a business and a technical perspective. There can be no real SOA without events. Events are just as important as services!

So everything is hunky-dory, right? Then why are some SOA projects using events at runtime to model business processes and their interactions and to enable loose coupling, while neglecting to address the governance aspect?

  • Organizations set up SOA-registries that include and publish services but not events. Service consumers can discover services, reuse them, retrieve metadata such as ownership, contract, interface, and so on. What about event consumers? What about including events in your registry?
  • Architects design taxonomies that structure services into various layers (business services, composite services, and elementary services) and domains (finance, CRM, sales, etc.) but have no taxonomy for events.

Bottom-line: not only use events at runtime, but make events an integral part of your governance processes just as you do for services and processes. That enables reuse of events, dynamic event-discovery, lifecycle-management of events, and so on.

What I’m wondering, though, is whether there is a ‘one-size-fits-all’ solution when it comes to governance of services and events. Does the same taxonomy apply to services and events? Is the lifecycle for services the same as for events? Is the metadata we need and store for effective governance the same for events and services? Do you want to unify governance for services and events?

Some experiences might suggest so. We could structure events into business events, composite events, and elementary events. An event has a contract, interface, and implementation. An event has event producers and event consumers. An event has an owner. An event can be discovered. An event provider can guarantee message delivery. An event can be under development, in production, deprecated, retired, and so on. Replace event with service in these last few sentences and it all seems to fit.

However, I don’t want to rush to conclusions and try to squeeze everything into one all-knowing overall model. I guess that’s a known architect anti-pattern: everything has to fit the boxes we draw and the models we think of, even if reality fails to fit in. We’d rather try to alter reality than change our models :-)

Obvious differences would be that the consumers of services are generally known, whereas event consumers can be unknown (hence also better decoupling). This has different consequences for services and events when it comes to dependency management and impact analysis. Also, events and services can have some specific attributes, such as consumer type for events: single (queue) versus multiple (topic).

In any case, I’m going to find out! For a new customer project I’ll be defining the business, information, and technical architecture around services and service-registries and define their governance processes. And guess what? We’re going to include events in this effort. Let’s see what the result will be.

Sunday, October 25, 2009

Presentations Oracle OpenWorld 2009

Oracle OpenWorld and Oracle Develop 2009: It’s a Wrap! Just like last year an awesome event! Read about some of the highlights and experiences in this previous blog.

Lonneke Dikmans and I presented the following two sessions at Oracle OpenWorld 2009; they can be viewed here:

Approach to Oracle Fusion Middleware 11g
This session presents an approach to the strategic Oracle Fusion Middleware 11g components, using a customer case and in-depth knowledge of the new Oracle SOA Suite 11g. The case study covers a car leasing firm that migrated from Oracle SOA Suite 10g and Oracle WebCenter 10g to Oracle’s strategic platform with Oracle WebLogic solutions and Oracle Application Development Framework 11g.


  • Overview of the customer’s SOA environment and infrastructure
  • Migrating to Oracle WebLogic solutions and Oracle Application Development Framework 11g and how a SOA environment affects the transition
  • New features of Oracle SOA Suite 11g and how to migrate to it, with a focus on Oracle Service Bus and Service Component Architecture

Portals: The Way to Realize User Experience in a Service-Oriented Architecture? (IOUG/ODTUG)
Portals seem like a natural fit for realizing the front end in a SOA. This session describes two customer cases in which portals were used to present services to end users. In the first case, a Dutch municipality used Oracle Portal in conjunction with Oracle SOA Suite to offer personalized information and products and services to citizens. In the second case, a car leasing company used Oracle WebCenter as a process portal for users for part of its procurement process. In both cases, the portal did not offer the expected benefits to the organization or the end users. The presentation covers possible use cases for the application of portal technology and the critical success factors for portals in SOA and BPM environments.

Tuesday, October 20, 2009

Best practices 4 – Security and Identity Management

This is the fourth blog in a series on BPM and SOA best practices. The previous blog in this series was on Oracle ESB and Mediator. This blog discusses security and identity management in an SOA environment.

So what exactly is it?
IT security has become increasingly important over the last decades. While at first security was frequently treated as a necessary evil, it has since matured into a separate area of expertise. There can be a lot of confusion about what exactly security and identity management encompass; everyone has a different view on it. When discussing these topics, first agree upon their scope before you delve into them. In this blog they are divided into:

  • Identity management
  • Authentication; including Single Sign-On (SSO)
  • Authorization
  • Logging and monitoring
  • “Hard” security: more technical measures, including confidentiality and integrity of data, usage of firewalls, IDS/IPS products, and so on.

The first three together encompass Identity and Access Management (IAM). “Soft” security, such as creating security awareness, training employees, applying physical security to buildings and IT assets, and the availability of IT assets (which, together with confidentiality and integrity, forms the so-called CIA triad), is out of scope for this blog.

Security and SOA
Compared to traditional software development, the important question is not whether security in an SOA environment is important, but whether it is any different, and should therefore be designed differently. The answer to both questions is yes.

To understand why security should be handled differently, we need to understand the characteristics of SOA that are key to the security aspect, compared to those of traditional software development:

  • Next to human-machine interaction there is more machine-machine interaction. This means there is a greater need for automated security mechanisms for purposes of authentication, authorization, encryption, and so on.
  • An SOA environment generally contains more intermediary stations, such as ESBs and other middleware components. There are more locations where users and administrators can view (possibly confidential) message contents such as credit card information. In this case transport security alone is not enough.
  • How can you manage and control the various (external) clients that want to access data and/or services if systems are loosely coupled? E.g. not every client should be allowed to invoke a banking service.
  • SOA results in more Straight-Through Processing (STP), meaning processes are more frequently executed in an entirely automated fashion without human interference. Good security is key, since possible security breaches might only be detected later on. Also, the consequences can be graver due to the possibly large number of process instances.
  • Services are invoked by both internal and external consumers. A service’s security level is usually determined by its owner. In the case of external services, security will be largely determined and enforced outside an organization’s own span of control. The level of security determines the consumer’s trust: “What happens with my data if a service is not secured?”, “Can I trust a service’s result?”, and so on.

These differences clearly impact the way security should be designed within an SOA environment. They furthermore warrant an integrated and holistic approach to security in an SOA environment. Use a layered approach to security, as promoted for example by the defense-in-depth strategy.

Externalize security
For a number of reasons it is a good design principle to externalize identity management and security, even more so in an SOA environment that frequently consists of heterogeneous infrastructure. Every service having its own IAM and security design and implementation leads to a suboptimal solution, more overhead, and a greater chance of security breaches. If security is part of the infrastructure’s components (for instance intertwined in an ESB product), different products will most likely support different security standards and protocols. E.g. an application server might support SAML 1.1, the WS-Security Username Token Profile, transport security using HTTPS, and LDAPS, while the ESB product supports SAML 2.0, the WS-Security X.509 Token Profile, message security using XML DSIG, and LDAP rather than LDAPS. This is worsened when external infrastructure supports yet another subset of standards and protocols, and can cause poor interoperability. Use a separate, specialized component for security instead. This promotes reuse of security throughout your SOA environment as well as separation of concerns.

The Agent and Gateway patterns are very well suited to externalizing security. Use gateways to apply common security policies and agents for more service-specific security policies.

Security classification
Define a limited set of security classifications, for example based on the CIA triad (confidentiality, integrity, and availability), ranging from e.g. “public” to “highly classified”. Determine a minimum set of security measures per classification level. For each new service, determine its classification levels; this is usually the responsibility of the service owner. Make classification levels part of your service repository and governance processes. This results in more understandable security regulations, gives better insight into the current and future security of your environment, enables better reuse of existing security policies, and prevents reinventing the wheel when establishing security for new services. Most importantly, it results in just the right amount of security being applied, thereby saving money (strive for the lowest possible classification levels without endangering security) while applying (just) enough security.
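Such a classification scheme boils down to a lookup from classification level to minimum security measures. A minimal sketch, assuming hypothetical level names and measures:

```java
import java.util.*;

// Illustrative classification levels, ordered from least to most sensitive.
enum Classification { PUBLIC, INTERNAL, CONFIDENTIAL, HIGHLY_CLASSIFIED }

class SecurityBaseline {
    // Hypothetical minimum measures per classification level; a real
    // scheme would live in the service repository, not in code.
    private static final Map<Classification, Set<String>> MEASURES = Map.of(
        Classification.PUBLIC, Set.of("transport-security"),
        Classification.INTERNAL, Set.of("transport-security", "authentication"),
        Classification.CONFIDENTIAL,
            Set.of("transport-security", "authentication", "message-encryption"),
        Classification.HIGHLY_CLASSIFIED,
            Set.of("transport-security", "authentication", "message-encryption", "audit-logging")
    );

    static Set<String> minimumMeasures(Classification level) {
        return MEASURES.get(level);
    }
}
```

A service owner then picks the lowest level that still covers the service's risks, and the governance process checks that at least the listed measures are in place.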

Transport versus message security
There are roughly two types of security for message invocation: transport security and message security. Transport security secures a message only during transport between service consumer and service provider, using the transport layer; e.g. HTTP over SSL or TLS (HTTPS). That means messages are not protected in intermediary components such as an ESB, and not protected directly after being received by the endpoint. Message security secures the message itself, mainly through encryption of the payload, for example using public and private keys. Since message security can provide security in exactly the scope you want (so also in intermediaries and after the message has been received), it is generally preferable to transport security. Both transport and message security can be used for authentication (e.g. a signature based on certificates), integrity (e.g. a digest), and confidentiality (e.g. encryption).

Perhaps trivial but very important: use standards to promote interoperability. This includes security standards such as LDAP(S), HTTPS, SAML, XML DSIG, WS-Security (WSS), and other WS-* standards. Using standards allows secured services to be reused by (both internal and external) heterogeneous infrastructures. Next to technical standards there are also a number of security reference architectures, principles, and guidelines you can leverage.

Before we wrap up, some best practices per area.

Identity management
Use a centralized identity management repository. This avoids duplicate user management and possible inconsistencies. Divide users into different identity types if needed, such as employees, customers, and suppliers, since different rules and administration may apply to each category. Be careful in allowing external IT assets and organizations direct access to your identity management solution. In such cases, such as external hosting, consider identity provisioning to minimize security risks.

Usually you want a service provider to authenticate the original service consumer (the user identity) and not some intermediary component such as an ESB. Implement identity propagation (of tokens, username/password, etc.) so the service provider authenticates and authorizes the identity of the original user that invoked the service. That implies that all intermediary components between service consumer and provider need to be able to transport identity tokens, and possibly transform them from one format to another (e.g. from an SSO token into username/password).

Especially when authenticating and authorizing external organizations, consider the trade-off between using specific identities (Mr. X or Mrs. Y) and more general identities (organization Z). Specific identities result in better traceability and can provide more fine-grained access control, while more general identities can result in less administration: the number of different identities to manage and synchronize decreases dramatically.

Avoid generic identities such as “consultant” and “trainee” altogether.

Define a limited set of authentication levels and differentiate between information (password), possession (token, physical key, text message to a phone), and attributes (voice, fingerprint) as mechanisms. E.g. “basic-level” authentication requiring information, “middle-level” authentication requiring information and possession, and “high-level” authentication requiring an attribute or possession together with a check of ownership.

Most organizations promote SSO to improve user-friendliness and provide for better user-experience. Determine however if you want SSO for your most classified IT-assets. SSO can provide access to a multitude of IT-assets due to a security breach in only one of the IT-assets. A best-practice is to grant access to IT-assets based on authentication level; if you authenticated using basic authentication, SSO will only grant you access to IT-assets requiring the same or a lower authentication level; not to IT-assets requiring “high-level” authentication.
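The level-based SSO check above reduces to a simple ordered comparison. A minimal sketch, assuming the three hypothetical levels from the previous paragraph:

```java
// Illustrative authentication levels, ordered weakest to strongest.
enum AuthLevel { BASIC, MIDDLE, HIGH }

class SsoPolicy {
    // An SSO session grants access only to assets requiring the same or a
    // lower authentication level than the one used to sign on; enum order
    // doubles as the strength ordering.
    static boolean grantsAccess(AuthLevel sessionLevel, AuthLevel assetRequires) {
        return sessionLevel.compareTo(assetRequires) >= 0;
    }
}
```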

The SSO provider needs to be verified and trusted before you hand over authentication to that provider.

Authorization
Don’t tie rights in IT assets directly to user identities; this avoids high maintenance costs, inflexibility, and lock-in of users. A good design principle is to use a form of Role-Based Access Control (RBAC) to decouple authorization. Use attributes that do not change frequently over time, such as organizational units and/or job titles, as intermediary layers in the authorization model. Assign rights in IT assets to entities in this layer (e.g. organizational unit and/or job title) and, vice versa, assign user identities to these intermediary layer(s). Design the authorization model per identity type (customer, employee, supplier, etc.).
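The decoupling can be sketched as two mappings with roles in the middle; identities never hold rights directly. A minimal sketch with hypothetical role and right names:

```java
import java.util.*;

// Minimal RBAC sketch: identities map to roles (e.g. job titles),
// roles map to rights; removing a user from a role revokes all
// rights granted through that role in one step.
class Rbac {
    private final Map<String, Set<String>> userRoles = new HashMap<>();
    private final Map<String, Set<String>> roleRights = new HashMap<>();

    void assignRole(String user, String role) {
        userRoles.computeIfAbsent(user, k -> new HashSet<>()).add(role);
    }

    void grantRight(String role, String right) {
        roleRights.computeIfAbsent(role, k -> new HashSet<>()).add(right);
    }

    boolean hasRight(String user, String right) {
        return userRoles.getOrDefault(user, Set.of()).stream()
                .anyMatch(role -> roleRights.getOrDefault(role, Set.of()).contains(right));
    }
}
```

When a user changes jobs, only the user-to-role assignment changes; the rights attached to the role stay untouched.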

Base authorization on the work someone or some organization needs to do; no more, no less. Avoid “super-users”: usually management and/or IT staff who have gathered far more privileges over time than they’re entitled to. Increase security by assigning different roles to the various steps in sensitive processes, thereby preventing a single user from being able to execute the entire process.

Logging and monitoring
Functionality and processes in an SOA are spread over different loosely coupled components. Some logging and monitoring therefore needs to be executed at a higher level than that of an elementary service: at the composite service or process level. This gives rise to the need for a central logging and monitoring component that is able to combine and correlate decentral logs and enables monitoring at the process level. The Wire Tap pattern can be used to publish logs, sensor values, and other types of messages from services and middleware to the central monitoring component. Notifications can be managed and implemented separately from the logging and can be published by this central monitoring component. Note that this requires synchronized dates and times across the various managed components to enable correct correlation. Determine for every service whether it is allowed to continue operation if the central monitoring component fails. Is it e.g. allowed, from a security point of view, to fall back on decentral (localized) logging and monitoring when the central monitoring component is down?
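The essence of the Wire Tap pattern is that the main message flow is untouched while a copy goes to monitoring. A minimal sketch, with channels reduced to plain consumers:

```java
import java.util.*;
import java.util.function.Consumer;

// Wire Tap sketch: pass each message on to its destination unchanged,
// while publishing a copy to a monitoring channel.
class WireTap {
    private final Consumer<String> destination;
    private final Consumer<String> monitor;

    WireTap(Consumer<String> destination, Consumer<String> monitor) {
        this.destination = destination;
        this.monitor = monitor;
    }

    void accept(String message) {
        monitor.accept(message);      // copy to central monitoring
        destination.accept(message);  // original flow continues untouched
    }
}
```

In a real deployment the monitor side would be an asynchronous channel (e.g. a JMS topic), so a slow or failed monitoring component does not block the main flow.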

“Hard” security
A best practice is to divide security into a number of layers. Chart possible vulnerabilities and threats, and the corresponding principles and guidelines to counteract them. This approach results in more effective and efficient security. Examples of such layers are: network security, platform security, application security, integrity & confidentiality, content security, and mobile security. Examples of principles and guidelines are applying compartmentalization (network security), maintaining a central list of allowed and disallowed file extensions for inbound and outbound traffic (content security), and the use of hardening (platform and application security).

Oracle’s direction
In Oracle’s SOA product stack (SOA Suite 11g), security is externalized from almost all products and can be applied using policies. These policies can be configured in a management console and reused by processes and services that are packaged and deployed as SCA composites and components. The policies are based on standards such as WS-Security. Oracle Service Bus (OSB) still contains its own security functionality, though. As stated in OSB’s Statement of Direction (SOD): “The ability to attach, detach, author and monitor policies in a central fashion will be extended to the Oracle Service Bus (as it has been extended to all other components in the SOA Suite 11g).” In any case you can already secure OSB projects using OWSM.

Sunday, October 11, 2009

Some tips and tricks on migrating SOA Suite 10g to 11g

Just a few things I noticed last week when migrating BPEL and ESB projects from SOA Suite 10g to SCA composites and components in SOA Suite 11g.

Custom XSLT functions
Just like in SOA Suite 10g, you can expose Java methods as custom XSLT functions and use them at design time in the XSLT Mapper of JDeveloper 11g. An example is a custom XSLT function transforming a local bank account number into its corresponding IBAN format. While the mechanism to expose custom functions is the same as in SOA Suite 10g, the exact implementation in SOA Suite 11g is a little different. Custom XSLT functions are packaged in a JAR file that includes an extension XML file describing the functions. By registering the JAR file in JDeveloper you can use these functions at design time. You then place the JAR file on the application server running SOA Suite so the functions can also be executed at runtime. See this blog by Peter O’Brien for more detailed steps on using custom XSLT functions in SOA Suite 10g. Migrating custom XSLT functions from 10g to 11g needs to be done manually and involves the following steps:

  • Edit the extension XML file and replace the elements and namespaces according to the new XSD describing custom XSLT functions.
  • Rebuild the extension JAR and add it to JDeveloper 11g using Tools -> Preferences -> SOA from the menu. Restart JDeveloper. Inspect the log window to see if JDeveloper correctly parsed the extension JAR file; there will be an error or warning in case of an incorrect configuration.
  • The custom XSLT functions are now listed in the XSLT Mapper.
  • Place the JAR file in the BEA_Home/user_projects/domains/domain_name/lib directory to make the functions available at runtime.

See OTN for more detailed information.
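The Java side of such a custom function is just a class with a public static method; the extension XML and JAR packaging described above do the wiring. A minimal sketch, where the class name, method, and padding logic are all hypothetical (real IBAN derivation also involves a check-digit calculation):

```java
// Hypothetical helper whose static method is exposed as a custom XSLT
// function via the extension XML described above. A class registered
// for real would be declared public in its own source file.
class BankFunctions {
    // Toy conversion: prefix a local account number with a country code
    // and left-pad it to ten digits.
    public static String toIban(String countryCode, String account) {
        String padded = "0000000000" + account;
        padded = padded.substring(padded.length() - 10);
        return countryCode + padded;
    }
}
```

In the XSLT Mapper the function would then appear under its registered namespace and could be called like any built-in function.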

Sensors and tracking composite instances
There are a few ways of tracking composite instances and/or relating them to business entities, such as orders or invoices, that are processed by the service. End users frequently want to know, given a particular business entity, what process instance(s) are related to it. In SOA Suite 10g (and more particularly BPEL PM) you could use the setIndex function within an embedded Java activity or use sensors to publish this information. You would of course need a subscriber to process these sensor values and store the relation between process instances and business entities somewhere.

In SOA Suite 11g you have the great new feature of composite sensors to achieve this. See for example this blog by Demed L’Her. Another way is to set the name of an SCA composite instance. By default the instance name is not set and the corresponding name column in the EM 11g Fusion Middleware Control is empty. You can set the composite instance name at design time from a Mediator or BPEL component using the setCompositeInstanceTitle XPath function or the equally named Java extension. Just like with composite sensors, you can then search for composite instances by name. This is documented in Oracle’s Fusion Middleware Administrator’s Guide for Oracle SOA Suite.
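In a BPEL component this typically ends up in an assign activity. A rough sketch of what that can look like; the variable name, prefixes, and payload path are all illustrative, so check the Administrator's Guide for the exact prefix bindings in your project:

```xml
<!-- Illustrative only: set the composite instance title from an order id
     so instances can be found by name in EM 11g. -->
<assign name="SetInstanceTitle">
  <copy>
    <from expression="ora:setCompositeInstanceTitle(concat('Order-',
        bpws:getVariableData('inputVariable','payload','/ns1:order/ns1:orderId')))"/>
    <to variable="titleVar"/>
  </copy>
</assign>
```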

Note that EM 11g Fusion Middleware Control only shows sensor actions and values that are stored in the database. As stated in the Developer’s Guide: “If you created JMS sensors in your BPEL process, JMS sensor values do not display in Oracle Enterprise Manager Fusion Middleware Control Console. Only sensor values in which the sensor action is to store the values in the database appear (for example, database sensor values).”

Domain Value Maps
Another great improvement in SOA Suite 11g is that Domain Value Maps (DVMs) are now available to all components in SCA and no longer limited to ESB (now Mediator), as is the case in SOA Suite 10g. In SOA Suite 11g you store DVMs locally in your project or use MDS. Using DVMs from XSLT transformations has slightly changed; specifically, the namespace and XSLT function name are different. If you automatically migrate SOA Suite 10g projects to SCA components and composites using the migration tool, or by reopening the projects in JDeveloper 11g, this is handled automatically. However, if you rebuild SCA composites manually using the artifacts from your previous SOA Suite 10g project, you have to take this into account and change the namespace and XSLT function name yourself.
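To illustrate the kind of change involved, a rough before/after sketch of a DVM lookup in XSLT; the DVM name, columns, and exact namespace URIs bound to the prefixes are illustrative, so verify them against the 11g Developer's Guide:

```xml
<!-- 10g (ESB) style: prefix bound to the old ESB lookup namespace. -->
<xsl:value-of select="orcl:lookup-dvm('CountryCodes', 'ISO2', ISO2Code, 'ISO3', '')"/>

<!-- 11g (SCA) style: prefix bound to the new DVM namespace, function
     renamed to lookupValue, and the map referenced by its .dvm file. -->
<xsl:value-of select="dvm:lookupValue('CountryCodes.dvm', 'ISO2', ISO2Code, 'ISO3', '')"/>
```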

Sunday, September 6, 2009

Migrating Web Services from JDeveloper 10g to 11g

Although most of the migration steps from JDeveloper 10g/OC4J to JDeveloper 11g/WebLogic are automated, there are some exceptions. One such case where you have to roll up your sleeves and do some coding is EJB 3 Session Beans that are exposed as Web Services using JAX-WS annotations. JDeveloper 10g generates a separate Java interface containing the JAX-WS Web Service annotations when you use the EJB 3 Session Bean wizard and select the option to create a Web Service interface. Note that this option isn’t available in JDeveloper 11g, but you can right-click an EJB Session Bean and select the generate Web Service option, which gives you the same result.

When migrating the JDeveloper 10g workspace to a JDeveloper 11g application (by opening the jws file in JDeveloper 11g), most of the migration work is done automatically; for example, the workspace and project files are updated and existing deployment plans are converted.

If you then deploy the project to the integrated WebLogic server, everything seems to deploy and run just fine. However, if you expand the deployment in the WebLogic Server Administration Console, you’ll see that no web services are listed, only EJBs.

Here are some simple steps to correct this:

  1. Remove the Java interface containing the JAX-WS Web Service annotations that was generated in JDeveloper 10g and remove the interface from the implements statement in the EJB Session Bean class.
  2. Add a @WebService annotation to the EJB 3 Session Bean containing the following arguments: name, serviceName, and portName. Check the WSDL of the currently deployed Web Service generated with JDeveloper 10g to obtain its metadata, such as name, namespace, and port name. These values can be used in the new @WebService annotation of the migrated Web Service in JDeveloper 11g so that Web Service clients don’t break due to different namespaces, port names, endpoints, and so on. You can also use other annotations to influence the endpoint and WSDL of the Web Service. However, mind that some annotations are WebLogic-specific and not part of the JAX-WS standard.
  3. Optionally add other JAX-WS annotations as needed.
  4. Replace the JAX-RPC project libraries with the JAX-WS Web Services library.
  5. The current WebLogic JAX-WS stack (more specifically, the JAXB implementation) does not support java.util.Map and java.util.Collection family types as Web Service method return or input types. Deployment fails with the messages “java.util.Map is an interface, and JAXB can’t handle interfaces” and “java.util.Map does not have a no-arg default constructor”. A logical workaround would be to replace these types with concrete implementations that have a no-argument constructor, for example java.util.HashMap. Although deployment then succeeds, the information contained in the map is lost at runtime when requests/responses are (un)marshalled. The workaround I settled on was to replace the java.util.Map with a two-dimensional array. Although I’m not really happy with it, it works for now.
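The Map workaround from step 5 can be sketched as a pair of conversion helpers; the class and method names are illustrative, not part of any generated code:

```java
import java.util.*;

// Workaround sketch for JAXB's lack of java.util.Map support: flatten a
// map into a String[][] of key/value pairs for the wire, and rebuild it
// on the receiving side.
class MapArrayConverter {
    static String[][] toArray(Map<String, String> map) {
        String[][] result = new String[map.size()][2];
        int i = 0;
        for (Map.Entry<String, String> e : map.entrySet()) {
            result[i][0] = e.getKey();
            result[i][1] = e.getValue();
            i++;
        }
        return result;
    }

    static Map<String, String> toMap(String[][] array) {
        // LinkedHashMap preserves the pair order from the array.
        Map<String, String> map = new LinkedHashMap<>();
        for (String[] pair : array) {
            map.put(pair[0], pair[1]);
        }
        return map;
    }
}
```

The exposed Web Service method then takes or returns the String[][] type, and callers convert at the boundary.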

Deploy the project and voila, the WebLogic Server Administration Console shows both EJBs and Web Services.

So “no coding required”, or just a little bit perhaps :-) ?

P.S. Some useful links:

Friday, July 31, 2009

Exception-handling in JAX-WS Web Services on WebLogic

There is more to exception handling in JAX-WS Web Services than meets the eye, especially when throwing custom (checked) exceptions from Java methods that are exposed as Web Service operations. There’s a nice blog by Eben Hewitt on using SOAP Faults and Exceptions in Java JAX-WS Web Services. I recommend reading it, especially if you get the following error: java.lang.NoSuchMethodException. This is one of the issues you might run into when migrating from Oracle Application Server (OC4J) to Oracle WebLogic Server.
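The shape JAX-WS expects from such a custom exception is quite specific: a fault bean for the detail, a (String, faultBean) constructor, and a getFaultInfo() accessor; the NoSuchMethodException above typically means one of these is missing. A sketch with illustrative names, assuming the JAX-WS API (javax.xml.ws.WebFault) is on the classpath:

```java
import javax.xml.ws.WebFault;

// Fault bean carrying the SOAP fault detail; a plain JavaBean.
class AccountFaultBean {
    private String errorCode;
    public String getErrorCode() { return errorCode; }
    public void setErrorCode(String errorCode) { this.errorCode = errorCode; }
}

// JAX-WS maps a checked exception to a wsdl:fault when it follows this
// pattern: a (String, faultBean) constructor plus a getFaultInfo() accessor.
@WebFault(name = "AccountFault")
class AccountException extends Exception {
    private final AccountFaultBean faultInfo;

    AccountException(String message, AccountFaultBean faultInfo) {
        super(message);
        this.faultInfo = faultInfo;
    }

    public AccountFaultBean getFaultInfo() {
        return faultInfo;
    }
}
```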

Wednesday, July 29, 2009

Best practices 3 – Oracle ESB and Mediator

This is the third post in our SOA and BPM best practices series. This blog provides best practices for Oracle ESB (Oracle Fusion Middleware 10g) and, where it concerns routing and transformation, its successor: the Mediator component in SCA (Oracle Fusion Middleware 11g). The previous blog in this series is about Web Services best practices.

Use a bus. Maybe stating the obvious, but there are still projects that stall, exceed budget, or fail altogether because no ESB is used and “SOA plumbing” is implemented (or attempted) in an orchestration tool, custom logic, and so on. Use an ESB for decoupling, virtualization, abstraction, transformation (of data as well as protocols), and content-based routing. Decouple this type of functionality from your orchestration and workflow.

Migrating from OFM 10g to OFM 11g.

  • If you don’t migrate to SCA and have used Oracle ESB as a stand-alone ESB, migrate to OSB. This requires reimplementing OESB flows as OSB flows.

If you migrate to SCA:

  • For non-reusable ESB flows that perform “internal” transformation and routing functionality within the SCA runtime: create a mediator component that is not directly exposed in its containing SCA composite and add your other components that use the mediator -such as BPEL components- to that composite. Open the OESB project in JDev 11g to create an initial composite.
  • For reusable ESB flows that perform “internal” transformation and routing functionality within the SCA runtime: create a composite containing only one mediator component that is exposed using a service. Other SCA composites can reuse this “mediator” composite. Open the OESB project in JDev 11g to create an initial composite.
  • For ESB flows that interact with the “outside world”; in other words connect the SCA runtime to other runtimes and/or external parties such as suppliers and clients: migrate to OSB.

Encapsulation and exposing operations. As with Web Services in general, do not expose all routing service operations and adapter operations. This promotes encapsulation; only expose what is or will be reusable. Also see this post about improved encapsulation in OFM 11g. In 10g you cannot “hide” an ESB flow, but you can minimize the operations that are invocable by disabling the option “Invocable from an external service”. In 11g you can hide a Mediator within its composite by making sure there is no direct service and wire exposing it. This is achieved by disabling the “Create Composite Service with SOAP Bindings” option when creating a Mediator component.

Data enrichment. Although data enrichment is typically something you would do in an ESB (for example when implementing VETO: validate, enrich, transform, and operate), don’t use Oracle ESB for it. Because it lacks temporary variables, it is not well suited for data enrichment when data comes from different sources. You can use the $ESBREQUEST variable to ameliorate this, but it is still not a great workaround. In 10g, use BPEL PM or OSB for complex data enrichment; in 11g, use OSB or SCA composites containing multiple Mediator and/or BPEL components.

XML. Create a public_html folder in every ESB project created with JDeveloper 10g and place non-generated XML artifacts such as XSLTs and XSDs in it. Leave generated XML artifacts such as TopLink descriptors from the DB Adapter in the (default) root folder. When editing Mediators in 11g, XSLT files are automatically created in an xsl directory and XSDs are placed in an xsd directory.

Deployment. Use Oracle ESB Ant scripts to deploy to test, acceptance, and production environments. Use deployment plans to configure endpoint and adapter settings per environment (DTAP). Make sure you don’t mix Ant and JDeveloper deployment, since that can cause problems in your ESB runtime. For SCA composites, use configuration plans.

Structuring. Use ESB Systems and Service Groups in 10g to structure ESB flows. A possibility would be to use an ESB System per business domain and an ESB Service Group per project. For example: ESB System “Finance” that contains ESB Service Group “FIN_ESB_Process_Invoice”.

XSLT extension functions. Custom XSLT functions can be a powerful mechanism to implement your own transformation logic but it can also break portability when moving from one environment to the other due to the required configuration and deployment steps. The creation of user-defined extension functions in OFM 11g is different from 10g. See Appendix B of the Oracle Fusion Middleware Developer’s Guide for Oracle SOA Suite.

Clustering. Clustering Oracle ESB is not a trivial thing to do. Only cluster when needed for QoS (Quality of Service) reasons such as high availability, failover, and throughput. Mind non-concurrent adapters such as the FTP and File adapters when clustering.

Versioning. Oracle ESB 10g does not support versioning natively. You can include the version number in the ESB project name and deploy it as a new flow alongside older versions. In OFM 11g, mediators are part of composites and are therefore versionable.

Transactionality. Transactionality of ESB in 10g (including support for XA) depends on several factors and can therefore be somewhat complex. These factors include the invocation mechanism (through BPEL PM, ESB, or another technology or client), the binding protocol (SOAP versus WSIF) used to invoke ESB flows, the use of synchronous or asynchronous routing rules, the use of different ESB Systems in an ESB project, and so on. Read Oracle’s SOA Suite Best Practices Guide and this presentation on transactions, error handling, and resubmit.

Oracle’s best practices guide. Read Oracle’s SOA Suite Best Practices Guide for more tips and tricks.

The next blog in this series will be about security, identity management, and access management in a SOA environment.

Friday, July 17, 2009

Installing JDeveloper 11g

Two things I ran into when installing and configuring Oracle Fusion Middleware JDeveloper 11g that are worth mentioning:

  • Setting the User Home Directory. As documented in the OFM 11g Installation Guide, you can specify the user home directory, which is used as the default location for new projects and in which JDev stores user preferences and other configuration files. If you explicitly set this location on a Windows system using the ide.user.dir variable in the jdev.boot file, make sure you use a notation like D:/workspace/ofm11g and not D:\workspace\ofm11g. Using backslashes results in the user dir [OFM 11g Home]\Middleware\jdeveloper\jdev\bin\workspaceofm11g being used instead of D:/workspace/ofm11g.
  • Installing Additional Oracle Fusion Middleware Design Time Components. When installing additional OFM design time components such as WebCenter and the SOA Suite Composite Editor, make sure you restart after installing each single component. Do not install the WebCenter and SOA Suite editors without restarting in between; if you do, only one of the additional components will be visible the next time you start JDev.
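The swallowed backslashes can be reproduced with plain Java: files in the Java properties format treat the backslash as an escape character, so unrecognized escapes like \w simply lose the backslash. Whether jdev.boot is parsed exactly like a properties file is an assumption on my part, but it illustrates why the forward-slash notation is the safe choice.

```java
import java.io.StringReader;
import java.util.Properties;

public class BackslashDemo {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        // Simulate a properties-style line; the literal file content
        // would be: ide.user.dir=D:\workspace\ofm11g
        p.load(new StringReader("ide.user.dir=D:\\workspace\\ofm11g"));
        // The backslashes are treated as escape characters and dropped:
        System.out.println(p.getProperty("ide.user.dir"));  // prints D:workspaceofm11g
    }
}
```

With forward slashes (D:/workspace/ofm11g) the value survives unchanged, which matches the behavior described above.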

Once you’ve downloaded OFM 11g from OTN, installation is easy and straightforward.

Wednesday, July 8, 2009

Best practices 2 – Web Services

This is the second post in our SOA and BPM best practices series. This blog is about Web Services and provides a mix of general tips and more specific tips for Web Services that are implemented using Java and JEE. You can find the first blog in this series here.

Approach. Decide upfront, based on the requirements and constraints, which approach to Web Service development best suits your situation: top-down (or contract first), bottom-up, or meet-in-the-middle.

  • Top-down or contract first. The starting point here is the contract of the Web Service: its WSDL. You either design it yourself or it is provided as a 'given fact'. From the WSDL you generate the implementation. If the contract changes frequently, regeneration can cause difficulties since the generated implementation is overwritten. If you use this method, make sure you don’t change the generated artifacts.
  • Bottom-up or implementation first. The starting point is the implementation; all Web Service artifacts, such as the WSDL, are generated from it. This is a fast approach when you want to expose existing components as a Web Service. However, you need to be careful because you have limited control over the generated Web Service artifacts, and it is therefore easy to break an interface when the Web Service is regenerated.
  • Meet-in-the-middle approach. Here you define both contract and implementation yourself and later on create the glue between them. In the case of Java you can use the JAX-WS and JAXB APIs and code to create this glue. This is a very flexible approach: you can change both the WSDL and the implementation. It requires more work in the beginning, but is easier to change later on.
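To make the contract-first starting point concrete, here is a minimal WSDL 1.1 skeleton (service name, namespace, and types are made up for illustration). In the top-down approach this document is designed or given first, and the implementation is generated from it; in the bottom-up approach it is this document that gets generated and can silently change:

```xml
<definitions name="CustomerService"
             targetNamespace="http://example.com/customer"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://example.com/customer"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <types>
    <xsd:schema targetNamespace="http://example.com/customer">
      <xsd:element name="GetCustomerRequest" type="xsd:string"/>
      <xsd:element name="GetCustomerResponse" type="xsd:string"/>
    </xsd:schema>
  </types>
  <message name="GetCustomerInput">
    <part name="body" element="tns:GetCustomerRequest"/>
  </message>
  <message name="GetCustomerOutput">
    <part name="body" element="tns:GetCustomerResponse"/>
  </message>
  <portType name="CustomerPortType">
    <operation name="getCustomer">
      <input message="tns:GetCustomerInput"/>
      <output message="tns:GetCustomerOutput"/>
    </operation>
  </portType>
  <!-- binding and service elements omitted for brevity -->
</definitions>
```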

Compliance. A Web Service that isn’t standards-compliant is less (re)usable. Make sure your Web Service is compliant with the WS-* standards by adhering to the profiles of the WS-I (Web Services Interoperability Organization).

Exposing operations. Don’t expose all methods as Web Service operations by default when using a bottom-up or meet-in-the-middle approach. Only expose those methods that are actually needed by service consumers. This promotes encapsulation and prevents access to ‘internal’ methods.

Products. Nowadays most products and technologies support Web Services. Keep their pros and cons in mind when deciding what technology to use. Java for example provides better support and a better runtime for Web Service development and XML-processing than relational databases.

Large XML documents. Avoid creating Web Services that receive, process, and/or send very large XML documents. XML processing is resource-intensive and relatively slow and therefore not well equipped for handling bulk data. Use other technologies such as database technologies or ETL tools for that purpose.

Quality of Service (QoS). It’s easy to develop basic Web Services, but hard to make them robust, secure, and scalable (enough). Address these QoS (or non-functional) requirements at the beginning of the project instead of discovering at the end that they are not met.

Annotations. Be careful when using vendor-specific annotations (as opposed to the general annotations defined in the JAX-RPC, JAX-WS, and JAXB standards). Although vendor-specific annotations such as those in WebLogic can be very powerful, they break the portability of Web Services and tie them to a specific runtime.

Migration to WebLogic. See this blog for migrating JAX-WS Web Services from JDeveloper 10g/OC4J to JDeveloper 11g/WebLogic. Note that a bottom-up approach was used in the blog: after migration the WSDL had changed (among other things, the namespaces), causing the invocation to fail. This is a typical example illustrating the advantages of a top-down or meet-in-the-middle approach.

The next post in this series will be about best practices for Oracle ESB and Mediator (FMW 11g).

Monday, June 29, 2009

Best practices for BPM, SOA and EDA

While visiting ODTUG Kaleidoscope 2009 in Monterey and talking to fellow BPM, SOA and EDA adepts, I got the idea of creating a best practices and lessons learned blog series. This first blog is dedicated to best practices in the BPM and SOA space, based on cases from a presentation by Lonneke Dikmans. Subsequent blogs will dive into best practices and lessons learned for a specific product, methodology, or technology.

Case I: Introducing BPM. Mistake: Organizational impact underestimated. Explanation: The BPM project was delivered successfully and the business was heavily involved. Yet the result was never used, because after delivery they realized that changes in both the organization and the software were needed. Best practice 1. BPM and SOA are about business, IT and humans. Observe how people work, don’t just ask them.

Case II: Notifications. Mistake: Dependencies between processes modeled directly in the processes themselves. Explanation: Process flow is sometimes influenced by other processes. This was modeled into every process, making processes tightly coupled to each other and hard to change. It even resulted in deadlocks. Best practice 2. Use events to notify running processes. Best practice 3. Monitor & avoid exceptions.

Case III: New technology. Mistake: Using BPEL as a general-purpose language. Explanation: BPEL is a domain-specific language; it was designed to orchestrate (web) services. Someone coming from a homogeneous back office environment (for example PL/SQL) could decide to rewrite everything in BPEL, even the service implementations. The progress of such a project is very slow, and things that used to be easy become very hard. Best practice 4. BPEL is a Domain Specific Language (DSL); use BPEL for orchestration only. Best practice 5. Use an Enterprise Service Bus (ESB) to expose services to consumers, including BPEL. Best practice 6. Use Java for service implementation. Best practice 7. Use PL/SQL for persistent data manipulation and data integrity rules. Best practice 8. Use rules when you need customization, inference, or when business rules are volatile.

Case IV: Quality of Service. Success: involve administrators early. Explanation: Someone designed their first SOA project with quality of service in mind. In production all the non-functional demands were met. Best practice 9: Design architecture for Quality of Service from the start … but only what you really need! Not everyone needs clustering, fail-over, high-availability, and so on.

Case V: Domains. Success: combine a top-down with a bottom-up approach. Explanation: By defining six business domains and one supporting domain, the service taxonomy and event definitions were easier to keep track of. It also became possible to define an owner for some of the services and design guidelines for services that cross domains. Best practice 10. Use domains and layers to facilitate making a taxonomy of services and defining design guidelines.

Conclusion. Think big, start small. Meet in the middle requires aligning Business, IT and People. Architects can be intermediaries. Sharing knowledge and experience is necessary.

The next blog in this series will dive into Web Services best practices.

Friday, June 26, 2009

ODTUG Kaleidoscope 2009

ODTUG Kaleidoscope 2009 is coming to an end at the time of this writing. After some chilly days the sun started to shine in Monterey and turned this great event into an even better one. As mentioned almost every day, ODTUG is one of the few conferences that has grown compared to last year. This was my first visit to ODTUG. I actually thought it would be as big as Open World :-) But it’s about a hundred times smaller. And actually that’s cool; it’s much more intimate. Here you really get the chance to speak to product managers, interact with peers, and meet lots of new and interesting people!

So what were the highlights?

SOA and BPM Symposium on Sunday. For the first time there was a separate symposium and track dedicated to SOA and BPM. It was put together by ACE Directors Lonneke Dikmans, Lucas Jellema and Mike van Alst. Although the APEX and database tracks attracted larger audiences, we had a very interesting and interactive day with a mix of newcomers and SOA-adepts. The day was split into a business and a technology part. Breakout sessions were mixed with great presentations by Demed L’Her, Geoffroy de Lamalle, and Clemens Utschig. Read about the results -SOA and BPM approaches- on Oracle Wiki.

Fusion Apps demo. The first official demo of the upcoming Fusion Apps. It looked really smooth! It’s built on top of the new Oracle Fusion Middleware 11g stack (WebCenter, ADF, SOA Suite). Lots of social networking capabilities and interaction. Expect to see more of this. Some technology stats: approximately 11,000 task flows, between 5,000 and 6,000 tables, tens of thousands of ADF BC View Objects, and so on.

Oracle ACE dinner. Great dinner followed by a bonfire and s’mores on the beach!

Presentations in the SOA and BPM track. Lots of interesting presentations here. I was an ambassador for Lonneke’s presentation and did a presentation myself on SOA in a database-centric environment. Also great presentations by Roman Dobrik on BPEL development patterns, Chris Judson on canonical data models, Mauricio Naranjo on a government SOA project in Latin America, Samrat Ray on SCA in SOA Suite 11g, Mark Simpson on tools for business processes, and Lucas Jellema on SOA in an Oracle classic stronghold.

Meeting fellow geeks. You want to meet people that drive cars with license plates like “BPEL” or “WEB 2 OH”? Find them at ODTUG!

Great conference! Thanks to everyone, and looking forward to meeting everyone again at Oracle Open World 2009!

Thursday, June 18, 2009

Passive adapters in Oracle ESB that won’t be activated

Configuring SOA Suite 10g for high availability (HA) isn’t the easiest thing to do. Several administrators I spoke with and worked with in projects brought this up. I really hope that FMW 11g, besides all the new functionality, enhancements, and support for new standards such as SCA, also makes things like HA easier to configure.

One particular issue we recently ran into in one of our projects has to do with the use of non-concurrent adapters in Oracle ESB when upgrading our clustered environment. Non-concurrent (or singleton) adapters are adapters that cannot run in an active-active configuration since the underlying infrastructure does not provide a good locking mechanism. Examples are file and FTP adapters. JMS and database adapters, on the other hand, support concurrency. For non-concurrent adapters you have to ensure that there is only one adapter instance active at runtime. Otherwise you could have two active file adapters both reading the same file, starting two ESB flows instead of one. Furthermore, you want to have fail-over: if the ESB RT (runtime) node on which the active file adapter is running fails (or the adapter itself fails), the passive adapter on another ESB RT node should be activated. In earlier SOA Suite 10g releases you had to install and configure a separate ESB RT for this (ESB Singleton) and deploy non-concurrent adapters to this separate node. Real overkill. Fortunately, in later versions you can deploy non-concurrent adapters to the existing ESB RTs and configure these adapters in an active-passive configuration by setting the clusterGroupId property. The jGroups protocol is then used so that only one instance of all adapters that share the same clusterGroupId value is activated.

When we upgraded SOA Suite, none of our file adapters in the acceptance environment were activated anymore! After some investigation it turned out that ESB now uses its own internal jGroups configuration instead of the jGroups configuration specified in the global jgroups-protocol.xml file, as was the case in the earlier release. That isn’t a problem by default. However, in our case both our test and acceptance environments are clustered and run in the same network. The internal jGroups configurations of test and acceptance probably use the same IP and subnet addresses by default. That means all adapters of all ESB projects in the same network with the same clusterGroupId are put into the same active-passive group. For ESB project “A” only one file adapter instance for test was active; the same file adapters for ESB project “A” for acceptance were all in passive mode. Luckily you can specify the useJgroupConfigFile property for an ESB endpoint and set it to true to enforce the use of the jgroups-protocol.xml configuration file, as in the earlier release, and then configure a different IP and subnet address combination for test and acceptance. That way the non-concurrent adapters in the same ESB projects but in different environments are separated even when they have the same clusterGroupId. Another workaround would be to include the environment name in the clusterGroupId value, e.g. MY_ESB_TEST_ID and MY_ESB_ACCEPTANCE_ID.
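For illustration, the active-passive grouping described above comes down to two endpoint properties on the inbound adapter service. The XML below is only a sketch of how such properties might appear in an .esbsvc file; the exact surrounding structure may differ, and the same properties can also be set through the ESB Console.

```xml
<!-- Sketch: endpoint properties on an inbound file adapter service -->
<endpointProperties>
  <!-- All adapters sharing this id form one active-passive group -->
  <property name="clusterGroupId" value="MY_ESB_TEST_ID"/>
  <!-- Force use of the global jgroups-protocol.xml configuration -->
  <property name="useJgroupConfigFile" value="true"/>
</endpointProperties>
```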

Tuesday, June 2, 2009

Web Services article on OTN

Oracle recently released Oracle Enterprise Pack for Eclipse (OEPE) 11g. OEPE is a certified set of Eclipse plug-ins that is designed to help develop, deploy, and debug applications for Oracle WebLogic Server. I wrote an article on OEPE’s Web Service capabilities, and more specifically its support for the JAX-WS and JAXB standards. The article includes a step-by-step tutorial, explains different approaches to Web Service development, and concludes with several best practices. You can find the article on Oracle Technology Network (OTN).

Monday, April 20, 2009

Publishing process information using sensors and AQ

Sensors in Oracle BPEL PM provide a nice mechanism to publish in-flight process data in a fire-and-forget fashion. The information that is published (sensor data) is separated from the communication mechanism (sensor action), which is a good thing: it promotes separation of concerns. JMS and BAM are supported communication mechanisms for publishing sensor data; native Oracle AQ is not. It is, however, possible to use JMS over AQ instead of in-memory or file-based JMS for transactional and persisted message delivery and integration with database components. Since it is a bit of a configuration nightmare, I’ve described the steps involved in publishing sensor data from BPEL PM to AQ-backed JMS and subscribing to these events from ESB. These steps apply to Oracle SOA Suite 10g.

  • Create a new AQ queue or topic (multi-consumer queue) in the database; e.g. MyQueue. These AQ destinations are typically created in the schema “JMSUSER”. Make sure its payload is of type JMS text (SYS.AQ$_JMS_TEXT_MESSAGE) and not XMLType. If not already there, configure a connection pool and data source in OC4J that refers to the database schema that owns the AQ destinations.
  • Create a Database Persistence provider in OC4J. In Enterprise Manager go to oc4j_soa –> Administration –> Database Persistence and click “Deploy”. You could e.g. use JMSUserRP as Resource Provider name and JMSUserRA as Resource Adapter name. Point to the data source as described in the previous step.
  • Create a Resource Adapter connection factory for the newly created Resource Adapter by navigating to oc4j_soa –> Applications –> Default. Select the module from the Resource Adapter list that you’ve just created. Create a new Connection Factory and new Administered Object. Use the queue interfaces -e.g. javax.jms.XAQueueConnectionFactory- in case you publish to an AQ queue, and the topic interfaces if you use a multi-consumer AQ queue -e.g. oracle.j2ee.ra.jms.generic.AdminObjectQueueImpl. Examples of JNDI names are JMSUserRA/XAQCF for the connection factory and JMSUserRA/Queue for the administered object.
  • Define a sensor and sensor action in BPEL PM. Either choose “JMS Queue” or “JMS Topic” as “Publish Type” in the sensor action. Enter the connection factory JNDI name from the previous step as JMS Connection Factory, e.g. JMSUserRA/XAQCF and the JNDI location of the AQ queue or topic. The last value should be like: [JNDI name of the "Administered Object"]/[either "Queues" or "Topics" depending on the AQ type]/[Name of the AQ queue or topic]; for example JMSUserRA/Queue/Queues/MyQueue.
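As a sketch of the first step, an AQ destination with a JMS text payload can be created with the standard DBMS_AQADM package along these lines (the queue table name is a made-up example):

```sql
-- Run for the JMSUSER schema; creates a single-consumer queue with a
-- JMS text payload, as required by the sensor action.
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'JMSUSER.MYQUEUE_TAB',
    queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE',
    multiple_consumers => FALSE);  -- TRUE for a topic (multi-consumer queue)
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'JMSUSER.MYQUEUE',
    queue_table => 'JMSUSER.MYQUEUE_TAB');
  DBMS_AQADM.START_QUEUE(queue_name => 'JMSUSER.MYQUEUE');
END;
/
```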

Now you can create an ESB project that subscribes to the published sensor events and actually does something useful with them.

  • Create an inbound JMS adapter in a new ESB project. Enter the Resource Adapter connection factory -e.g. JMSUserRA/XAQCF- as JNDI name in the first step and select the correct AQ destination from the browse list in the next step.

And voilà, you’re done.

There are two sides to this. The first, though not the most important, is the technical implementation and configuration details described above.

The second is that it provides a nice pattern. You can use sensors to publish relevant in-flight process data from running instances. This promotes decoupling since BPEL PM does not (need to) know who is interested in the sensor data; it might even be that nobody is. Furthermore, you can use ESB’s fan-out and content-based routing patterns and connectivity features to route process data to all interested components, possibly filtering and transforming the data along the way.

Thursday, April 2, 2009

Behind the scenes of OBUG, literally!

This week I attended the OBUG Benelux Connect 2009 user conference. OBUG stands for the Oracle BeNeLux (Belgium, Netherlands, and Luxembourg) User Group. The conference was held at the Metropolis Cinema Complex in Antwerp. One of the sessions I planned to attend was the “Report from the R&D Lab – Analyzing the upcoming 11g Release of Fusion Middleware” session by Lonneke Dikmans and Lucas Jellema. The presentation was held on behalf of WAAI, a collaboration of four Dutch companies in which Lucas, Lonneke, and I participate. The goal of the WAAI collaboration is to share, bundle, and expand knowledge on the upcoming Fusion Middleware 11g release. In this presentation, the initial findings and research of WAAI would be made available to the audience, including real-world examples and demonstrations. At least, that was the plan...

From Kuassi Mensah of Oracle (who gave an earlier presentation on Database Web Services and SOA) we heard that you couldn’t hook up your own laptop. That meant no demos! And nobody had informed the speakers about this! Even stranger since Javapolis is held at the same venue and tons of demos are given there each year. While Lonneke, Lucas, and I chatted about this, we got the idea that I could run the demo backstage while they did the presentation. We asked the Metropolis crew, who were more than willing to help out. Thanks a lot for that, guys!

Time was running out while Lonneke and Lucas showed me what to demo, and virtual machines, SOA composites, schemas, and other stuff quickly moved from one laptop to another. Lonneke and Lucas stayed remarkably calm given the situation. Just a few minutes before the presentation we crawled through a secret hatch in the back of the cinema. While I “installed” myself right next to the digital projectors, professional sound systems, buttons, wires, and tons of popcorn, Lonneke and Lucas went back to do the presentation. Guided by their instructions during the presentation (we couldn’t see each other) we still managed to show Fusion Middleware 11g stuff to the audience :-) It turned out to be a great presentation after all, despite the organization, or lack of it. It was also quite educational when it comes to the inner workings of cinemas, and good for the Friday afternoon talks with colleagues :-)

Thursday, March 12, 2009

Custom adapter article on OTN

A while ago someone asked me if you can create your own adapter and use it from Oracle SOA Suite; more specifically, if you can create an inbound e-mail adapter (not available out-of-the-box in SOA Suite) that polls for new mail messages and use it as an activation agent for starting a BPEL process or ESB flow.

I knew this was possible (at least, the possibility was documented) and searched for a how-to. I couldn’t find one explaining all the steps involved. However, I did find lots of questions on the OTN forums asking how to achieve this. I thought it would be nice to write an article about it after I got it working. It’s published on OTN.

The article includes a step-by-step tutorial for building the adapter and plugging it into Oracle SOA Suite components such as BPEL PM and ESB. The article also briefly discusses adapter support, offerings, and convergence in future Oracle Fusion Middleware releases that incorporate former BEA products.

Drop me a line if you’re interested in an example that uses an outbound adapter instead of an inbound one. I’ll see if I can add such an example in the future.