Sunday, October 25, 2009

Presentations Oracle OpenWorld 2009

Oracle OpenWorld and Oracle Develop 2009: It’s a Wrap! Just like last year, an awesome event! Read about some of the highlights and experiences in this previous blog.

Lonneke Dikmans and I presented the following two sessions at Oracle OpenWorld 2009, which can be viewed here:

Approach to Oracle Fusion Middleware 11g
This session presents an approach to the strategic Oracle Fusion Middleware 11g components, using a customer case and in-depth knowledge of the new Oracle SOA Suite 11g. The case study covers a car leasing firm that migrated from Oracle SOA Suite 10g and Oracle WebCenter 10g to Oracle’s strategic platform with Oracle WebLogic solutions and Oracle Application Development Framework 11g.

Topics:

  • Overview of the customer’s SOA environment and infrastructure
  • Migrating to Oracle WebLogic solutions and Oracle Application Development Framework 11g and how a SOA environment affects the transition
  • New features of Oracle SOA Suite 11g and how to migrate to it, with a focus on Oracle Service Bus and Service Component Architecture

Portals: The Way to Realize User Experience in a Service-Oriented Architecture? (IOUG/ODTUG)
Portals seem like a natural fit for realizing the front end in a SOA. This session describes two customer cases in which portals were used to present services to end users. In the first case, a Dutch municipality used Oracle Portal in conjunction with Oracle SOA Suite to offer personalized information and products and services to citizens. In the second case, a car leasing company used Oracle WebCenter as a process portal for users for part of its procurement process. In both cases, the portal did not offer the expected benefits to the organization or the end users. The presentation covers possible use cases for the application of portal technology and the critical success factors for portals in SOA and BPM environments.

Tuesday, October 20, 2009

Best practices 4 – Security and Identity Management

This is the fourth blog in a series of BPM and SOA best-practices. The previous blog in this series was on Oracle ESB and Mediator. This blog will discuss security and identity management in an SOA-environment.

So what exactly is it?
IT-security has become increasingly important over the last decades. While security was at first frequently treated as a necessary evil, it has nowadays matured into a separate area of expertise. There can be a lot of confusion about what exactly security and identity management encompass; everyone has a different view on it. When discussing these topics, first agree on their scope before you delve into them. In this blog the subject is divided into:

  • Identity management
  • Authentication; including Single Sign-On (SSO)
  • Authorization
  • Logging and monitoring
  • “Hard” security; more technical security including confidentiality and integrity of data, usage of firewalls, IDS/IPS products, and so on.

The first three together encompass Identity and Access Management (IAM). “Soft” security like creating security awareness, training employees, applying physical security to buildings and IT-assets, and availability of IT-assets (together with confidentiality and integrity forming the so-called CIA-triad) are out of scope for this blog.

Security and SOA
When compared to traditional software development, the important question is not whether security in an SOA-environment is important, but whether it is any different -and should therefore be designed differently. The answer to both questions is yes.

To understand why security should be handled differently we need to understand the characteristics of SOA that are key to the security aspect compared to those of traditional software development:

  • Next to human-machine interaction there is more machine-machine interaction. This means there is a greater need for automated security mechanisms for purposes of authentication, authorization, encryption, and so on.
  • An SOA-environment generally contains more intermediary stations such as ESBs and other middleware components. There are more locations where users and administrators can view -possibly confidential- message contents such as credit card information. In this case transport security alone is not enough.
  • How can you manage and control the various (external) clients that want to access data and/or services if systems are loosely coupled? For example, not every client should be allowed to invoke a banking service.
  • SOA results in more Straight-Through-Processing (STP), meaning processes are more frequently executed in an entirely automated fashion without human interference. Good security is key since possible security breaches might only be detected later on. Also, the consequences can be graver due to the possibly large number of process instances.
  • Services are invoked by both internal and external consumers. A service’s security level is usually determined by its owner. In the case of external services, security will largely be determined and enforced outside an organization’s own span-of-control. The level of security determines the consumer’s trust: “What happens with my data if a service is not secured?”, “Can I trust a service’s result?”, and so on.

These differences clearly impact the way security should be designed within an SOA-environment. They furthermore warrant an integrated and holistic approach to security in an SOA-environment. Use a layered approach to security, as promoted for example by the defense-in-depth strategy.

Externalize security
For a number of reasons it is a good design-principle to externalize identity management and security; even more so in an SOA-environment that frequently consists of heterogeneous infrastructure. Every service having its own IAM and security design and implementation leads to a suboptimal solution, more overhead, and a greater chance of security breaches. If security is part of the infrastructure’s components -for instance intertwined in an ESB product- different products will most likely also support different security standards and protocols. E.g. an application server might support SAML version 1.1, the WS-Security Username Token Profile, transport security using HTTPS, and LDAPS, while the ESB product supports SAML version 2.0, the WS-Security X.509 Token Profile, message security using XML DSIG, and LDAP rather than LDAPS. This is worsened when external infrastructure supports yet another subset of standards and protocols, causing poor interoperability. Use a separate -specialized- component for security instead. This promotes both reuse of better security throughout your SOA-environment and separation of concerns.

The agents and gateway patterns are very well suited to externalizing security. Use gateways to apply common security policies and agents for more service-specific security policies.

Security classification
Define a limited set of security classifications, for example based on the CIA-triad (confidentiality, integrity, and availability), ranging from e.g. “public” to “highly classified”. Determine a minimum set of security measures per classification level. For each new service determine its classification levels; this is usually the responsibility of the service owner. Make classification levels part of your service repository and governance processes. This results in more understandable security regulations, gives better insight into the current and future security of your environment, enables better reuse of existing security policies, and prevents reinventing the wheel when establishing security for new services. Most importantly, it results in just the right amount of security being applied, thereby saving money (strive for the lowest possible classification levels without endangering security) while applying (just) enough security.
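
To make this a bit more tangible, here is a minimal sketch of how classification levels and their minimum security measures could be modeled; the level names and measures are made up and should of course come from your own security policy:

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical classification levels; names, ordering, and required measures
// are examples only and should be taken from your own security policy.
public enum SecurityClassification {

    PUBLIC(EnumSet.noneOf(Measure.class)),
    INTERNAL(EnumSet.of(Measure.TRANSPORT_SECURITY, Measure.AUTHENTICATION)),
    CONFIDENTIAL(EnumSet.of(Measure.TRANSPORT_SECURITY, Measure.AUTHENTICATION,
                            Measure.AUTHORIZATION, Measure.AUDIT_LOGGING)),
    HIGHLY_CLASSIFIED(EnumSet.allOf(Measure.class));

    public enum Measure {
        TRANSPORT_SECURITY, MESSAGE_SECURITY, AUTHENTICATION,
        AUTHORIZATION, AUDIT_LOGGING
    }

    private final Set<Measure> minimumMeasures;

    SecurityClassification(Set<Measure> minimumMeasures) {
        this.minimumMeasures = minimumMeasures;
    }

    /** Minimum set of security measures every service of this level must apply. */
    public Set<Measure> getMinimumMeasures() {
        return minimumMeasures;
    }
}
```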

Transport versus message security
There are roughly two types of security for message invocation: transport and message security. Transport security secures a message only during transport between service consumer and service provider, using the transport layer; e.g. HTTP over SSL or TLS (HTTPS). That means messages are not protected in intermediary components such as an ESB and not protected directly after being received by the endpoint. Message security secures the message itself, mainly through encryption of the payload using, for example, public and private keys. Since message security can provide security in exactly the scope you want -including in intermediaries and after the message has been received- it is generally preferable to transport security. Both transport and message security can be used for authentication (e.g. a signature based on certificates), integrity (e.g. a digest), and confidentiality (e.g. encryption).
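
To illustrate the difference: with transport security the payload below is only protected on the wire, while encrypting the payload itself keeps it unreadable in intermediaries and after receipt. This is just a toy sketch using a throwaway AES key; in practice you would use WS-Security/XML Encryption with keys from a keystore:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.xml.bind.DatatypeConverter;

// Toy illustration of message-level security: the payload itself is encrypted,
// so an intermediary (ESB, gateway) only ever sees ciphertext.
public class MessageSecurityDemo {

    public static void main(String[] args) throws Exception {
        String payload = "<payment><creditCard>4111111111111111</creditCard></payment>";

        // Throwaway key for demonstration; real systems use keys from a keystore.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        Cipher encrypt = Cipher.getInstance("AES");
        encrypt.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherText = encrypt.doFinal(payload.getBytes("UTF-8"));

        // This is all an intermediary would see instead of the credit card number.
        System.out.println(DatatypeConverter.printBase64Binary(cipherText));

        Cipher decrypt = Cipher.getInstance("AES");
        decrypt.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(decrypt.doFinal(cipherText), "UTF-8"));
    }
}
```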

Standards
Maybe trivial but very important: use standards to promote interoperability. This includes the usage of security standards such as LDAP(S), HTTPS, SAML, XML DSIG, WS-Security (WSS), and other WS-* standards. Using standards allows secured services to be reused by (both internal and external) heterogeneous infrastructures. Next to technical standards there are also a number of security reference architectures, principles, and guidelines you can leverage.

Before we wrap up, some best-practices per area.

Identity management
Use a centralized identity management repository. This avoids duplicate user management and possible inconsistencies. Divide users into different identity types if needed -such as employees, customers, suppliers, and so on- since different rules and administration may apply to each category. Be careful in allowing external IT-assets and organizations direct access to your identity management solution. In such cases -for example external hosting- consider identity provisioning to minimize security risks.

Usually you want a service provider to authenticate the original service consumer (user identity) and not some intermediary component such as an ESB. Implement identity propagation of tokens, username/password, etc. so the service provider authenticates and authorizes the identity of the original user that invoked the service. That implies that all intermediary components between service consumer and provider need to be able to transport identity tokens -and possibly transform these from one format to another (e.g. from an SSO token into username/password).
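
As an illustration of propagating the original user’s identity, here is a sketch of a client-side JAX-WS handler that injects a WS-Security UsernameToken header. The user name and password are placeholders; in a real environment the identity would come from the SSO or session context (preferably as a signed token) and you would typically let policy infrastructure such as OWSM do this rather than hand-coding it:

```java
import java.util.Collections;
import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

// Sketch: adds a WS-Security UsernameToken to outgoing requests so the provider
// can authenticate/authorize the original user instead of the intermediary.
public class UsernameTokenHandler implements SOAPHandler<SOAPMessageContext> {

    private static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    public boolean handleMessage(SOAPMessageContext context) {
        boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (!outbound) {
            return true; // only touch outgoing requests
        }
        try {
            SOAPEnvelope envelope = context.getMessage().getSOAPPart().getEnvelope();
            SOAPHeader header = envelope.getHeader();
            if (header == null) {
                header = envelope.addHeader();
            }
            SOAPElement security = header.addChildElement("Security", "wsse", WSSE_NS);
            SOAPElement token = security.addChildElement("UsernameToken", "wsse", WSSE_NS);
            // Placeholder credentials: in reality these come from the SSO/session context.
            token.addChildElement("Username", "wsse").addTextNode("jdoe");
            token.addChildElement("Password", "wsse").addTextNode("secret");
        } catch (Exception e) {
            throw new RuntimeException("Could not add UsernameToken header", e);
        }
        return true;
    }

    public boolean handleFault(SOAPMessageContext context) { return true; }

    public void close(MessageContext context) { }

    public Set<QName> getHeaders() { return Collections.emptySet(); }
}
```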

Especially in the case of authenticating and authorizing external organizations, consider the trade-off between using specific identities (Mr. X or Mrs. Y) versus more general identities (organization Z). Specific identities result in better traceability and can provide for more fine-grained access control, while more general identities can result in less administration: the number of different identities to manage and synchronize decreases dramatically.

Avoid generic identities such as “consultant” and “trainee” altogether.

Authentication
Define a limited set of authentication levels and differentiate on information (password), possession (token, physical key, text message to a phone), and attribute (voice, fingerprint) as mechanisms. E.g. “basic-level” authentication requiring information, “middle-level” authentication requiring information and possession, and “high-level” authentication requiring attribute or possession together with a check of ownership.

Most organizations promote SSO to improve user-friendliness and provide for a better user experience. Determine, however, whether you want SSO for your most classified IT-assets: a security breach in just one IT-asset can then provide access to a multitude of them. A best-practice is to grant access to IT-assets based on authentication level; if you authenticated using basic authentication, SSO will only grant you access to IT-assets requiring the same or a lower authentication level, not to IT-assets requiring “high-level” authentication.

The SSO-provider needs to be verified and trusted before you can hand over authentication to that provider.

Authorization
Don’t tie rights in IT-assets directly to user identities; this leads to high maintenance costs, inflexibility, and lock-in of users. A good design-principle is to use a form of Role-Based Access Control (RBAC) to decouple authorization. Use attributes that do not change frequently over time, such as organizational units and/or job titles, as intermediary layers in the authorization model. Assign rights in IT-assets to entities in this layer (e.g. organizational unit and/or job title) and, vice versa, assign user identities to these intermediary layer(s). Design the authorization model per identity type (customer, employee, supplier, etc.).
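
A minimal sketch of this decoupling -user identities mapped to roles such as job titles, and roles mapped to rights- with made-up names:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical RBAC model: identities are never granted rights directly,
// only via roles such as job titles or organizational units.
public class RbacModel {

    private final Map<String, Set<String>> userRoles = new HashMap<String, Set<String>>();
    private final Map<String, Set<String>> rolePermissions = new HashMap<String, Set<String>>();

    public void assignRole(String userId, String role) {
        valuesFor(userRoles, userId).add(role);
    }

    public void grantPermission(String role, String permission) {
        valuesFor(rolePermissions, role).add(permission);
    }

    /** A user is authorized only if one of his or her roles carries the permission. */
    public boolean isAuthorized(String userId, String permission) {
        for (String role : valuesFor(userRoles, userId)) {
            if (valuesFor(rolePermissions, role).contains(permission)) {
                return true;
            }
        }
        return false;
    }

    private Set<String> valuesFor(Map<String, Set<String>> map, String key) {
        Set<String> values = map.get(key);
        if (values == null) {
            values = new HashSet<String>();
            map.put(key, values);
        }
        return values;
    }

    public static void main(String[] args) {
        RbacModel model = new RbacModel();
        model.grantPermission("invoice-clerk", "invoice:create");
        model.assignRole("jdoe", "invoice-clerk");
        System.out.println(model.isAuthorized("jdoe", "invoice:create"));  // true
        System.out.println(model.isAuthorized("jdoe", "invoice:approve")); // false
    }
}
```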

Base authorization on the work/function someone or some organization needs to perform; no more, no less. Avoid “super-users”: usually management and/or IT-staff that have gathered far more privileges over time than they are entitled to. Increase security by assigning different roles to the various steps in sensitive processes, thereby preventing a single user from being able to execute the process entirely.

Logging and monitoring
Functionality and processes in an SOA are spread over different loosely-coupled components. Some logging and monitoring therefore needs to be performed at a higher level than that of an elementary service: at the composite service or process level. This gives rise to the need for a central logging and monitoring component that is able to combine and correlate decentral logs and enables monitoring at process level. The Wire Tap pattern can be used to publish logs, sensor values, and other types of messages from services and middleware to the central monitoring component. Notifications can be managed and implemented separately from the logging and can be published by this central monitoring component. Note that this requires synchronization of the dates and times of the various managed components to enable correct correlation. Determine for every service whether it is allowed to continue operation in case the central monitoring component fails. Is it, for example, allowed from a security point-of-view to fall back on decentral -local- logging and monitoring in case the central monitoring component is down?
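
A sketch of the Wire Tap idea using plain JMS (connection factory and topic are hypothetical and would normally be looked up via JNDI): a copy of each message is published to a central monitoring topic while the original continues on its way:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

// Wire Tap sketch: forward a copy of a message to a central monitoring topic.
// A correlation identifier (e.g. the process instance id) should travel with the
// message so the central component can correlate decentral logs per process.
public class WireTap {

    private final ConnectionFactory connectionFactory;
    private final Topic monitoringTopic;

    public WireTap(ConnectionFactory connectionFactory, Topic monitoringTopic) {
        this.connectionFactory = connectionFactory;
        this.monitoringTopic = monitoringTopic;
    }

    /** Publishes a copy of the message to the central monitoring component. */
    public void tap(Message message) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(monitoringTopic);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```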

“Hard” security
A best-practice is to divide security into a number of layers. Chart possible vulnerabilities and threats, and the corresponding principles and guidelines to counteract them. This approach results in more effective and efficient security. Examples of such layers are: network security, platform security, application security, integrity & confidentiality, content security, and mobile security. Examples of principles and guidelines are applying compartmentalization (network security), keeping a central list of allowed and disallowed file extensions for inbound and outbound traffic (content security), and the use of hardening (platform and application security).

Oracle’s direction
In the case of Oracle’s SOA product stack (SOA Suite 11g), security is externalized from almost all products and can be applied using policies. These policies can be configured in a management console and reused by processes and services that are packaged and deployed as SCA composites and components. The policies are based on standards such as WS-Security. Oracle Service Bus (OSB), however, still contains its own security functionality. As stated in OSB’s SOD: “The ability to attach, detach, author and monitor policies in a central fashion will be extended to the Oracle Service Bus (as it has been extended to all other components in the SOA Suite 11g).” In any case you can already secure OSB projects using OWSM.

Sunday, October 11, 2009

Some tips and tricks on migrating SOA Suite 10g to 11g

Just a few things I noticed last week when migrating BPEL and ESB projects from SOA Suite 10g to SCA composites and components in SOA Suite 11g.

Custom XSLT functions
Just like in SOA Suite 10g you can expose Java methods as custom XSLT functions and use them at designtime in the XSLT Mapper of JDeveloper 11g. An example is a custom XSLT function transforming a local bank account number into its corresponding IBAN format. While the mechanism to expose custom functions is the same as in SOA Suite 10g, the exact implementation in SOA Suite 11g is a little bit different. Custom XSLT functions are packaged in a JAR file that includes an extension XML file describing the functions. By specifying and adding the JAR file in JDeveloper you can use these functions at designtime. You then place the JAR file on the application server running SOA Suite so they can also be executed at runtime. See this blog by Peter O’Brien for more detailed steps for using custom XSLT functions in SOA Suite 10g. Migrating custom XSLT functions from 10g to 11g needs to be done manually and involves the following steps; a sketch of such a custom function class is shown after the list.

  • Edit the extension XML file and replace the elements and namespaces according to the new XSD describing custom XSLT functions.
  • Rebuild the extension JAR and add it to JDeveloper 11g using Tools -> Preferences -> SOA from the menu. Restart JDeveloper. Inspect the log window to see if JDeveloper correctly parsed the extension JAR file. There will be an error or warning in case of an incorrect configuration.
  • The custom XSLT functions are now listed in the XSLT Mapper.
  • Place the JAR file in the BEA_Home/user_projects/domains/domain_name/lib directory to make the functions available at runtime.
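
For illustration, a sketch of what the Java side of such a custom function could look like; the class and method names are made up and the IBAN logic is deliberately simplified. The public static method is what you register in the extension XML file and call from the XSLT Mapper:

```java
// Hypothetical custom XSLT function implementation. The public static method is
// referenced from the extension XML file and becomes available in the XSLT Mapper.
public class IbanFunctions {

    /**
     * Converts a local (Dutch) bank account number into a simplified IBAN-like format.
     * Real IBAN generation also involves the bank code and proper check digits.
     */
    public static String toIban(String localAccountNumber) {
        String padded = String.format("%010d", Long.parseLong(localAccountNumber.trim()));
        return "NL00BANK" + padded;
    }
}
```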



See OTN for more detailed information.

Sensors and tracking composite instances
There are a few ways of tracking composite instances and/or relating these to business entities such as orders or invoices that are processed by the service. End-users frequently want to know -given a particular business entity- what process instance(s) are related to it. In SOA Suite 10g -and more particularly BPEL PM- you could use the setIndex function within an embedded Java activity or use sensors to publish this information. You would of course need a subscriber to process these sensor values and store the relation between process instances and business entities somewhere.

In SOA Suite 11g you have this great new feature of using composite sensors to achieve this. See for example this blog by Demed L’Her. Another way is to set the name of an SCA composite instance. By default the instance name is not set and the corresponding name column in the EM 11g Fusion Middleware Control is empty. You can set the composite instance name at designtime from a Mediator or BPEL component using the setCompositeInstanceTitle XPath function or the identically named Java extension. Just like with composite sensors, you can then search for composite instances based on their name. This is documented in Oracle’s Fusion Middleware Administrator’s Guide for Oracle SOA Suite.



Note that EM 11g Fusion Middleware Control only shows sensor actions and values that are stored in the database. As stated in the Developer’s Guide: “If you created JMS sensors in your BPEL process, JMS sensor values do not display in Oracle Enterprise Manager Fusion Middleware Control Console. Only sensor values in which the sensor action is to store the values in the database appear (for example, database sensor values).”

Domain Value Maps
Another great improvement in SOA Suite 11g is that Domain Value Maps (DVMs) are now available to all components in SCA and no longer limited to ESB (now Mediator), as is the case in SOA Suite 10g. In SOA Suite 11g you store DVMs locally in your project or use MDS for this. Using DVMs from XSLT transformations has slightly changed, more particularly the namespace and the XSLT function name. If you automatically migrate SOA Suite 10g projects into SCA components and composites by using the migration tool or by reopening the projects in JDeveloper 11g, this will be handled automatically. However, if you rebuild SCA composites manually using the artifacts from your previous SOA Suite 10g project, you have to take this into account and change the namespace and XSLT function name yourself.

Sunday, September 6, 2009

Migrating Web Services from JDeveloper 10g to 11g

Although most of the migration steps from JDeveloper 10g/OC4J to JDeveloper 11g/WebLogic are automated, there are some exceptions. One such case where you have to roll up your sleeves and do some coding is EJB 3 Session Beans that are exposed as Web Services using JAX-WS annotations. JDeveloper 10g generates a separate Java interface containing JAX-WS Web Service annotations when using the EJB 3 Session Bean Wizard and selecting the option to create a Web Service interface. Note that this option isn’t available in JDeveloper 11g, but you can right-click an EJB Session Bean and select the generate Web Service option, which gives the same result.

When migrating the JDeveloper 10g workspace to a JDeveloper 11g application -by opening the jws file in JDeveloper 11g- most of the migration work is automatically done; for example the workspace and project files are updated and existing deployment plans are converted.


If you then deploy the project to the integrated WebLogic server everything seems to deploy and run just fine. However if you expand the deployment in the WebLogic Server Administration Console you’ll see that there are no web services listed, only EJBs.

Here are some simple steps to correct this:


  1. Remove the Java interface containing the JAX-WS Web Service annotations that was generated in JDeveloper 10g and remove the interface from the implements statement in the EJB Session Bean class.
  2. Add a @WebService annotation to the EJB 3 Session Bean containing the following arguments: name, serviceName, and portName. Check the WSDL of the currently deployed Web Service generated with JDeveloper 10g to obtain its metadata such as name, namespace, and port name. These values can be used in the new @WebService annotation of the migrated Web Service in JDeveloper 11g so that Web Service clients don’t break due to different namespaces, port names, endpoints, and so on (see the sketch after these steps). You can also use other annotations to influence the endpoint and WSDL of the Web Service. However, mind that some annotations are WebLogic-specific and not part of the JAX-WS standard.
  3. Optionally add other JAX-WS annotations as needed.
  4. Replace the JAX-RPC project libraries with the JAX-WS Web Services library.
  5. The current WebLogic JAX-WS stack -more specifically the JAXB implementation- does not support java.util.Map and java.util.Collection family types as Web Service method return or input types. Deployment fails with the messages “java.util.Map is an interface, and JAXB can’t handle interfaces” and “java.util.Map does not have a no-arg default constructor”. A logical workaround would be to replace these types with concrete implementations that have a no-argument constructor, for example java.util.HashMap. Although deployment then succeeds, the information contained in the map is lost at runtime when requests/responses are (un)marshalled. A final workaround was to replace the java.util.Map with a two-dimensional array. Although I’m not really happy with this workaround, it works for now.
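
As referenced in step 2, here is a sketch of the end result of steps 1 through 3; all names and the namespace are hypothetical and would in practice be copied from the 10g WSDL so existing clients keep working:

```java
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical example of the migrated bean: the JAX-WS annotations now live on the
// EJB 3 Session Bean itself instead of on a separate generated interface.
@Stateless
@WebService(
    name = "CustomerService",                 // copy these values from the WSDL of the
    serviceName = "CustomerServiceService",   // 10g deployment so existing clients
    portName = "CustomerServicePort",         // don't break
    targetNamespace = "http://example.com/customers")
public class CustomerServiceBean {

    @WebMethod
    public String getCustomerName(long customerId) {
        // Business logic would normally delegate to an EntityManager or service layer.
        return "Customer " + customerId;
    }
}
```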


Deploy the project and voila, the WebLogic Server Administration Console shows both EJBs and Web Services.



So “no coding required”, or just a little bit perhaps :-) ?


Friday, July 31, 2009

Exception-handling in JAX-WS Web Services on WebLogic

There is more to exception-handling in JAX-WS Web Services than meets the eye. Especially when throwing custom (checked) exceptions from your Java methods that are exposed as Web Service operations. There’s a nice blog by Eben Hewitt on using SOAP Faults and Exceptions in Java JAX-WS Web Services. I recommend reading it; especially when you get the following error: javax.xml.ws.soap.SOAPFaultException java.lang.NoSuchMethodException. This is one of the issues you might run into when migrating from Oracle Application Server (OC4J) to Oracle WebLogic Server.
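
For reference, a minimal sketch (with made-up names) of a checked exception that follows the JAX-WS fault pattern -@WebFault, a fault bean, the two expected constructors, and getFaultInfo()- which is the shape the runtime looks for; a missing constructor or accessor is one common cause of the NoSuchMethodException mentioned above:

```java
import javax.xml.ws.WebFault;

// Hypothetical fault bean carrying the application-level error details.
class OrderFaultInfo {
    private String errorCode;
    private String message;
    public OrderFaultInfo() { }
    public OrderFaultInfo(String errorCode, String message) {
        this.errorCode = errorCode;
        this.message = message;
    }
    public String getErrorCode() { return errorCode; }
    public void setErrorCode(String errorCode) { this.errorCode = errorCode; }
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
}

// Checked exception following the JAX-WS fault pattern: @WebFault, the two
// conventional constructors, and getFaultInfo() let the runtime map it to a SOAP fault.
@WebFault(name = "OrderFault", targetNamespace = "http://example.com/orders")
public class OrderException extends Exception {

    private final OrderFaultInfo faultInfo;

    public OrderException(String message, OrderFaultInfo faultInfo) {
        super(message);
        this.faultInfo = faultInfo;
    }

    public OrderException(String message, OrderFaultInfo faultInfo, Throwable cause) {
        super(message, cause);
        this.faultInfo = faultInfo;
    }

    public OrderFaultInfo getFaultInfo() {
        return faultInfo;
    }
}
```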

Wednesday, July 29, 2009

Best practices 3 – Oracle ESB and Mediator

This is the third post in our SOA and BPM best-practices series. This blog provides best practices for Oracle ESB (Oracle Fusion Middleware 10g) and its successor where routing and transformation are concerned: the Mediator component in SCA (Oracle Fusion Middleware 11g). The previous blog in this series is about Web Services best practices.

Use a bus. Maybe a bit of an open door, but there are still projects that stall, exceed budget, or fail altogether because no ESB is used and “SOA-plumbing” is implemented (or at least attempted) in an orchestration tool, custom logic, and so on. Use an ESB for decoupling, virtualization, abstraction, transformation (of data as well as protocols), and content-based routing. Decouple this type of functionality from your orchestration and workflow.

Migrating from OFM 10g to OFM 11g.

  • If you don’t migrate to SCA and have used Oracle ESB as a stand-alone ESB then migrate to OSB. This will require reimplementation of OESB flows as OSB flows.

If you migrate to SCA:

  • For non-reusable ESB flows that perform “internal” transformation and routing functionality within the SCA runtime: create a mediator component that is not directly exposed in its containing SCA composite and add your other components that use the mediator -such as BPEL components- to that composite. Open the OESB project in JDev 11g to create an initial composite.
  • For reusable ESB flows that perform “internal” transformation and routing functionality within the SCA runtime: create a composite containing only one mediator component that is exposed using a service. Other SCA composites can reuse this “mediator” composite. Open the OESB project in JDev 11g to create an initial composite.
  • For ESB flows that interact with the “outside world”; in other words connect the SCA runtime to other runtimes and/or external parties such as suppliers and clients: migrate to OSB.

Encapsulation and exposing operations. As with Web Services in general, do not expose all routing service operations and adapter operations. This promotes encapsulation; only expose what is or will be reusable. Also see this post about improved encapsulation in OFM 11g. In 10g, you cannot “hide” an ESB flow, but you can minimize the operations that are invocable by disabling the option “Invocable from an external service”. In 11g, you can hide a mediator within its composite by not exposing it directly, i.e. by making sure there is no direct service and wire to it. This is achieved by disabling the “Create Composite Service with SOAP Bindings” option when creating a mediator component.

Data enrichment. Although data enrichment is typically something you would do in an ESB -for example when implementing VETO (validate, enrich, transform, and operate)- don’t use Oracle ESB for it. Because it lacks temporary variables, it is not well suited for data enrichment when data comes from different sources. You can use the $ESBREQUEST variable to ameliorate this, but it is still not a great workaround. Use BPEL PM or OSB for complex data enrichment in 10g, and OSB or SCA composites containing multiple Mediator and/or BPEL components in 11g.

XML. Create a public_html folder in every ESB project created with JDeveloper 10g and place non-generated XML artifacts such as XSLTs and XSDs in it. Leave generated XML artifacts such as TopLink descriptors from the DB Adapter in the (default) root folder. When editing mediators in 11g, XSLT files will automatically be created in an xsl directory and XSDs will be placed in an xsd directory.

Deployment. Use Oracle ESB Ant scripts to deploy to test, acceptance, and production environments. Use deployment plans to configure endpoint and adapter settings per environment (DTAP). Make sure you don’t mix Ant and JDeveloper deployment since it can cause problems in your ESB runtime. For SCA composites use configuration plans.

Structuring. Use ESB Systems and Service Groups in 10g to structure ESB flows. A possibility would be to use an ESB System per business domain and an ESB Service Group per project. For example: ESB System “Finance” that contains ESB Service Group “FIN_ESB_Process_Invoice”.

XSLT extension functions. Custom XSLT functions can be a powerful mechanism to implement your own transformation logic but it can also break portability when moving from one environment to the other due to the required configuration and deployment steps. The creation of user-defined extension functions in OFM 11g is different from 10g. See Appendix B of the Oracle Fusion Middleware Developer’s Guide for Oracle SOA Suite.

Clustering. Clustering of Oracle ESB is not a trivial thing to do. Only cluster if needed for QoS (Quality of Service) reasons such as high availability, failover, and throughput. Mind non-concurrent adapters such as the FTP and File adapters when clustering.

Versioning. Oracle ESB 10g does not support versioning natively. You can include the version number in the ESB project name and deploy it as a new flow alongside older versions. In OFM 11g, mediators are part of composites and are therefore versionable.

Transactionality. Transactionality -including support for XA- of ESB in 10g is dependent on several factors and can therefore be somewhat complex. These factors include the mechanism (through BPEL PM, ESB, or other technology or client), binding protocol (SOAP versus WSIF) used to invoke ESB flows, use of synchronous or asynchronous routing rules, use of different ESB Systems in an ESB project, and so on. Read Oracle’s SOA Suite Best Practices Guide and this presentation on transactions, error handling and resubmit.

Oracle’s best practices guide. Read Oracle’s SOA Suite Best Practices Guide for more tips and tricks.

The next blog in this series will be about security and identity and access management in an SOA-environment.

Friday, July 17, 2009

Installing JDeveloper 11g

Two things I ran into when installing and configuring Oracle Fusion Middleware JDeveloper 11g that are worth mentioning:

  • Setting the User Home Directory. As documented in the OFM 11g Installation Guide you can specify the user home directory, which is used as the default location for new projects and in which JDev will store user preferences and other configuration files. If you explicitly set this location on a Windows system using the ide.user.dir variable in the jdev.boot file, then make sure you use a notation like D:/workspace/ofm11g and not D:\workspace\ofm11g. Using backslashes results in the user dir [OFM 11g Home]\Middleware\jdeveloper\jdev\bin\workspaceofm11g being used instead of D:/workspace/ofm11g.
  • Installing Additional Oracle Fusion Middleware Design Time Components. When installing additional OFM design time components such as WebCenter and the SOA Suite Composite Editor, make sure you restart after the installation of each single component. Do not install the WebCenter and SOA Suite editors without restarting in between. If you do, only one of the additional components will be visible the next time you start JDev.

Once you’ve downloaded OFM 11g from OTN, installation is easy and straightforward.