Showing posts with label WebLogic Server.

Sunday, December 10, 2017

Troubleshooting Oracle API Platform Cloud Service

One of the challenges when working in integration is troubleshooting. This becomes even harder when you start using a new product.

Recently I worked with Oracle Product Management (thank you, Darko and Lohit) to troubleshoot issues with an OAuth configuration of APIs in Oracle API Platform Cloud Service.

Setup

The setup was as follows:
  1. An API Gateway node deployed to Oracle Compute Cloud Classic as an infrastructure provider
  2. Oracle Identity Management Cloud Service in the role of OAuth provider
We set up an API with several policies, including OAuth for security. When we called the service, it returned a '401 Unauthorized' error.
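
For reference, the call looked roughly like the following sketch (the endpoint URL is a hypothetical placeholder, and the token is passed in as an argument):

import java.net.HttpURLConnection;
import java.net.URL;

public class ApiCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical gateway endpoint; replace with your deployed API's URL
        URL url = new URL("https://gateway.example.com/api/orders/v1");
        String token = args[0]; // OAuth access token obtained from the OAuth provider

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + token);

        // A misconfigured OAuth profile shows up here as HTTP 401
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}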

Oracle API Platform Cloud Service troubleshooting

The Oracle API Platform Cloud Service offers analytics for each API. You can navigate there by opening the API Platform Management Portal, clicking on the API you want to troubleshoot, and clicking on the Analytics tab (the bottom tab).

After setting the period you are interested in, click on Errors and Rejections. When troubleshooting, you usually want to look at the last hour.

Different types of analytics for an API

Now you can scroll down to the error distribution and see the errors that occurred. In this case, because I selected "Last Week", you see the different errors that occurred last week and how often each occurred. When you run your test again, you will see one of the errors in the distribution increase, giving you insight into the type of error.

Distribution of each error type

We tried different configurations. As you can see from the distribution, the graph tells us that in one case the OAuth token was invalid and that in another case we had a bad JWT key. This meant we had to take a look at the configuration of the OAuth profile of the Oracle API Gateway Node (see the documentation on how to configure Oracle Identity Cloud Service as OAuth provider).

OAuth token troubleshooting 

We had a token, but it appeared to be invalid. It is hard to troubleshoot security: what is wrong with our configuration? Why are we getting the errors that we get? When you successfully obtain an OAuth token, you can inspect it with the JWT (JSON Web Token) Debugger at jwt.io.
  1. Navigate to https://jwt.io
  2. Click on Debugger
  3. Paste the token in the window at the left hand side
JWT debugger with default token example

The debugger shows you the header, the payload, and the signature.

Header
The signing algorithm that is used (for example RS256, which is RSA with SHA-256) and the token type (JWT).
Payload
Contains the claims. There are three types of claims: registered, public, and private. Registered claims include fields like iss (issuer, in this case https://identity.oraclecloud.com/), sub (subject), and aud (audience). See https://tools.ietf.org/html/rfc7519#section-4.1 for more information.
Signature
The signature of the token, to make sure nobody tampered with it.
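
If you prefer not to paste tokens into a website, a minimal sketch like the following decodes the header and payload locally (pass the raw token as the first argument; the signature is not verified here):

import java.util.Base64;

public class JwtDecoder {
    public static void main(String[] args) {
        String token = args[0]; // format: header.payload.signature
        String[] parts = token.split("\\.");
        Base64.Decoder decoder = Base64.getUrlDecoder();

        // The header and payload are Base64URL-encoded JSON
        System.out.println("Header:  " + new String(decoder.decode(parts[0])));
        System.out.println("Payload: " + new String(decoder.decode(parts[1])));
        // parts[2] is the signature; verifying it requires the provider's key
    }
}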

Now you can compare that to what you have put in the configuration of Oracle Identity Cloud Service and the configuration of the Oracle API Gateway Node.


Oracle API Platform Gateway Node troubleshooting

Apart from looking at the token and the analytics it can help to look at the log files on the gateway node. The gateway node is an Oracle WebLogic Server with some applications installed on it.

There are several log files you can access.
  1. apics/logs. In this directory you find the apics.log file. It contains stacktraces and other information that help you troubleshoot the API.
  2. apics/customlogs. If you configured a custom policy in your API, the log files will be stored in this directory. You can log the content of objects that are passed in this API. See the documentation about using Groovy in your policies for information about the variables that you can use.
  3. 'Regular' managed server logs. If something goes wrong with the connection to the Derby database, or other issues occur that have to do with the infrastructure, you can find the information in the {domain}/servers/managedServer1/logs directory.

Summary

When troubleshooting APIs that you have configured in Oracle API Platform cloud service you can use the following tools:
  • jwt.io Debugger. This tool lets you inspect OAuth tokens generated by a provider.
  • Oracle API Platform Cloud Service Analytics. Shows the type of error.
  • Oracle API Platform logging policies you put on the API. Lets you log the content of objects. 
  • Log files in the API Gateway node:
    • {domain}/apics/logs for the logs of the gateway node. Contains stacktraces, etc.
    • {domain}/apics/customlogs for any custom logs you entered in the api
    • {domain}/servers/managedServer1/trace for default.log of the managed server
 Happy coding!

Sunday, January 19, 2014

Tracking progress of your BPEL process using sensors

Often when you start using Oracle SOA Suite 11g and BPEL, you need a mechanism to help end users keep track of the progress of the overall process instances. Enterprise Manager shows this to administrators, but it is not suitable for end users.

What about the worklist application?

The SOA Suite offers the worklist application to handle tasks and to view progress using views and reports. It shows human tasks across multiple process definitions, but it does not show invocations of services or specific data changes in the process. The figure below shows three instances of one specific process definition. In this example relevant milestones are reached when the process starts, when the first automated step is executed, when the second human task is executed, and when the process ends. In the worklist application you would be able to see which human tasks are open or executed by which user. You can't keep track of the first, second, and last milestones, only of the third one, because that is the only one associated with a human task.


Sensors and sensor actions

The required functionality can be accomplished using sensors and sensor actions on relevant parts in the BPEL process. There are different types of sensors that can be used in SOA Suite 11g: activity sensors, variable sensors and fault sensors.

After you have configured the sensor in your BPEL process, you need to define one or more sensor actions. There are four types of actions: three sensor actions and one BAM sensor action:

  • Database to publish it to the BPEL dehydration store
  • JMS to publish it to a JMS topic or queue. If the JMS provider is not local, you can use the JMS adapter
  • Custom Java class to handle it in a different way. 
  • BAM Sensor actions to publish them to Oracle BAM.
You can choose to publish the sensor data to AQ, so you can handle it both with Java clients and with PL/SQL, because JMS queues in WebLogic can be implemented using AQ.
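
For illustration, a standalone JMS client that consumes these sensor messages could look like the sketch below. The JNDI names and server URL are assumptions; use the values from your own WebLogic configuration:

import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class SensorQueueListener {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // assumed server URL

        Context ctx = new InitialContext(env);
        // JNDI names are assumptions; match them to your JMS configuration
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/SensorConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/SensorQueue");

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();

        // Sensor actions publish the sensor data as XML text messages
        TextMessage message = (TextMessage) consumer.receive(5000);
        if (message != null) {
            System.out.println("Sensor data: " + message.getText());
        }
        connection.close();
    }
}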

Displaying the information with BAM or a custom application

There are two ways to show the progress to the end user: using BAM or building a custom application on top of the sensor data. You can easily decide to implement BAM later: you can either add BAM sensor actions to the BPEL sensor, or define an Enterprise Message Source in BAM.



The second option (defining an Enterprise Message Source) has the advantage that you don't have to change the BPEL process at design time when you decide to start using BAM instead of, or on top of, the custom application.

Steps

Now that we have established how we want to implement the functionality, we can get started :) The following steps need to be taken:
  1. Define the relevant milestones you want to show to the user;
  2. Configure the sensors;  
  3. Configure the sensor actions;
  4. Create the database user and queues for AQ;
  5. Configure WebLogic Server and add JMS objects;
  6. Create PL/SQL code to read from the queue and store it in the table;
  7. Deploy the BPEL process;
In parallel you can build the GUI or configure BAM, depending on what you decided upon. 

Tuesday, December 24, 2013

WebLogic Hackathon Pics & Vids

Here are some pictures and videos from the UKOUG Tech 13 WebLogic Hackathon to wrap up my previous posts on this event: introduction and resources. The videos were organized and hosted by Bob Rhubart, manager of the architect community at Oracle Technology Network.

Preparation of the hackathon

Participants pouring in

Presentations and introduction of the labs

Labs in progress

And the winners of the WebLogic Hackathon!



Sunday, December 22, 2013

Hands-on-lab material available for WebLogic provisioning using Puppet!

At the last UKOUG Tech 2013 conference we organized a Super Sunday WebLogic Hackathon session. Part of this session was a hands-on lab in which attendees could automatically provision WebLogic Server using Puppet. Several people asked if the materials of this lab could be published so they can do the lab themselves.

This blog contains links to the materials and a five-step plan to get you up and running with the lab. Running and using the lab and VMs is of course at your own risk.

Last but not least, credit goes to Simon Haslam, Peter Lorenzen, and Guido Schmutz for organizing and assisting in the WebLogic Hackathon!

From R-L: Simon Haslam, Peter Lorenzen, Guido Schmutz, Ronald van Luttikhuizen

1. Setup

The setup for the lab is shown in the following figure. Read my previous blog for more info.


2. Introduction to middleware provisioning using Puppet and Chef

The following presentation introduces middleware provisioning using Chef and Puppet. It shows where middleware provisioning fits in the entire process of software delivery, and what benefits the automation of provisioning offers. Finally, the presentation introduces the Hands-on-Lab.


3. Download the VMs

The lab uses a separate VM that acts as Puppet Master and another VM that acts as Puppet Agent. When you run the Hands-on-Lab you should at least have the Puppet Master VM running. You can add an arbitrary number of Puppet Agent VMs. The VMs can be run using Oracle VM VirtualBox.

You can download the VMs from the following locations:

4. Follow the lab instructions & enjoy :-)

The following presentation contains a step-by-step tutorial for completing the lab.


5. Want to know more?

I strongly recommend the following book if you want to know more about provisioning and configuration management using Puppet:

Puppet 3 Beginner's Guide from Packt by John Arundel


Tuesday, November 26, 2013

UKOUG Tech 2013 WebLogic Hackathon - Server Provisioning using Puppet

Last week Peter Lorenzen blogged about the Super Sunday event at the upcoming UKOUG 2013 Tech conference. One of the streams for that day is the WebLogic Hackathon. This stream is presented by an international lineup consisting of Simon Haslam, Peter Lorenzen, Jacco Landlust, Guido Schmutz, and Ronald van Luttikhuizen.

Peter has prepared a lab where participants can perform a scripted installation and configuration of Oracle WebLogic Server 12c. I've prepared a follow-up lab in which we will do a similar installation and configuration, only this time fully automated using Puppet.

Puppet is a tool to automate configuration management. Together with Chef it's one of the more popular configuration management tools at the moment. Puppet allows you to describe the desired (to-be) state of your servers by declaring resources. These declarations can describe user accounts, security settings, packages, directories, files, executable statements, services, and so on. Manifests are the files in which resource declarations are listed. Puppet periodically applies manifests by translating manifests into specific commands (catalogs) and executes those on the managed servers. Puppet is capable of inspecting the machines so it only applies those changes that are necessary. If a machine is already in the desired state Puppet will apply no changes.

The following example shows a simple manifest that can be used to install and configure the Network Time Protocol (NTP) service. The manifest declares that the "ntp" package needs to be present, that the ntp configuration file is copied to the right location, and that the ntp service is running. A change in the configuration file will restart the service.

package { "ntp": 
   ensure  => present 
}

file { "/etc/ntp.conf":
   owner    => root,
   group    => root,
   mode     => 444,
   source   => "puppet:///files/etc/ntp.conf",
   require  => Package["ntp"],
}

service { "ntpd":
   enable     => true ,
   ensure     => running,
   subscribe  => File["/etc/ntp.conf"],
}

Using a configuration management tool to automate server management, compared to manual installation and configuration (artisan server crafting), has the following benefits:

  • You eliminate tedious and repetitive work since you only need to write a manifest once and can apply it to as many servers as you want;
  • Puppet manifests are defined in a machine- and OS-independent domain language so the manifests are portable and can be reused;
  • You can keep servers in sync and you know what is running on which server;
  • Manifests can be used as documentation: since manifests are applied by Puppet the documentation is always up-to-date;
  • Manifests can be version controlled and managed the same way you manage other code.

Puppet can be configured to run in a Master/Agent mode, meaning that there is a central Puppet instance (Master) that coordinates the server management. Servers on which a Puppet Agent runs pull the catalog from the Master. The Puppet Master decides what goes into the catalogs. Participants of the WebLogic Hackathon event will be divided into groups of three, in which one acts as Puppet Master and two act as Agents. This setup is shown in the following figure:



So, sign up for the WebLogic Hackathon at UKOUG 2013 Tech and join us for this cool hands-on lab!

If you want to know more about Oracle WebLogic Server please visit Oracle Technology Network. If you want to know more about Puppet I strongly recommend the Puppet 3 Beginner's Guide by John Arundel. Also see Edwin Biemond's Oracle Puppet modules on Puppet Forge.

Saturday, August 17, 2013

Developing geometry-based Web Services for WebLogic | Part 3

Part 1 of this blog gives an overview of the geometry-based Web Services we developed. Part 2 dives into some of the interesting details for the implementation of such Web Services. This part discusses the deployment specifics of such Web Services.


Deployment

There is a catch to deploying an application to WebLogic that uses the EclipseLink StructConverter to convert geographical data into JGeometry objects. When packaging the dependent JAR files sdoapi.jar and sdoutl.jar into the application (e.g. WAR or EAR file), you get the following error when running the application:

java.lang.NoClassDefFoundError: oracle/spatial/geometry/JGeometry at org.eclipse.persistence.platform.database.oracle.converters.JGeometryConverter.<clinit>(JGeometryConverter.java:33)

This issue is caused by classloading specifics of EclipseLink on Oracle WebLogic. You can resolve this by adding the sdoapi.jar and sdoutl.jar files to the system classpath of WebLogic: 


  • Copy the JAR files to the [WEBLOGIC_HOME]/wlserver_10.3/server/lib directory; 
  • Add these JAR files to the WEBLOGIC_CLASSPATH variable in the commEnv.cmd or commEnv.sh file located in [WEBLOGIC_HOME]/wlserver_10.3/common/bin;
  • Restart the WebLogic Managed Server after these steps.

Also see the following OTN forum post for the NoClassDefFoundError error.

We use Maven to build the Web Service. For EclipseLink you can add the following repository to the pom.xml file:

<repository>
  <id>EclipseLink</id>
  <url>http://download.eclipse.org/rt/eclipselink/maven.repo</url>
</repository>

For the Oracle Java Spatial libraries you can add the following dependencies. You can place the associated JAR files into your local Maven repository:

<dependency>
  <groupId>com.oracle</groupId>
  <artifactId>ojdbc6</artifactId>
  <version>11.2.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.oracle</groupId>
  <artifactId>sdoutl</artifactId>
  <version>11.2.0</version>
  <scope>compile</scope>
</dependency>
<dependency>
  <groupId>com.oracle</groupId>
  <artifactId>sdoapi</artifactId>
  <version>11.2.0</version>
  <scope>compile</scope>
</dependency>


Summary

This blog series shows how to create geometry-based Web Services that can be deployed on Oracle WebLogic Server. Oracle provides functionality in several of its products and frameworks to ease the development of such applications. These include geographical data types in the Oracle Database and specific geographical converters in EclipseLink.

Developing geometry-based Web Services for WebLogic | Part 2

Part 1 of this blog gives an overview of an end-to-end example of a geometry-based Web Service. This part dives into some of the interesting details for the implementation of such Web Services. The details are discussed per component that was named in part 1.

Database schema

This one is pretty straightforward. When defining a table column that needs to store geographical data, you can use the MDSYS.SDO_GEOMETRY type. For example:

CREATE TABLE RESIDENCE(
  RESIDENCE_ID INT NOT NULL,
  NAME VARCHAR2(100) NOT NULL,
  GEOMETRY SDO_GEOMETRY,
  CONSTRAINT PK_RESIDENCE PRIMARY KEY (RESIDENCE_ID)
);

You can use the SDO_UTIL package to insert GML into SDO_GEOMETRY types using the SDO_UTIL.FROM_GMLGEOMETRY and SDO_UTIL.FROM_GML311GEOMETRY functions.

See the Oracle Spatial Developer's Guide for more information on the SDO_GEOMETRY type.

ORM layer

In the ORM layer we map database table rows to POJOs and vice versa using JPA. JPA implementations such as EclipseLink provide out-of-the-box mappings between most common Java data types and database columns. To map more exotic and user-defined Java objects you can use Converters in EclipseLink. You can either use an out-of-the-box converter that is shipped with EclipseLink, or code one yourself by implementing EclipseLink's Converter interface. For more information, see this blogpost by Doug Clarke.

In this case we need to map the SDO_GEOMETRY database objects to some sort of Java geometry object. Luckily, EclipseLink ships with an out-of-the-box Converter that maps the SDO_GEOMETRY type to a JGeometry object. The JGeometry class provides all kinds of convenience methods and attributes for working with geographical data. This class is part of the Oracle Spatial Java functionality. It can be used only for Oracle Spatial's SQL type MDSYS.SDO_GEOMETRY and supports Oracle JDBC Driver version 8.1.7 or higher.

To implement the mapping for geographical data we need to do the following:

  • Add the required JARs to the classpath; 
  • Annotate the JPA entities and attributes. 

The JGeometry class and associated Java classes are contained in the sdoapi.jar and sdoutl.jar files. They can be found in the library directory of your Oracle RDBMS installation. Also add the ojdbc JAR to the classpath.

Add a Convert annotation to the geometry attributes in your JPA entities that need to map to the SDO_GEOMETRY database types:

@Column
@Convert("JGeometry")
private JGeometry geometry;

Next, add the StructConverter annotation to the JPA entities containing geometry attributes. The StructConverter is a specific type of EclipseLink converter that provides out-of-the-box mappings to Oracle RDBMS struct types.

@Entity
@Table(name = "RESIDENCE")
@StructConverter(name = "JGeometry", converter = "org.eclipse.persistence.platform.database.oracle.converters.JGeometryConverter")
public class Residence implements Serializable

The org.eclipse.persistence.platform.database.oracle.converters.JGeometryConverter provides the actual mapping logic. The name attribute of the StructConverter needs to be the same as the attribute value for the Convert annotation.


Web Service layer

Since GML is an XML format we can use JAXB to generate Java classes for the GML elements that are part of the input and output values of the Web Service operations. There are several ways to generate the JAXB classes including Maven plugins for JAXB or the command-line tool xjc. A simple example of running xjc is shown by the following command:

xjc -d [target dir of generated classes] [XSD root directory] 

In our use case, we had a predefined Web Service interface and used a top-down approach to generate Java classes based on the existing interface. You can use the wsimport tool to generate the JAX-WS artifacts including the Java WebService class from the WSDL.

Note that in this end-to-end scenario the service is exposed as a SOAP Web Service. It is simple to expose the same functionality as a RESTful service. You can use JAX-RS annotations instead of JAX-WS annotations to create a RESTful service that exposes geographical data in GML format. See the following example that shows how JPA and JAX-RS can be combined to create RESTful services.
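
As a rough inline sketch of such a JAX-RS resource (the class, path, and lookup logic below are made up for illustration):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource that exposes residence geometries as GML (XML)
@Path("/residences")
public class ResidenceResource {

    @GET
    @Path("/{id}/geometry")
    @Produces(MediaType.APPLICATION_XML)
    public String getGeometry(@PathParam("id") long id) {
        // A real implementation would load the JPA entity and convert its
        // JGeometry to GML, e.g. with oracle.spatial.util.GML3.to_GML3Geometry(...)
        return "<gml:Polygon xmlns:gml=\"http://www.opengis.net/gml\">...</gml:Polygon>";
    }
}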

Business logic layer

This layer, among others, provides the logic to map between the JAXB generated classes for the GML elements and the JGeometry objects.

For the conversion from JGeometry objects to JAXB generated classes for GML elements this involves:

  • Use the static methods of the oracle.spatial.util.GML3 class to generate a String containing the textual GML representation of the geographical object;
  • Unmarshall the GML String into the JAXB generated classes.

This is shown in the following code snippet:

JAXBContext jaxbContext = JAXBContext.newInstance("net.opengis.gml");
Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
// Generate the textual GML representation of the geometry
String gml = GML3.to_GML3Geometry(jGeometry);
// Unmarshal the GML string into the JAXB generated classes
ByteArrayInputStream bais = new ByteArrayInputStream(gml.getBytes());
JAXBElement jaxbGeometry = (JAXBElement) unmarshaller.unmarshal(bais);

The GML3 class and supporting code can also be found in the sdoapi.jar and sdoutl.jar files.

For the conversion from JAXB generated classes for GML elements to JGeometry objects you need to retrieve the geographical data from the GML elements and use the static methods of the JGeometry class to instantiate the proper JGeometry object. For example:

JGeometry geometry = JGeometry.createLinearPolygon(coordinates, spatialDimension, spatialReference);

Read about the deployment specifics for this geometry-based Web Service on Oracle WebLogic Server in part 3 of this blog.

Developing geometry-based Web Services for WebLogic | Part 1

In a recent project we developed Web Services that expose geographical data in their operations. This blog explains the use case for the service, gives an overview of the software architecture, and briefly discusses GML as markup language for geographical data. Part 2 of this blog provides pointers on the implementation of the service while part 3 discusses the deployment on Oracle WebLogic Server.

Use Case

The "BAG" (Basisregistratie Adressen en Gebouwen) is a Dutch national database containing information on all addresses and buildings in the Netherlands, and is maintained by Dutch municipalities. For several object types the BAG also maintains the associated geographical location and shape; for example for premises and cities.

Often organizations have their own GIS/GEO systems and databases for analysis and decision-making purposes. In this particular project we created Web Services to retrieve and modify the geographical data in these GIS/GEO systems based on updates from the national BAG database.

There are of course numerous use cases for geo-based services such as creating mashups with maps and viewers, and offering geo-based public APIs for your customers to integrate with.

Software Architecture

The following figure shows an overview of the software architecture for the Web Services we developed.

Overview of the software architecture for the Web Services 


These services consist of the following components:

  • Database schema containing the geographical, and related administrative data. The schema is located in an Oracle RDBMS 11g Enterprise Edition instance. The geometry is stored as SDO_GEOMETRY type; an Oracle spatial object type that provides methods for storing, accessing, and altering geographical attributes.
  • Object-Relational Mapping (ORM) layer that provides access to the geographical data by converting SDO_GEOMETRY database object types into JGeometry Java objects. The persistency layer is implemented using the JPA (Java Persistence API) standard and EclipseLink as persistency provider. JGeometry is a Java class provided by Oracle as part of the Oracle Spatial Java functionality.
  • Web Service layer that exposes the service and its operations to consumers using SOAP. The operations expose geographical elements as GML (Geography Markup Language) data types. GML is an XML grammar for expressing geographical features that is maintained by the Open Geospatial Consortium (OGC). The Web Service layer is implemented using JAX-WS and JAXB.
  • Business logic layer that contains Java logic for validation and transformation. As part of the transformation logic, this component converts GML elements into JGeometry objects and vice versa.

GML

The following XML snippet shows an example of a GML element. Such elements are part of the input and output of the Web Service operations. In this case, the element denotes a single polygon containing a hole. The polygon is defined by the exterior element, and the hole is defined by the interior element.

<gml:Polygon srsName="urn:ogc:def:crs:EPSG::28992" xmlns:gml="http://www.opengis.net/gml">
  <gml:exterior>
    <gml:LinearRing>
      <gml:posList srsDimension="2">82862.708 436122.616 ... (more coordinates) ... 82862.708 436122.616</gml:posList>
    </gml:LinearRing>
  </gml:exterior>
  <gml:interior>
    <gml:LinearRing>
      <gml:posList srsDimension="2">82832.967 436145.273 ... (more coordinates) ... 82832.967 436145.273</gml:posList>
    </gml:LinearRing>
  </gml:interior>
</gml:Polygon>

Note the following:

  • There are several versions of GML available. The accompanying XML Schemas for GML can be found on the OGC website.
  • GML doesn't define a default Coordinate Reference System (CRS) from which the absolute position of the defined GML elements can be determined. Instead you have to define what coordinate reference system should be used. This is done by specifying the srsName attribute. In the above example, the EPSG::28992 (European Petroleum Survey Group) reference system is used which has the Dutch city of Amersfoort as reference for GML elements. You can find several reference systems at http://spatialreference.org and http://www.epsg-registry.org
  • You need to define the dimension of the GML elements using the srsDimension attribute. In the above example all elements are two-dimensional.
  • GML defines several geographical types. In our use case, a residence or municipality is represented by a point, simple polygon, or multi surface. A multi surface can include multiple polygons, for example when defining two buildings that are not attached but together form one residence. The available geographical types for GML can be found on the OGC website as well.

Part 2 of this blog will dive into some interesting implementation aspects of the Web Service.

Friday, February 22, 2013

From the Trenches 2 | Patching OSB and SOA Suite to PS5

In my previous blog I talked about a recent upgrade from Oracle Fusion Middleware 11g PS2 to 11g PS5. This blog continues with two post-installation steps that were required to complete the upgrade.

Patching Oracle B2B for ebMS-based services

In this particular project we implemented ebMS-based services using Oracle B2B and Oracle Service Bus 11g. Besides "plain" SOAP Web Services, ebMS is part of the Dutch government standards to exchange messages and expose services between organizations in the public sector. You can read more about implementing ebMS-based services using Oracle B2B, Oracle Service Bus and Oracle SOA Suite in this presentation.

The ebMS standard makes extensive use of SOAP headers to facilitate features such as guaranteed delivery and to avoid duplicate messages. The following snippet shows part of an ebMS message header.


One of the identifiers used in the ebMS message exchange is the manifest id. According to the ebMS specification maintained by OASIS, this needs to be of the XML NCName type. This type has some restrictions; for example, its values cannot start with a number. In Oracle B2B 11g PS2 the manifest id value is prefixed with the text oracle. This prefix is removed in 11g PS5, resulting in the following error from the B2B trading partner at runtime:

09:06:00 Local Listener(8914) Result: "Error" "Fault sent:<SOAP:Fault><faultcode>SOAP:Client</faultcode><faultstring>org.xml.sax.SAXParseException: cvc-datatype-valid.1.2.1: '0A1A1A1A1A1A1A1A1A1A1A1A1A1A1A1A' is not a valid value for 'NCName'.</faultstring></SOAP:Fault>"

Luckily there is an easy-to-apply patch that solves this problem; see article 1497168.1 on Oracle Support. After applying the patch, the manifest id is prefixed with a text value again.

Changing namespaces in WSDLs and XSDs of JAX-WS Web Services

The environment that was patched contains several Java applications running on WebLogic Server. These applications expose Web Services using JAX-WS. A meet-in-the-middle approach was used to create them: the business logic implemented in Stateless Session Beans and JPA (EclipseLink) is integrated with the Java classes generated from the predesigned WSDLs and XSDs.

Depending on the developer that created the Web Service, deployment descriptors such as webservices.xml and weblogic-webservices.xml were added to the application. Descriptors are used for configuration, overriding default settings, and adding metadata. For Web Services this can be the endpoint, port configuration, linkage of the Web Service to EJB components, and so on. When deployed, the WSDL location of Web Services is listed in the WebLogic Console and the WSDL can be retrieved at runtime.

After the patch we noticed that these artifacts weren't identical to the original WSDLs and XSDs. More specifically, the namespaces of the XSD elements in the request and response message definitions were changed to the namespace of the service itself. At runtime however, the service accepted requests and responses as defined by the original contract. This makes it difficult to test these Web Services using clients that inspect the runtime WSDL; for example when creating a default project in soapUI.

This issue was resolved by removing the webservices.xml and weblogic-webservices.xml deployment descriptors from the Java archive and redeploying the Web Services to WebLogic Server. The WSDL that can be retrieved at runtime matches the original designed WSDL again.

Saturday, November 24, 2012

Eventing Hello World

This week I presented the "Introduction in Eventing in Oracle SOA Suite 11g" session at the DOAG conference in Nürnberg. I used several demos in this session to show the eventing capabilities of SOA Suite. This blog contains the source code and accompanying explanation so you can replay the demo yourself.

Introduction
An event is the occurrence of something relevant; it signals a change in state that might require an action. Examples of events are: an invoice that has been paid, a customer that moved to a new address, a new purchase order, and so on. Events are complementary to processes and services: processes and services describe what should be done, events describe when something important occurs. SOA is not only about (synchronous) services and processes (what), but also about events (when). Eventing improves decoupling in your SOA landscape.

Slides
The presentation slides can be viewed and downloaded from my SlideShare account. The presentation introduces the concept of eventing, discusses some basic eventing patterns, and talks about the implementation of eventing in SOA Suite using AQ, JMS, and EDN.




Code
You can download a zipfile containing the JDeveloper workspaces for the SOA Composites (DOAG2012_Eventing_SOASuite) and Java projects (DOAG2012_Eventing_Java) used in the demo here.

Prerequisites
You'll need an Oracle SOA Suite 11g runtime to deploy and run the SOA Composites. Oracle provides a pre-built SOA and BPM virtual machine for testing purposes that is easy to install.

Download JDeveloper 11g (11.1.1.6) or later from Oracle Technology Network to inspect and modify the SOA Composite and Java projects.

Setup
In order for the demos to run some plumbing needs to be done for AQ and JMS:

  • The configuration steps required for the AQ demos are listed in the aq_configuration.txt file that is part of the Queuing_Utilities JDeveloper project.
  • The configuration steps required for the JMS demos are listed in the jms_configuration.txt file that is part of the Queuing_Utilities JDeveloper project.
  • In JDeveloper modify the existing Database Connection to point to your DOAG12_JMSUSER database schema.
  • In JDeveloper modify the GenerateInvoice File Adapter of the Billing SOA Composite to write the invoice file to an existing directory of the SOA Suite runtime.
  • In JDeveloper modify the WriteSensorToFile File Adapter of the ProcessSensors SOA Composite to write sensor values to an existing directory of the SOA Suite runtime.
  • Use Enterprise Manager to create a partition in SOA Suite called doag2012_eventing.
  • Deploy the following SOA Composites to the doag2012_eventing partition of the SOA Suite runtime using JDeveloper, Enterprise Manager or scripts: Order2Cash, Dunning, Billing, and ProcessSensors.
  • Deploy the Java application CRM to the WebLogic Server.

Advanced Queuing (AQ)
The first demo shows the queuing capabilities of AQ and shows how to integrate SOA Composites with AQ queues:

  1. Insert a record in the ORDERS table in the DOAG12_WEBSHOP schema. 
  2. The BIT_ENQUEUE_NEW_ORDER_EVENT trigger will execute the ENQUEUE_NEW_ORDER_EVENT procedure that enqueues a message to the NEW_ORDER_QUEUE.
  3. You should see a new event in the NEW_ORDER_QT queue table of the DOAG12_JMSUSER schema.
  4. If the Order2Cash SOA Composite is deployed you should see a new instance in Enterprise Manager. This process is started by receiving events from the NEW_ORDER_QT queue using an AQ Resource Adapter.

Java Message Service (JMS)
The next demo shows the publish/subscribe capabilities of JMS and shows how to integrate SOA Composites with JMS:

  1. Complete the Book Order Human Task of the Order2Cash process instance that was started in the previous step.
  2. The Order2Cash instance will use a JMS Adapter to publish an event to the JMS topic DOAG12_BillingTopic. You can inspect the topic in WebLogic Console and see that an event has been published. There are two subscribers to the JMS topic: the Billing SOA Composite (using a JMS Adapter) and the CRM Java application (using a Message-Driven Bean or MDB; a sketch of such an MDB follows below).
  3. In Enterprise Manager you should see a new instance of the Billing SOA Composite that is started through the JMS Adapter. The instance should have written an invoice file using the File Adapter.
  4. In the SOA Suite log file (e.g. user_projects/domains/[domain]/servers/[server]/logs) you should see a log statement produced by the CRM application that indicates that the event is received.
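
For illustration, the CRM subscriber mentioned in step 2 could look roughly like the following sketch. The topic JNDI name is an assumption, and depending on the WebLogic release the destination may need to be mapped in weblogic-ejb-jar.xml instead of via mappedName:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical CRM subscriber; mappedName holds the JNDI name of the topic
@MessageDriven(mappedName = "jms/DOAG12_BillingTopic", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic")
})
public class BillingEventMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // Produces the log statement mentioned in step 4
            System.out.println("CRM received billing event: " + ((TextMessage) message).getText());
        } catch (Exception e) {
            throw new RuntimeException("Failed to process billing event", e);
        }
    }
}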

Event Delivery Network (EDN)
This demo shows the publish/subscribe capabilities of EDN and shows how to use EDN from SOA Composites:

  1. Wait for 5 minutes after the billing event has been published from the Order2Cash process instance using JMS. The OnAlarm branch will be activated and publish a DunningEvent using a Mediator component.
  2. The Dunning SOA Composite is subscribed to the DunningEvent; a new instance of this SOA Composite will be started and can be inspected from Enterprise Manager.
A second usage of EDN is demonstrated using the following steps:
  1. Start a new instance of the Order2Cash process by inserting a new order in the database. Complete the Human Task again. Only this time use Enterprise Manager to fire a PaymentEvent to EDN. This can be done by right-clicking the soa-infra node and selecting Business Events. Select the PaymentEvent and hit the Test button. You can use the PaymentExample.xml file from the Order2Cash JDeveloper project as example event payload. 
  2. Using correlation the running Order2Cash process instance will receive an in-flight event and will continue without starting the Dunning process.

Sensors and Composite Sensors
The last demo shows the sensor and composite sensor capabilities of SOA Suite. Composite sensors are used to add metadata to running instances so they are easier to find in Enterprise Manager or via the Java APIs of SOA Suite:
  1. Inspect the composite sensors of the Order2Cash SOA Composite in JDeveloper.
  2. In Enterprise Manager navigate to the instances of the Order2Cash SOA Composite. Use the Add Fields button to search for instances based on functional data such as customer name and order id. 
Alternatively, composite sensors can also be published using JMS.

Sensors and monitoring objects can be used to feed runtime data of BPEL component instances into Oracle BAM, or publish this data to JMS, Database, or AQ in a non-intrusive way. Sensors are based on configuration instead of adding additional activities and components to your SOA Composites:
  1. Inspect the sensor configuration of the Dunning BPEL component in JDeveloper. These sensors publish to the DOAG12_SensorQueue queue.
  2. In Enterprise Manager navigate to the instances of the ProcessSensors SOA Composite. You should see new instances that are started based on the published sensor events.

Conclusion
The presentation and the demos in this blog give you an overview of the eventing capabilities of SOA Suite. The demo is stuffed with different eventing techniques (AQ, JMS, EDN); this should not be considered a best practice ;-) Analyze what the best eventing implementation is for your specific scenario.

Some best-practices:
  • Model events in your BPM and SOA projects;
  • Use events to notify running processes;
  • Expand the service registry to include events;
  • Use events for additional decoupling between processes and services;
  • Eventing is not just queuing; also consider publish/subscribe and event stream processing;
  • There is not a single best technology for eventing implementation in SOA Suite.

Thursday, September 13, 2012

Managing EclipseLink using JMX


EclipseLink is an open-source persistency framework for mapping Java objects to relational data and vice versa, which is called Object-Relational Mapping (ORM). Besides implementing the Java Persistence API (JPA) standard, it also provides capabilities for Object-XML Mapping (OXM) and the creation of Database Web Services.

The JPA standard specifies caching functionality. Caching improves performance since data that is already queried and present in the cache can be fetched from memory instead of executing all queries in the backend database. However, this means that data should also be modified through the same cache managed by the JPA provider. If not, the cache won't be aware of changes and becomes out of sync with the actual data stored in the backend. This results in incorrect data being returned to the clients of the EclipseLink-based application. A typical situation in which this occurs is when IT operations directly changes data in the database on behalf of users or when solving issues, thereby bypassing the cache.

There are a few approaches to deal with such situations:

  • Restart the application server or application after the change to clear the cache;
  • Disable caching in the application altogether;
  • Expand the application (user interface, persistency layer) so that IT operations has dedicated functionality to modify data through the application instead of being forced to modify data in the backend database;
  • Implement a mechanism to automatically invalidate the EclipseLink cache triggered by events from the database;
  • Configure the persistency layer so that entities in the cache are automatically invalidated on a regular interval, making sure that data can only be incorrect or incomplete for a certain time period (see the sketch below).
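
A sketch of that last option, using EclipseLink's @Cache annotation (the entity and the expiry value are arbitrary examples):

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.Cache;

// Cached entries for this entity are invalidated 10 minutes after being read,
// which bounds how long directly modified backend data can be served stale
@Entity
@Cache(expiry = 600000) // in milliseconds; the value is arbitrary
public class Customer implements Serializable {
    @Id
    private Long id;
    // further attributes and accessors omitted
}
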
The above approaches have drawbacks such as additional coding, negative impact on performance when caching is disabled, or increased downtime of the application.

This blog describes a way to manage EclipseLink caches that we use in customer projects to help IT operations and application managers, and that doesn't have the disadvantages named earlier. It uses:

  • the out-of-the-box support by EclipseLink for Java Management Extensions (JMX); 
  • the MBeans provided by WebLogic Server; 
  • the monitoring application JConsole that is JMX-compliant and shipped with the Java Development Kit (JDK). 
Using JConsole, EclipseLink caches can be monitored, managed, and invalidated at runtime without impact on running applications.

One-time configuration of WebLogic to enable remote JMX connections

WebLogic Server needs to be configured to allow for remote JMX connections that are set up from JConsole and to enable the invocation of MBeans. The following blogs show you how to accomplish this:


Some pointers here:

  • Make sure you add the Java system properties -Declipselink.register.dev.mbean=true and -Declipselink.register.run.mbean=true to the start script of the WebLogic Server on which the EclipseLink applications run. Adding these parameters makes sure that the EclipseLink MBeans show up in JConsole.
  • Restart the WebLogic Server after the configuration changes.

Using JConsole

After configuring and restarting WebLogic Server, the default WebLogic server logfile should include the endpoints for the JMX servers to which we connect using JConsole. For example:


<JMX Connector Server started at service:jmx:iiop://localhost:7001/jndi/weblogic.management.mbeanservers.runtime .> 
<JMX Connector Server started at service:jmx:iiop://localhost:7001/jndi/weblogic.management.mbeanservers.edit .> 
<JMX Connector Server started at service:jmx:iiop://localhost:7001/jndi/weblogic.management.mbeanservers.domainruntime .>


After setting the JAVA_HOME and WL_HOME environment variables as indicated in the blogs, you can now start JConsole using the command line:

jconsole -J-Djava.class.path=%JAVA_HOME%\lib\jconsole.jar;%JAVA_HOME%\lib\tools.jar;%WL_HOME%\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -J-Dcom.sun.management.jmxremote


Enter the credentials you provided in WebLogic Server for the IIOP protocol and enter the JMX endpoint for the mbeanservers.runtime to connect to WebLogic Server. JConsole now lets you inspect and manage all sorts of aspects such as memory, threads, classes, and so on.

To view the currently active EclipseLink sessions, click on the "MBeans" tab and expand the node "TopLink". Every active EclipseLink session is shown as a separate subnode. Expand a session and select "Attributes" to inspect the EclipseLink session. Clicking on "NumberOfObjectsInAllIdentityMaps" shows a graph displaying the number of cached entities.


You can now clear the cache of an EclipseLink session at runtime by executing the operation "invalidateAllIdentityMaps" on the "Operations" page. This operation can be used to clear the cache after someone, e.g. IT operations, modifies the backend data directly and bypasses the EclipseLink persistency layer.

Note (thanks to Shaun Smith): you should be calling invalidateAllIdentityMaps(), which has replaced initializeAllIdentityMaps(). Invalidation ensures that object identity is not lost in a running transaction; initialization does not, and is only safe in single-threaded dev/test.
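
The same invalidation can be scripted instead of clicked. Below is a rough sketch of a remote JMX client; the credentials and the exact ObjectName are assumptions (copy the real MBean name from the TopLink node in JConsole), and wljmxclient.jar must be on the classpath:

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class ClearEclipseLinkCache {
    public static void main(String[] args) throws Exception {
        // Runtime MBean server endpoint as logged by WebLogic at startup
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:iiop://localhost:7001/jndi/weblogic.management.mbeanservers.runtime");

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");   // assumed username
        env.put(Context.SECURITY_CREDENTIALS, "welcome1"); // assumed password
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Illustrative name only: copy the real session MBean name from JConsole
            ObjectName session = new ObjectName("TopLink:Name=Session(my-persistence-unit)");
            mbsc.invoke(session, "invalidateAllIdentityMaps", null, null);
            System.out.println("Cache invalidated for " + session);
        } finally {
            connector.close();
        }
    }
}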


By using this approach, IT operations and application managers can do their job, which includes direct data modifications to solve urgent issues and user requests, and refresh the EclipseLink cache without downtime or impact for end users.