Tuesday, December 24, 2013

WebLogic Hackathon Pics & Vids

Here are some pictures and videos from the UKOUG Tech 13 WebLogic Hackathon to wrap up my previous posts on this event: introduction and resources. The videos were organized and hosted by Bob Rhubart, manager of the architect community at Oracle Technology Network.

Preparation of the hackathon

Participants pouring in

Presentations and introduction of the labs

Labs in progress

And the winners of the WebLogic Hackathon!

Sunday, December 22, 2013

Hands-on-lab material available for WebLogic provisioning using Puppet!

At the recent UKOUG Tech 2013 conference we organized a Super Sunday WebLogic Hackathon session. Part of this session was a hands-on-lab in which attendees could automatically provision WebLogic Server using Puppet. Several people asked if the materials of this lab could be published so they could do the hands-on-lab themselves.

This blog contains links to the materials and a five-step plan to get you up and running with the lab. Running and using the lab and VMs is of course at your own risk.

Last but not least, credit goes to Simon Haslam, Peter Lorenzen, and Guido Schmutz for organizing and assisting in the WebLogic Hackathon!

From R-L: Simon Haslam, Peter Lorenzen, Guido Schmutz, Ronald van Luttikhuizen

1. Setup

The setup for the lab is shown in the following figure. Read my previous blog for more info.


2. Introduction to middleware provisioning using Puppet and Chef

The following presentation introduces middleware provisioning using Chef and Puppet. It shows where middleware provisioning fits in the entire process of software delivery, and what benefits the automation of provisioning offers. Finally, the presentation introduces the Hands-on-Lab.


3. Download the VMs

The lab uses a separate VM that acts as Puppet Master and another VM that acts as Puppet Agent. When you run the Hands-on-Lab you should at least have the Puppet Master VM running. You can add an arbitrary number of Puppet Agent VMs. The VMs can be run using Oracle VM VirtualBox.

You can download the VMs from the following locations:

4. Follow the lab instructions & enjoy :-)

The following presentation contains a step-by-step tutorial for completing the lab.


5. Want to know more?

I strongly recommend the following book if you want to know more about provisioning and configuration management using Puppet:

Puppet 3 Beginner's Guide from Packt by John Arundel


Tuesday, November 26, 2013

UKOUG Tech 2013 WebLogic Hackathon - Server Provisioning using Puppet

Last week Peter Lorenzen blogged about the Super Sunday event at the upcoming UKOUG 2013 Tech conference. One of the streams for that day is the WebLogic Hackathon. This stream is presented by an international lineup consisting of Simon Haslam, Peter Lorenzen, Jacco Landlust, Guido Schmutz, and Ronald van Luttikhuizen.

Peter has prepared a lab where participants can perform a scripted installation and configuration of Oracle WebLogic Server 12c. I've prepared a follow-up lab in which we will do a similar installation and configuration, only this time fully automated using Puppet.

Puppet is a tool to automate configuration management. Together with Chef it's one of the more popular configuration management tools at the moment. Puppet allows you to describe the desired (to-be) state of your servers by declaring resources. These declarations can describe user accounts, security settings, packages, directories, files, executable statements, services, and so on. Manifests are the files in which resource declarations are listed. Puppet periodically applies manifests by compiling them into machine-specific catalogs and applying those to the managed servers. Puppet is capable of inspecting the machines, so it only applies the changes that are necessary. If a machine is already in the desired state, Puppet applies no changes.

The following example shows a simple manifest that can be used to install and configure the Network Time Protocol (NTP) service. The manifest declares that the "ntp" package needs to be present, that the ntp configuration file is copied to the right location, and that the ntp service is running. A change in the configuration file will restart the service.

# Make sure the ntp package is installed
package { "ntp":
   ensure  => present,
}

# Put the configuration file in place; "require" ensures the
# package is installed before the file is managed
file { "/etc/ntp.conf":
   owner    => root,
   group    => root,
   mode     => 444,
   source   => "puppet:///files/etc/ntp.conf",
   require  => Package["ntp"],
}

# Keep the ntpd service enabled and running; "subscribe" restarts
# the service whenever the configuration file changes
service { "ntpd":
   enable     => true,
   ensure     => running,
   subscribe  => File["/etc/ntp.conf"],
}
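
For a quick test without a Puppet Master, you can apply a manifest like this locally with the puppet apply command (assuming the manifest is saved as ntp.pp):

puppet apply ntp.pp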

Using a configuration management tool to automate server management, compared to manual installation and configuration (artisan server crafting), has the following benefits:

  • You eliminate tedious and repetitive work since you only need to write a manifest once and can apply it to as many servers as you want;
  • Puppet manifests are defined in a machine- and OS-independent domain-specific language, so the manifests are portable and can be reused;
  • You can keep servers in sync and you know what is running on which server;
  • Manifests can be used as documentation: since manifests are applied by Puppet the documentation is always up-to-date;
  • Manifests can be version controlled and managed the same way you manage other code.

Puppet can be configured to run in Master/Agent mode, meaning that there is a central Puppet instance (the Master) that coordinates server management. Servers on which a Puppet Agent runs pull their catalog from the Master; the Puppet Master decides what goes into each catalog. Participants of the WebLogic Hackathon event will be divided into groups of three, in which one acts as Puppet Master and two act as Agents. This setup is shown in the following figure:



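To give an idea of how the Master decides what goes into each catalog: the node-to-manifest mapping is typically declared in the site.pp manifest on the Master. A minimal sketch, assuming the NTP resources from the earlier example are wrapped in an ntp class, and using hypothetical node names:

# site.pp on the Puppet Master (sketch; node names are hypothetical)
node 'agent1.example.com', 'agent2.example.com' {
  # the resources of the ntp class end up in these agents' catalogs
  include ntp
}
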
So, sign up for the WebLogic Hackathon at UKOUG 2013 Tech and join us for this cool hands-on-lab!

If you want to know more about Oracle WebLogic Server please visit Oracle Technology Network. If you want to know more about Puppet I strongly recommend the Puppet 3 Beginner's Guide by John Arundel. Also see Edwin Biemond's Oracle Puppet modules on Puppet Forge.

Tuesday, October 22, 2013

Nordic OTN Tour 2013

This year I am part of the team that is presenting at the Nordic OTN Tour 2013. It covers three countries: Sweden, Denmark, and Norway, and is organized by the local user groups. Tim Hall, Mike Dietrich, Sten Vesterli, and I are presenting on Database and Middleware in all countries.

Sweden, October 22nd

Today we presented in Stockholm. The program can be found on their website.

It was an interesting day, both from a Middleware perspective and from a Database perspective. The user group decided to plan two parallel tracks: the sessions about Middleware in the morning and the Database sessions in the afternoon. Because of this, the Middleware sessions were competing with each other, and so were the Database sessions. From that perspective it would have been nicer to have a Middleware track and a Database track running in parallel. The advantage of the Orcan approach, however, is that Database people will attend Middleware sessions that they might otherwise have skipped.

Denmark, October 23rd

The programs are not exactly the same in all three countries. In Denmark there are three parallel tracks, and instead of talking about using Oracle Fusion Middleware 11g to realize a SOA, I will talk about Oracle BPM Suite. The program can be found on the website of the Danish user group. Apart from the people who were at the Swedish day, there are a number of other speakers in Copenhagen:

  • Rasmus Knappe
  • Jørgen Christian Olsen
  • Gordon Flemming
  • Lars Bo Vanting 

Norway, October 24th

The last day is in Oslo. In Norway there are two parallel tracks, like in Sweden. I will do the same presentations: Overview of Oracle SOA Suite 11g and Creating SOA with Oracle Fusion Middleware 11g.

The same team is doing the presentations, and there are a few Norwegian speakers as well:

  • Trond Brenna
  • Harald Eri and Bjørn Dag Johansen
The agenda is published here.

All in all a very interesting tour. I look forward to meeting the different user groups, spending time with the other ACE Directors, and talking about Oracle stuff all day ;)


Wednesday, September 25, 2013

OpenWorld and JavaOne 2013 so far! - Part II

In part I of this blog you read about the numbers of OpenWorld and our activities at OpenWorld and JavaOne. So what about the news? Almost all new products and further development of existing products center around support for multi-tenancy, cloud-based computing, and the processing and analyzing of data and events from the billions of devices connected to the Internet (the Internet of Things). Furthermore, the traction around User Experience keeps growing.


Some concrete news that showcases this:

Oracle announced its in-memory database feature for 12c. This feature speeds up both OLTP and OLAP by storing and accessing data in both row and column format. Enabling the in-memory option for tables and partitions seems pretty simple based on the keynote demo. The in-memory option is transparent and works out-of-the-box: application code doesn't need to be rewritten, and features such as those offered by RAC remain intact. Together with the in-memory database, Larry Ellison announced the "Big Memory Machine": a new box called the M6-32, with lots of cores and RAM, aimed at running in-memory databases. Finally, the Oracle Database Backup Logging Recovery Appliance was launched: an appliance specifically designed for protecting databases.

There was an update on the developments in the Fusion Middleware product stack. Service Bus, SOA Suite, BPM Suite, and so on are enhanced for mobile computing and cloud integration, among others by providing better support for REST and JSON in SOA Suite, new Cloud Adapters to integrate with third parties such as Salesforce, and the use of ADF Mobile to build code once and run it on various mobile platforms. WebLogic 12c supports WebSockets and better integration with the multi-tenant Oracle Database 12c. The BPM Suite ships with prepackaged processes (accelerators) that can be used as blueprints for several processes. BPM Suite is also extended to support case management.

Coherence GoldenGate HotCache was presented. When using caching there is always the risk of stale data when data is modified outside the caching layer, e.g. directly in the datasource. A popular caching solution is Coherence. The new Coherence GoldenGate HotCache feature enables you to push data updates from the database to the Coherence layer, instead of invalidating the cache and/or periodically refreshing it from the datasource.

There was also news at JavaOne. Java ME and Java SE are being converged, in terms of both language and APIs, in JDK 8 and beyond. This means that developers can use common tooling, that code compatibility increases, and that only one skillset is needed. This will make it easier for Java developers to create code for the "Internet of Things". Both Java SE 8 and ME 8 are available as Early Access downloads. JDK 8 is scheduled to be released somewhere in the spring of 2014. Important new features include Lambda expressions (JSR 335), Nashorn (a JavaScript engine), Compact Profiles, the Date and Time API (JSR 310), and several security enhancements. Lambda expressions are one of the biggest changes in the history of the Java language, making it easier to write clean code and avoid boilerplate.
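
As a small illustration (a minimal sketch, not from the conference material), compare the pre-JDK 8 anonymous inner class with the equivalent lambda expression:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> tracks = new ArrayList<>(Arrays.asList("Java EE", "Java SE", "Java ME"));

        // Pre-JDK 8: an anonymous inner class just to pass a comparison
        Collections.sort(tracks, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.compareTo(b);
            }
        });

        // JDK 8: the same comparator as a lambda expression (JSR 335)
        Collections.sort(tracks, (a, b) -> a.compareTo(b));

        System.out.println(tracks); // [Java EE, Java ME, Java SE]
    }
}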

This summer the Java EE 7 spec was released. Java EE 7 is aimed at providing better developer productivity and offering better support for HTML5 through WebSockets, JSON support, and REST support. Other new features are the batch API and support for the new, simplified JMS standard. GlassFish 4, the reference implementation for Java EE 7, is available for download. Project Avatar, a JavaScript services layer focused on building data services, has been open-sourced.
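
To illustrate the simplified JMS standard (JMS 2.0): a minimal sketch of sending a message, assuming the connection factory and queue are obtained elsewhere (e.g. injected or looked up):

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class JmsSender {
    // JMS 2.0: a single AutoCloseable JMSContext replaces the separate
    // Connection, Session, and MessageProducer objects of JMS 1.1
    public void send(ConnectionFactory factory, Queue queue, String text) {
        try (JMSContext context = factory.createContext()) {
            context.createProducer().send(queue, text);
        }
    }
}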

The Tuesday keynote was all about the Cloud. Oracle is adding 10 new services to the Oracle Cloud, including the Compute Cloud and Documents Cloud. Oracle and Microsoft announced a partnership that makes it possible to run Oracle Databases and Oracle WebLogic Servers on Oracle Linux on the Microsoft Azure platform.

This is only the tip of the iceberg. Visit the OpenWorld site to watch all the keynotes.

Tuesday, September 24, 2013

OpenWorld and JavaOne 2013 so far! - Part I

It's that time of the year when we are immersed in the JavaOne and Oracle OpenWorld conference in San Francisco, a conference that always leaves a bit of a shock-and-awe effect due to its enormity. In these blogs you can read about our adventures at OpenWorld and the news so far.

Let's talk numbers first. This year there are even more attendees than last year: approximately sixty thousand (of which 0.0033% work for Vennster). There are 2,555 sessions (of which 0.078% are presented by Vennster; the numbers are getting better :-). The attendees come from 145 countries and various continents: 69% from North America, 21% from EMEA, 6% from Japan and Asia Pacific, and 4% from South America.


Activities

Friends and family members sometimes think that OpenWorld means sitting in a park with a cocktail while watching the America's Cup and touring the Golden Gate Bridge. In reality our stay at OpenWorld is jam-packed with lots of events besides attending presentations, going to the demogrounds to discuss the latest products and features, and going to the hands-on-labs to keep up to date with all of the latest developments:

  • ACE Director briefing: 2 days filled with the latest updates by product management at Oracle HQ
  • Attending Partner Advisory Councils for SOA & BPM, and WebLogic & Cloud Application Framework (CAF)
  • Meeting old friends and new and interesting people at the OTN Lounge and ACE dinner
  • Participating in OTN vidcasts and podcasts
  • Social day and reception with the SOA, BPM, and WebLogic community and Oracle product management hosted by Jürgen Kress
  • Meetups such as the BeNeLux (Belgium, Netherlands and Luxembourg) architects meetup
  • Meetings with customers and partners
  • Presenting two sessions on UX design patterns and a live development session for Fusion Middleware


The upcoming blog will talk about some of the exciting news here at OpenWorld and JavaOne.

Sunday, September 22, 2013

Case Management - Part 1

I have been using BPMN in projects for a while now. It is very useful in describing the predefined sequence of actions that need to be taken in a specific process.

With BPMN 2.0 we have the option to support the process with IT using a Business Process Management System (BPMS), without having to rewrite the process definition. Oracle BPM Suite is an example of such a BPMS. You take the BPMN 2.0 process definition and implement the activities and events using Oracle SOA Suite. This is a very powerful concept and works well in certain situations. However, not every process actually follows a limited, predetermined sequence of events. A lot of process management or workflow projects fail because the BPM approach is too restrictive or because the system does not support all the possible scenarios that apply in reality.

BPMN to model non-deterministic processes

So how do we model processes or actions that don't follow a predefined sequence? BPMN offers ad-hoc sub processes. These can be used to depict a number of activities that can be executed in any order, any number of times, until a certain condition is met. The notation for this is depicted below.
ad hoc sub process
However, none of the BPMS systems I use (Oracle BPM Studio, Activiti) supports this construct. You can see this in the screenshot from Oracle BPM below. Sub processes are supported, Event Sub Processes are supported, the creation of ad-hoc tasks by a user or the system at runtime is supported, but Ad-hoc sub processes as defined in BPMN 2.0 are not.

Activities supported by Oracle BPM
There are obviously workarounds for that. You can model a process that offers a choice (gateway) every time a new step is taken and loops around until a certain condition is met. This results in very complex process models that are hard to read and understand, which 'breaks' the idea of having a BPMN 2.0 process that is used both by the business as documentation and by IT for implementing it directly in your BPMS. Besides, it does not take into account the fact that you want to be able to add new activities on the fly, during the execution of your process.

Using Case Management Model and Notation 

The OMG defined a new specification: the case management model and notation or CMMN that can be used to model non-deterministic processes. It is currently in beta. A case is defined as: "(...) a proceeding that involves actions taken regarding a subject in a particular situation to achieve a desired outcome." A case has actions, just like a process. The difference is that you don't have to know in advance what the sequence of these actions is, it can be completely ad-hoc. But as the specification continues: "(...) As experience grows in resolving similar cases over time, a set of common practices and responses can be defined for managing cases in more rigorous and repeatable manner." Hence the term Case Management.

The specification describes the following:

  1. Case Management Elements. A semantic description of elements like Case Model Elements, Information Model elements and Plan Model Elements. 
  2. Notation. This describes the notation of elements like Case, Case Plan Models, Case File Items, Stages, Tasks, Milestones, etc. 
  3. Execution semantics that specify the lifecycle of a Case instance, Case File Items, Case Plan Items, Behavior property rules etc. 
Below is an example model of the case "Write Document".  


Now let's compare this model to BPMN. 
  • The model does not show roles, even though the specification states that a case has roles associated with it. In BPMN there is the concept of swim lanes to depict roles. We lose that information when modelling with CMMN. 
  • In BPMN we show messages to model interaction with other 'process pools'. This gives a good overview of dependencies on the outside world. In CMMN we can define a task that calls another case or another process, and there are event listeners in the model. The origins of the events are Case File Items, User Tasks, or timer events. There is no way to depict interaction with the outside world.
  • In BPMN you can model different gateways, including parallel gateways and exclusive (XOR) gateways. In CMMN you cannot model the fact that stages can occur in parallel in a case. 
  • In BPMN you cannot specify that ad-hoc tasks may be defined at runtime. The same is true for this model. You can model a task as a discretionary task, but there is no way to model the fact that certain users may define a task at runtime in a certain stage, or to model that they may not do that.
  • In BPMN you can't model the content of criteria, or expressions for a certain condition. As you can see, that is true for CMMN as well. There is no way to model the rules that determine when a certain criterion is met or when a certain milestone is reached. This has to do with the fact that the information (data) that is part of the case is not modelled. 
  • In BPMN you model data items. In CMMN you model Case File Items. This is a similar concept.
  • In BPMN you model activities, or tasks. In CMMN you model tasks. 
  • In BPMN you model a sequence of activities. In CMMN you model entry and exit conditions and events that cause the conditions to be met. This reflects the fact that BPMN is about a predefined sequence of activities and a case is about a set of actions that need to be executed to reach a certain end result. Unfortunately the desired end result is not modelled. 
Looking at the specification, all the relevant information is defined in the semantic part of the specification, including the execution semantics. The model is not as rich (yet?) and seems to be lacking crucial information. I guess the only way to find out how bad that is, is to start using it...

Next steps

In part 2 I will model a case (applying for a permit) using different notations. Part 3 will show how case management is supported in Oracle BPM Suite using the case component.

Monday, September 9, 2013

Oracle OpenWorld 2013 Preview Sessions

It's becoming a tradition, the Oracle OpenWorld preview event hosted by AMIS: an event for those who cannot attend OpenWorld but are interested in the sessions presented by Dutch and Belgian speakers there. Or for those who want to hear the presentations twice, of course ;-)

Like previous years, Vennster was present at the preview event with a couple of presentations. For those who visit OpenWorld, I've included the location and time:

  • [UGF9898] Oracle on Your Browser or Phone: Design Patterns for Web and Mobile Oracle ADF Applications by Lonneke Dikmans and Floyd Teter (Sunday, Sep 22, 9:15 AM - 10:15 AM - Moscone West - 2003), and
  • [CON6031] An Oracle ACE Production: Oracle Fusion Middleware Live Development Demo by Lonneke Dikmans, Lucas Jellema, Simon Haslam, and Ronald van Luttikhuizen.

The Live Development session is a unique session in which the presenters build and/or demo an application from scratch (depending on the available time and projectors :-) using Oracle Fusion Middleware products such as Oracle SOA Suite, BPM Suite, ADF, and JDeveloper.

The session description says it all:

"Many articles and presentations discuss various parts of Oracle Fusion Middleware such as Oracle Application Development Framework (Oracle ADF), Oracle SOA Suite, Oracle WebLogic, or Oracle Service Bus. Usually they do so in an isolated fashion. In this very special panel session, attendees will see at close range how it all comes together and what steps are necessary to create an end-to-end Oracle Fusion Middleware application. A team of brave administrators and developers shows how to develop different parts of an end-to-end Oracle Fusion Middleware application. Their work is monitored live while a moderator solicits audience questions. The team consists of Oracle Fusion Middleware specialists from the Oracle ACE Director program."

Lonneke, Lucas, and Ronald at work during the Live Development preview session (thanks to Jacco Landlust for the picture)

The slides for this session are available on Slideshare:


Next was Lonneke with her presentation on mobile and web patterns in ADF. Attend this presentation if you want to know why and how to engage your end users in the design and development of (enterprise) applications. Learn how the available patterns from the Applications UX program help you build better and more usable applications.
"For those who are building Oracle software for browsers or mobile phones, Oracle provides a rich set of Oracle Application Development Framework (Oracle ADF) design patterns that provide guidance on industry standards for creating rich, usable applications. Come to this session to hear two leading Oracle experts discuss how to make use of these patterns to ensure that you build applications that not only are useful for your users but also make them productive and satisfied."
Hope to see you at OpenWorld!

Saturday, August 17, 2013

Developing geometry-based Web Services for WebLogic | Part 3

Part 1 of this blog gives an overview of the geometry-based Web Services we developed. Part 2 dives into some of the interesting details for the implementation of such Web Services. This part discusses the deployment specifics of such Web Services.


Deployment

There is a catch to deploying an application to WebLogic that uses the EclipseLink StructConverter to convert geographical data into JGeometry objects. When packaging the dependent JAR files sdoapi.jar and sdoutl.jar into the application (e.g. a WAR or EAR file) you get the following error when running the application:

java.lang.NoClassDefFoundError: oracle/spatial/geometry/JGeometry at org.eclipse.persistence.platform.database.oracle.converters.JGeometryConverter.<clinit>(JGeometryConverter.java:33)

This issue is caused by classloading specifics of EclipseLink on Oracle WebLogic. You can resolve this by adding the sdoapi.jar and sdoutl.jar files to the system classpath of WebLogic: 


  • Copy the JAR files to the [WEBLOGIC_HOME]/wlserver_10.3/server/lib directory; 
  • Add these JAR files to the WEBLOGIC_CLASSPATH variable in the commEnv.cmd or commEnv.sh file located in [WEBLOGIC_HOME]/wlserver_10.3/common/bin (a sketch follows after this list);
  • Restart the WebLogic Managed Server after these steps.
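
For the second step, the addition could look like this in commEnv.sh on Linux (a sketch; the exact variable layout differs per WebLogic version, and commEnv.cmd is analogous):

WEBLOGIC_CLASSPATH="${WEBLOGIC_CLASSPATH}${CLASSPATHSEP}${WL_HOME}/server/lib/sdoapi.jar${CLASSPATHSEP}${WL_HOME}/server/lib/sdoutl.jar"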

Also see the following OTN forum post for the NoClassDefFoundError error.

We use Maven to build the Web Service. For EclipseLink you can add the following repository to the pom.xml file:

<repository>
  <id>EclipseLink</id>
  <url>http://download.eclipse.org/rt/eclipselink/maven.repo</url>
</repository>

For the Oracle Java Spatial libraries you can add the following dependencies. You can place the associated JAR files into your local Maven repository:

<dependency>
  <groupId>com.oracle</groupId>
  <artifactId>ojdbc6</artifactId>
  <version>11.2.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.oracle</groupId>
  <artifactId>sdoutl</artifactId>
  <version>11.2.0</version>
  <scope>compile</scope>
</dependency>
<dependency>
  <groupId>com.oracle</groupId>
  <artifactId>sdoapi</artifactId>
  <version>11.2.0</version>
  <scope>compile</scope>
</dependency>


Summary

This blog series shows how to create geometry-based Web Services that can be deployed on Oracle WebLogic Server. Oracle provides functionality in several of its products and frameworks to ease development of such applications. These include geographical data types in the Oracle Database and specific geographical converters in EclipseLink.

Developing geometry-based Web Services for WebLogic | Part 2

Part 1 of this blog gives an overview of an end-to-end example of a geometry-based Web Service. This part dives into some of the interesting details for the implementation of such Web Services. The details are discussed per component that was named in part 1.

Database schema

This one is pretty straightforward. When defining a table column that needs to store geographical data, you can use the MDSYS.SDO_GEOMETRY type. For example:

CREATE TABLE RESIDENCE(
  RESIDENCE_ID INT NOT NULL,
  NAME VARCHAR2(100) NOT NULL,
  GEOMETRY SDO_GEOMETRY,
  CONSTRAINT PK_RESIDENCE PRIMARY KEY (RESIDENCE_ID)
);

You can use the SDO_UTIL package to insert GML into SDO_GEOMETRY types using the SDO_UTIL.FROM_GMLGEOMETRY and SDO_UTIL.FROM_GML311GEOMETRY functions.
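
A minimal sketch of such an insert, using the RESIDENCE table above (the coordinates are made up for illustration, and the exact srsName format accepted may differ per version):

INSERT INTO RESIDENCE (RESIDENCE_ID, NAME, GEOMETRY)
VALUES (1, 'Example residence',
        SDO_UTIL.FROM_GML311GEOMETRY(
          '<gml:Polygon srsName="SDO:28992" xmlns:gml="http://www.opengis.net/gml">' ||
          '<gml:exterior><gml:LinearRing>' ||
          '<gml:posList srsDimension="2">0.0 0.0 10.0 0.0 10.0 10.0 0.0 0.0</gml:posList>' ||
          '</gml:LinearRing></gml:exterior></gml:Polygon>'));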

See the Oracle Spatial Developer's Guide for more information on the SDO_GEOMETRY type.

ORM layer

In the ORM layer we map database table rows to POJOs and vice versa using JPA. JPA implementations such as EclipseLink provide out-of-the-box mappings between most common Java data types and database columns. To map more exotic and user-defined Java objects you can use Converters in EclipseLink. You can either use an out-of-the-box converter that is shipped with EclipseLink, or code one yourself by implementing EclipseLink's Converter interface. For more information, see this blogpost by Doug Clarke.

In this case we need to map the SDO_GEOMETRY database objects to some sort of Java geometry object. Luckily, EclipseLink ships with an out-of-the-box Converter that maps the SDO_GEOMETRY type to a JGeometry object. The JGeometry class provides all kinds of convenience methods and attributes for working with geographical data. This class is part of the Oracle Spatial Java functionality. It can be used only for Oracle Spatial's SQL type MDSYS.SDO_GEOMETRY and supports Oracle JDBC Driver version 8.1.7 or higher.

To implement the mapping for geographical data we need to do the following:

  • Add the required JARs to the classpath; 
  • Annotate the JPA entities and attributes. 

The JGeometry class and associated Java classes are contained in the sdoapi.jar and sdoutl.jar files. They can be found in the library directory of your Oracle RDBMS installation. Also add the ojdbc JAR to the classpath.

Add a Convert annotation to the geometry attributes in your JPA entities that need to map to the SDO_GEOMETRY database types:

@Column
@Convert("JGeometry")
private JGeometry geometry;

Next, add the StructConverter annotation to the JPA entities containing geometry attributes. The StructConverter is a specific type of EclipseLink converter that provides out-of-the-box mappings to Oracle RDBMS struct types.

@Entity
@Table(name = "RESIDENCE")
@StructConverter(name = "JGeometry", converter = "org.eclipse.persistence.platform.database.oracle.converters.JGeometryConverter")
public class Residence implements Serializable

The org.eclipse.persistence.platform.database.oracle.converters.JGeometryConverter provides the actual mapping logic. The name attribute of the StructConverter needs to be the same as the attribute value of the Convert annotation.


Web Service layer

Since GML is an XML format we can use JAXB to generate Java classes for the GML elements that are part of the input and output values of the Web Service operations. There are several ways to generate the JAXB classes including Maven plugins for JAXB or the command-line tool xjc. A simple example of running xjc is shown by the following command:

xjc -d [target dir of generated classes] [XSD root directory] 

In our use case, we had a predefined Web Service interface and used a top-down approach to generate Java classes based on the existing interface. You can use the wsimport tool to generate the JAX-WS artifacts including the Java WebService class from the WSDL.
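
A typical invocation looks like this (the -keep option retains the generated source files):

wsimport -keep -d [target dir of generated classes] [WSDL file or URL]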

Note that in this end-to-end scenario the service is exposed as a SOAP Web Service. It is simple to expose the same functionality as a RESTful service: you can use JAX-RS annotations instead of JAX-WS annotations to create a RESTful service that exposes geographical data in GML format. See the following example that shows how JPA and JAX-RS can be combined to create RESTful services.
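
To give an impression, here is a minimal sketch of such a JAX-RS resource (the ResidenceService delegate and the URL layout are hypothetical; the Residence entity is the one from the ORM layer above):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("residences")
public class ResidenceResource {

    // hypothetical delegate to the business logic / JPA layer
    private final ResidenceService residenceService = new ResidenceService();

    // GET /residences/{id} returns the residence, with its geometry
    // rendered as GML, as application/xml
    @GET
    @Path("{id}")
    @Produces("application/xml")
    public Residence getResidence(@PathParam("id") long id) {
        return residenceService.findById(id);
    }
}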

Business logic layer

This layer, among others, provides the logic to map between the JAXB generated classes for the GML elements and the JGeometry objects.

For the conversion from JGeometry objects to JAXB generated classes for GML elements this involves:

  • Use the static methods of the oracle.spatial.util.GML3 class to generate a String containing the textual GML representation of the geographical object;
  • Unmarshal the GML String into the JAXB generated classes.

This is shown in the following code snippet:

JAXBContext jaxbContext = JAXBContext.newInstance("net.opengis.gml");
Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
// generate the textual GML representation of the geographical object
String gml = GML3.to_GML3Geometry(jGeometry);
// unmarshal the GML String into the JAXB generated classes
ByteArrayInputStream bais = new ByteArrayInputStream(gml.getBytes());
JAXBElement jaxbGeometry = (JAXBElement) unmarshaller.unmarshal(bais);

The GML3 class and supporting code can also be found in the sdoapi.jar and sdoutl.jar files.

For the conversion from JAXB generated classes for GML elements to JGeometry objects you need to retrieve the geographical data from the GML elements and use the static methods of the JGeometry class to instantiate the proper JGeometry object. For example:

JGeometry geometry = JGeometry.createLinearPolygon(coordinates, spatialDimension, spatialReference);
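
With concrete values this might look as follows (a sketch; the coordinates are made up, and 28992 is the Dutch reference system mentioned in part 1):

// four 2D points forming a closed ring (first point equals last point)
double[] coordinates = {82862.708, 436122.616, 82900.0, 436122.616,
                        82900.0, 436150.0, 82862.708, 436122.616};
int spatialDimension = 2;
int spatialReference = 28992; // EPSG::28992 (Amersfoort-based reference system)
JGeometry geometry = JGeometry.createLinearPolygon(coordinates, spatialDimension, spatialReference);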

Read about the deployment specifics for this geometry-based Web Service on Oracle WebLogic Server in part 3 of this blog.

Developing geometry-based Web Services for WebLogic | Part 1

In a recent project we developed Web Services that expose geographical data in their operations. This blog explains the use case for the service, gives an overview of the software architecture, and briefly discusses GML as markup language for geographical data. Part 2 of this blog provides pointers on the implementation of the service while part 3 discusses the deployment on Oracle WebLogic Server.

Use Case

The "BAG" (Basisregistratie Adressen en Gebouwen) is a Dutch national database containing information on all addresses and buildings in the Netherlands, and is maintained by Dutch municipalities. For several object types the BAG also maintains the associated geographical location and shape; for example for premises and cities.

Often organizations have their own GIS/GEO systems and databases for analysis and decision-making purposes. In this particular project we created Web Services to retrieve and modify the geographical data in these GIS/GEO systems based on updates from the national BAG database.

There are of course numerous use cases for geo-based services such as creating mashups with maps and viewers, and offering geo-based public APIs for your customers to integrate with.

Software Architecture

The following figure shows an overview of the software architecture for the Web Services we developed.

Overview of the software architecture for the Web Services 


These services consist of the following components:

  • Database schema containing the geographical, and related administrative data. The schema is located in an Oracle RDBMS 11g Enterprise Edition instance. The geometry is stored as SDO_GEOMETRY type; an Oracle spatial object type that provides methods for storing, accessing, and altering geographical attributes.
  • Object-Relational Mapping (ORM) layer that provides access to the geographical data by converting SDO_GEOMETRY database object types into JGeometry Java objects. The persistence layer is implemented using the JPA (Java Persistence API) standard with EclipseLink as persistence provider. JGeometry is a Java class provided by Oracle as part of the Oracle Spatial Java functionality.
  • Web Service layer that exposes the service and its operations to consumers using SOAP. The operations expose geographical elements as GML (Geography Markup Language) data types. GML is an XML grammar for expressing geographical features that is maintained by the Open Geospatial Consortium (OGC). The Web Service layer is implemented using JAX-WS and JAXB.
  • Business logic layer that contains Java logic for validation and transformation. As part of the transformation logic, this component converts GML elements into JGeometry objects and vice versa.

GML

The following XML snippet shows an example of a GML element. Such elements are part of the input and output of the Web Service operations. In this case, the element denotes a single polygon containing a hole. The polygon is defined by the exterior element, and the hole is defined by the interior element.

<gml:Polygon srsName="urn:ogc:def:crs:EPSG::28992" xmlns:gml="http://www.opengis.net/gml">
  <gml:exterior>
    <gml:LinearRing>
      <gml:posList srsDimension="2">82862.708 436122.616 ... (more coordinates) ... 82862.708 436122.616</gml:posList>
    </gml:LinearRing>
  </gml:exterior>
  <gml:interior>
    <gml:LinearRing>
      <gml:posList srsDimension="2">82832.967 436145.273 ... (more coordinates) ... 82832.967 436145.273</gml:posList>
    </gml:LinearRing>
  </gml:interior>
</gml:Polygon>

Note the following:

  • There are several versions of GML available. The accompanying XML Schemas for GML can be found on the OGC website.
  • GML doesn't define a default Coordinate Reference System (CRS) from which the absolute position of the defined GML elements can be determined. Instead you have to define which coordinate reference system should be used. This is done by specifying the srsName attribute. In the above example, the EPSG::28992 (European Petroleum Survey Group) reference system is used, which has the Dutch city of Amersfoort as reference for GML elements. You can find several reference systems at http://spatialreference.org and http://www.epsg-registry.org
  • You need to define the dimension of the GML elements using the srsDimension attribute. In the above example all elements are two-dimensional.
  • GML defines several geographical types. In our use case, a residence or municipality is represented by a point, simple polygon, or multi surface. A multi surface can include multiple polygons, for example when defining two buildings that are not attached but together form one residence. The available geographical types for GML can be found on the OGC website as well.

Part 2 of this blog will dive into some interesting implementation aspects of the Web Service.

Wednesday, July 24, 2013

Supporting multiple ebMS service versions in Oracle B2B

In recent years I've been involved in different Oracle B2B implementations and troubleshooting projects based on the ebMS and AS2 protocols. Not so long ago, I came across a project that needed to support multiple versions of an ebMS service. Versioning is quite straightforward for SOAP and RESTful Web Services, using e.g. different namespaces, version indicators in endpoint locations, and so on. However, it is a bit different for ebMS-based services implemented in a B2B gateway.

Case

ebMS is one of the protocols supported by the Dutch government standard DigiKoppeling for exchanging information between governments such as municipalities, provinces, and departments. In this particular case, the existing integration flow that needed to support a new version of an ebMS service is as follows:



OLO (Omgevingsloket Online) is a government solution where companies and citizens can apply for permits. OLO routes those permit requests, using the ebMS protocol, to the applicable government agency based on the type of permit. In this case Oracle B2B is used by one of these government agencies to process these permit requests. B2B transforms and routes the inbound messages to Oracle Service Bus using JMS, while OSB forwards the requests to a backend application by invoking a SOAP Web Service. Vice versa, outbound messages are transformed by B2B to ebMS and sent to OLO. You can read more about this case in a presentation I gave at OUGN.

Problem

OLO offers different versions of its service, among others the StUF LVO 3.05 and StUF LVO 3.11 version. The challenge was to add support for StUF LVO 3.11 while maintaining support for StUF LVO 3.05. In OSB we simply added a new service according to the StUF LVO 3.11 specification, alongside the existing service that processes 3.05 messages. The next section describes the solution for the B2B configuration.

Solution

The first step is to define a new "Document Protocol Version" of type "Custom" in Oracle B2B and add the 3.11 "Document Types" to it. For StUF LVO 3.05 we had already created a Document Protocol Version containing all of the 3.05 document types.



Next we add the 3.11 document types to the two existing trading partners: OLO and the government agency that processes the permit requests. We also specify whether the document can be sent or received by that particular trading partner.



To complete the B2B configuration we define the agreements. Here we can make sure that service versioning is supported:

Inbound messages from OLO to B2B to OSB

  • ebMS requires a design-time contract between the service provider and the service consumer. This contract is called a CPA (Collaboration Protocol Agreement) and is derived from the parties' CPPs (Collaboration Protocol Profiles). For the new version of the service that supports 3.11, a new CPA was created. This CPA has a different CPA identifier than the CPA we created for the 3.05 version. By assigning this new CPA identifier to the newly added agreements using the "Agreement Id" attribute, B2B can distinguish between 3.05 and 3.11 messages.
  • Messages from B2B to OSB are delivered on a JMS queue using a partner channel that is defined for the government agency in B2B. Here we define a new JMS channel that points to a different queue, for example the StUF_LVO_311_Queue. The new OSB service that processes 3.11 messages is listening on this queue, while the existing service processes messages from the 3.05 queue.

Outbound messages from OSB to B2B to OLO
OSB can integrate with B2B by setting specific JMS headers that B2B uses to correlate JMS messages to specific agreements. One of these headers is the DOCTYPE_REVISION attribute. In the new OSB service, we can simply set the header value to the Document Protocol Version that was configured for the 3.11 messages. Now B2B receives both 3.05 and 3.11 messages but is able to differentiate between them using the header properties.
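
To make this concrete, here is a sketch in plain Java JMS (instead of OSB) of how such headers are set. The queue, payload, and document type name are hypothetical; DOCTYPE_NAME and DOCTYPE_REVISION are the properties Oracle B2B uses for correlation:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class B2BJmsSketch {
    public static void send(ConnectionFactory factory, Queue queue, String payload) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(payload);
            // B2B matches the message to an agreement using these properties;
            // DOCTYPE_REVISION selects the 3.11 Document Protocol Version
            message.setStringProperty("DOCTYPE_NAME", "aanvraag"); // hypothetical document type
            message.setStringProperty("DOCTYPE_REVISION", "3.11");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}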

Summary

I'm not the biggest fan of ebMS, but if you need support for it then Oracle B2B is a good product. The ebMS protocol is a point-to-point protocol built on top of SOAP. Due to its point-to-point nature and complexity, the ebMS protocol isn't as flexible and doesn't promote reuse the way that "plain" SOAP Web Services and RESTful Web Services do.

However, if ebMS is a constraint in your project, then Oracle B2B offers a good out-of-the-box solution that helps you implement ebMS-based integrations that also support versioning.

Sunday, June 16, 2013

SOA Black Belt Workshop, Day 4: Solution Areas

Sadly, Friday was the last day of the black belt training. The topic of today was solution areas: Simone explained the architecture of OSB, and Rajesh talked about Cloud integration.

OSB Architecture: transports inside out

We have used both OSB and SOA Suite in our projects. Technically, the OSB is part of the SOA Suite. However, the use cases for both are very different. The OSB is a layer to virtualize your services. It offers validation, enrichment, transformation, routing and operation (VETRO) functionality. As Simone emphasized a couple of times, you are not supposed to program business logic in the OSB. You do that in SOA Suite. We discussed what a proper name is for the part of SOA Suite without OSB so you can point to that in your design guidelines. Unfortunately, we did not come up with a satisfying solution.
The presentation briefly discussed the high-level architecture and the core components. This was a bit boring if you had any experience with OSB. I was very pleased when the presentation continued and the architecture of OSB was discussed in depth. The architecture of transports was explained, as well as some functional differences between some transports and adapters. This was exactly the kind of information I would expect from a training like this :)
Icing on the cake would have been if the lab had included creating your own transport, or something that showed the MDB that is deployed when you create a JMS transport. Unfortunately the lab was fairly basic and focussed on creating a message flow. It showed all the steps in detail, like many of the hands-on labs that I have attended at Oracle OpenWorld. Fortunately the other labs during this training were formulated as a specific problem to solve, with little or no hints for the solution. This way you really think about it, instead of just following the steps.
Apart from the architecture of the transports, I took a few new things out of this session: local transports for proxies calling proxies, and JEJB transports that use POJOs instead of XML.

Cloud integration: fine grained APIs are ruling

The presentation about cloud integration focussed on the challenges one encounters when integrating with cloud solutions. The examples of RightNow, Fusion Apps, and Salesforce.com were used to discuss API styles, the lack of a consistent set of standards, governance, security, and scalability. Interestingly, patterns you might want to use internally for performance reasons (synchronous communication) can be a bad idea in cloud situations because of the unreliability of internet connections, so in cloud integration you might apply asynchronous solutions much more often to cater for that. The content of the presentation was good, but even better was the experience that Rajesh brought in while explaining it. We discussed why packaged apps offer fine-grained APIs instead of coarse-grained services. It is therefore important that you encapsulate these APIs in your enterprise and offer coarse-grained services to the other service consumers. You can use Oracle SOA Suite to accomplish this.

The day ended with another performance lab, showing the difference between latency and throughput and the effect asynchronous communication has on latency.

And so it ended

This concludes the last blog about the SOA Black Belt training. For me it was definitely worthwhile: I learned a lot, practiced some of it, and met great new people.

If anybody is interested in participating, they should contact Jürgen Kress. I definitely recommend this training!


Friday, June 14, 2013

SOA Black Belt Workshop, Day 3: Architecture Internals

The topic of today was architecture internals.

Adapters: added value 

We were supposed to start with fault handling, but the adapter session by Niall was moved up. He is a very good presenter, but unfortunately part of the slide deck was the usual marketing mumbo jumbo about how easy it is to integrate with any system. Since this day was about architecture internals, I would have preferred a discussion on whether you want to use an adapter or build a 'proper' web service in Java. The lab that accompanied this session was fun: we needed to fix adapters that had been configured beforehand. It was very much like what happens in real life.

Fault handling: expect the unexpected

The session on fault handling did not cover anything new as far as I am concerned; there is a lot of material out there that covers this topic. A lot of emphasis was placed on transaction management. This makes sense in the context of fault handling, but it made the whole thing a bit repetitive. I would have preferred a shorter presentation about the different fault types, and then a lab where we would actually build complicated fault handling scenarios (catchAll, fault policies, rolling back transactions).

Security: we all have our role

Flavius explained the OPSS framework, virtual directories, and how you can manage application roles in Enterprise Manager. This is particularly relevant in a human task service. A lot of people I know use groups in LDAP for that, which makes the groups way too fine-grained to be manageable. The other feature I was not aware of was using a lightweight OVD by turning on 'virtual' in the WebLogic console.


Performance tuning essentials: a journey through the database 

Niall then traced all the database inserts and updates for a simple BPM process. It was interesting, but the title was misleading. The important part to remember is that every design decision you make in your composite as a developer will result in writes to the database. Performance is not always the key requirement, but it is something to take into account, of course.

Fusion Apps: OER is the new iRep

Niall showed us the Oracle Enterprise Repository for Fusion Apps. It contains all the business objects, Web Services, and other artefacts that you need to integrate with Fusion Apps. A lot of the integration is still based on an API model rather than services. From a product vendor perspective that makes sense, but as an implementer you need to make sure that you don't expose all these fine-grained services on your Enterprise Service Bus.

Anti patterns are the new patterns

Ravi Sankaran did the last session over the phone. He presented a number of anti patterns, reasons why they occur, and recommendations to follow to avoid them, or to only apply them if needed. The content was interesting, but listening to somebody over the phone in a hot room (27 degrees and humid outside) after three days of training is not the best condition for knowledge transfer. I will have to take a look at the slides as soon as they are available.

The evening

Jürgen organized a boat tour and a really, really nice dinner at a very interesting restaurant, and handed everyone their black belt. It was a great way to talk to some new people and see something of the city at the same time.

Tomorrow is the last day; then it is back to reality again...

Wednesday, June 12, 2013

SOA Black Belt Workshop, Day 2: Engine Internals

The second day of the workshop took a deep dive into EDN and two service engines: the Mediator and BPEL.


Mediator Service Engine: parallel routing and the resequencer 

This was very interesting because the threading and transactions within the parallel routing rules were explained. I like to use the Mediator in composites, but had never encountered the need for parallel routing rules. The explanation helps, especially when you look at performance issues in your composite.
The storyline about the Fabric was continued nicely in the discussion about the Mediator component: Simone explained which operations are implemented by the service engine. On top of that she told us what types of threads are active when handling parallel routing rules.
Next up was the resequencer. It is a very powerful mechanism to put messages that are out of order back in sequence. The only caveat is that it is sometimes really difficult to determine the right order. Unfortunately this can't easily be solved by a tool ;)

EDN: making governance easier

Business events are a concept that is well known to E-Business Suite users. This concept has been used in SOA Suite too. It abstracts away the implementation of the events (JMS or AQ) from the developer and, most importantly, makes governance of events easier, because you can keep track of the events you can subscribe to, and of the subscribers to events. This feature is applied in Fusion Apps and OER. During this presentation I learned a number of new things: the AQ implementation has a Java API that allows you to publish business events, and there is an extra queue for once-and-only-once delivery (and it makes sense ;) ).

BPEL Service Engine: the Cube engine and idempotent invokes at Starbucks 

I know the BPEL engine fairly well after doing work in both BPEL 10g and BPEL 11g, so I was afraid that this part would be a bit boring for me. Fortunately, it started with a section about the Cube engine and its components. It validated the mental model in my mind and added a lot of information to it. An interesting detail about BPEL 2.0 is that it generates the runtime model in memory during load(). Some things about the BPEL engine were new to me: the possibility to set an invoke to non-idempotent (causing the engine to dehydrate the process), and the methods from the Fabric that are implemented by the engine. Last but not least, the function of the two thread types, invoke threads and engine threads, was explained. This helps a great deal in tuning the performance of your BPEL engine!
In the session about correlation Simone told us about a mnemonic from Gregor Hohpe I will never forget: only the customer cares about the correlation id (your name and your drink type). The same is true for your BPEL process: whenever you implement correlation, remember where to define it by realizing that only the 'waiting' process needs to define correlation sets, like the customer at Starbucks. The barista couldn't care less; he or she only cares about preparing the drink!

We did not have enough time for all the labs; I am going to do the resequencer lab as soon as I publish this post. However, the correlation lab and the performance lab were a lot of fun, and especially the performance lab was challenging!

One thing is for sure: after today I know a lot more about the database structures and the threading models that are used by the service engines. This will make tuning a lot easier. Again, I can't wait to see what tomorrow will bring...

Tuesday, June 11, 2013

SOA Black Belt Workshop, Day 1: Infrastructure Essentials

Today was the first day of the SOA Black Belt Workshop that is organized by Jürgen Kress and is delivered by people from Product Management: Flavius Sana, Simone Geib and Rajesh Raheja.

The theme of the day was infrastructure essentials. There were four topics. I took away the following points:

WebLogic Server essentials for SOA: CAT, Errors and Exceptions

Flavius explained how class loading works in WebLogic. As a JEE developer I found this pretty straightforward. However, I still learned two new things:
  • Using CAT, the WebLogic Server class loader analysis tool, to analyze class loading on WebLogic (http://localhost:7001/wls-cat/)
  • The difference between a NoClassDefFoundError and a ClassNotFoundException. Knowing the difference helps in trying to solve class loading problems. The ClassNotFoundException is an exception from a class loader that tries to load a class that is not available to that specific class loader. The NoClassDefFoundError is an error thrown because a class that was there during compilation cannot be found by another class at runtime. There was an interesting lab to show the differences; a minimal code sketch follows after the tip below. 
Tip for the organizers: it would have helped my understanding a lot if you had drawn a picture that showed in which library or WAR file each class resided. Because of the name I incorrectly assumed that CLTestHelperInner was in the same JAR as the CLTestHelper class, and it was unclear why the NoClassDefFoundError was thrown. 
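
A minimal sketch of the difference (assuming the helper class below is compiled together with Demo, and its Missing.class file is then deleted before running):

public class Demo {
    public static void main(String[] args) {
        // ClassNotFoundException: a checked exception, thrown when a class
        // loader is explicitly asked for a class it cannot locate
        try {
            Class.forName("com.example.DoesNotExist");
        } catch (ClassNotFoundException e) {
            System.out.println("explicit load failed: " + e.getMessage());
        }

        // NoClassDefFoundError: Demo was compiled against Missing, but if
        // Missing.class is removed after compilation, the JVM throws an
        // Error (not a catchable checked exception) when this line runs
        new Missing().hello();
    }
}

class Missing {
    void hello() {
        System.out.println("Missing is still on the classpath");
    }
}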

Another interesting topic was the server architecture, including WorkManagers in WebLogic. Unfortunately there was not a lot of time spent on this topic and there were no labs about it either.

SOA Composite essentials: what will 12c bring us?

This was an overview of what a composite consists of. As far as I am concerned this could have been skipped: we were all supposed to have prior knowledge and experience with SOA Suite. The advantage of covering the subject was that we got to ask Simone a bunch of questions about plans for SOA Suite 12c :D

SOA infrastructure essentials: the stack trace, the soa bundle and the reference endpoint

The first part of this presentation covered some basic stuff, like SCA and Spring. Then it got really interesting: Rajesh showed us the Spring configuration of the Fabric (Service Infrastructure) and the message routing that occurs when a request is delivered to the Service Infrastructure. It makes the stack traces that you find in SOA Suite a lot more readable and usable!

In terms of deployment, the one new thing I learned was to combine a MAR (MDS archive) and a SAR (an SCA archive, or JAR) into one ZIP file: a SOA bundle. Usually I deploy the MDS artefacts separately from my composites. The advantage of the SOA bundle approach is obviously that your MDS artefacts don't get out of sync with your composites. The downside is that you create new versions of the documents in MDS that are part of the application, even if you did not change these artefacts.

Tip from me: use deployment scripts rather than deploying from JDeveloper or Enterprise Manager, and don't go crazy with the SOA bundle option: you don't want to redeploy everything in your workspace every time.

Last but not least, he explained that you can only override endpoints of WSDLs that are local to the project (or composite). So overriding the endpoint in Enterprise Manager for a reference that has a remote WSDL location won't work. Unfortunately you can't see this in Enterprise Manager; the only way to know is to look in the composite.xml and inspect the location of the WSDL. This was shown in the lab. This lab did not add a lot of value in my opinion and could have been skipped.

MDS Essentials: tell me more!

This was a very basic overview. Unfortunately it did not cover the data structures, purging, and other topics that I was expecting from an advanced class like this. The lab showed some basic stuff like creating connections to MDS and deploying the SOA bundle from JDeveloper. And again we talked about the upcoming version of SOA Suite 12c :D

All in all a very interesting day with interesting people. I can't wait to see what tomorrow will bring!