Tuesday, December 24, 2013

WebLogic Hackathon Pics & Vids

Here are some pictures and videos from the UKOUG Tech 13 WebLogic Hackathon to wrap up my previous posts on this event: introduction and resources. The videos were organized and hosted by Bob Rhubart who is manager of the architect community at Oracle Technology Network.







Preparation of the hackathon








Participants pouring in




Presentations and introduction of the labs







Labs in progress








And the winners of the WebLogic Hackathon!



Sunday, December 22, 2013

Hands-on lab material available for WebLogic provisioning using Puppet!

At the recent UKOUG Tech 2013 conference we organized a Super Sunday WebLogic Hackathon session. Part of this session was a hands-on lab in which attendees could automatically provision WebLogic Server using Puppet. Several people asked if the materials of this lab could be published so they could do the hands-on lab themselves.

This blog post contains links to the materials and a five-step plan to get you up and running with the lab. Running and using the lab and VMs is of course at your own risk.

Last but not least, credit goes to Simon Haslam, Peter Lorenzen, and Guido Schmutz for organizing and assisting in the WebLogic Hackathon!

From R-L: Simon Haslam, Peter Lorenzen, Guido Schmutz, Ronald van Luttikhuizen

1. Setup

The setup for the lab is shown in the following figure. Read my previous blog for more info.


2. Introduction to middleware provisioning using Puppet and Chef

The following presentation introduces middleware provisioning using Chef and Puppet. It shows where middleware provisioning fits in the entire process of software delivery, and what benefits the automation of provisioning offers. Finally, the presentation introduces the Hands-on-Lab.


3. Download the VMs

The lab uses a separate VM that acts as Puppet Master and another VM that acts as Puppet Agent. When you run the Hands-on-Lab you should at least have the Puppet Master VM running. You can add an arbitrary number of Puppet Agent VMs. The VMs can be run using Oracle VM VirtualBox.

You can download the VMs from the following locations:

4. Follow the lab instructions & enjoy :-)

The following presentation contains a step-by-step tutorial for completing the lab.


5. Want to know more?

I strongly recommend the following book if you want to know more about provisioning and configuration management using Puppet:

Puppet 3 Beginner's Guide from Packt by John Arundel


Tuesday, November 26, 2013

UKOUG Tech 2013 WebLogic Hackathon - Server Provisioning using Puppet

Last week Peter Lorenzen blogged about the Super Sunday event at the upcoming UKOUG 2013 Tech conference. One of the streams for that day is the WebLogic Hackathon. This stream is presented by an international lineup consisting of Simon Haslam, Peter Lorenzen, Jacco Landlust, Guido Schmutz, and Ronald van Luttikhuizen.

Peter has prepared a lab where participants can perform a scripted installation and configuration of Oracle WebLogic Server 12c. I've prepared a follow-up lab in which we will do a similar installation and configuration, only this time fully automated using Puppet.

Puppet is a tool to automate configuration management. Together with Chef it's one of the more popular configuration management tools at the moment. Puppet allows you to describe the desired (to-be) state of your servers by declaring resources. These declarations can describe user accounts, security settings, packages, directories, files, executable statements, services, and so on. Manifests are the files in which resource declarations are listed. Puppet periodically applies manifests by compiling them into catalogs that are then applied on the managed servers. Puppet inspects the machines so it only applies the changes that are necessary; if a machine is already in the desired state, Puppet makes no changes.

The following example shows a simple manifest that can be used to install and configure the Network Time Protocol (NTP) service. The manifest declares that the "ntp" package needs to be present, that the ntp configuration file is copied to the right location, and that the ntp service is running. A change in the configuration file will restart the service.

package { 'ntp':
  ensure => present,
}

file { '/etc/ntp.conf':
  owner   => 'root',
  group   => 'root',
  mode    => '0444',
  source  => 'puppet:///files/etc/ntp.conf',
  require => Package['ntp'],
}

service { 'ntpd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'],
}

Compared to manual installation and configuration (artisan server crafting), using a configuration management tool to automate server management has the following benefits:

  • You eliminate tedious and repetitive work since you only need to write a manifest once and can apply it to as many servers as you want;
  • Puppet manifests are defined in a machine- and OS-independent domain language so the manifests are portable and can be reused;
  • You can keep servers in sync and you know what is running on which server;
  • Manifests can be used as documentation: since manifests are applied by Puppet the documentation is always up-to-date;
  • Manifests can be version controlled and managed the same way you manage other code.

Puppet can be configured to run in a Master/Agent mode, meaning that there is a central Puppet instance (the Master) that coordinates the server management. Servers on which a Puppet Agent runs pull their catalog from the Master; the Puppet Master decides what goes into each catalog. Participants of the WebLogic Hackathon event will be divided into groups of three, in which one acts as Puppet Master and two act as Agents. This setup is shown in the following figure:
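In a Master/Agent setup, each agent is pointed at its master in puppet.conf. A minimal sketch of that wiring (the hostname and file path here are illustrative assumptions, not part of the lab):

```
# /etc/puppet/puppet.conf on an agent machine (Puppet 3)
[agent]
server = puppetmaster.example.com   # hostname of the Puppet Master (hypothetical)
```

With this in place, a run can be triggered manually with puppet agent --test, which fetches the catalog from the master and applies it.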



So, sign up for the WebLogic Hackathon at UKOUG 2013 Tech and join us for this cool hands-on lab!

If you want to know more about Oracle WebLogic Server please visit Oracle Technology Network. If you want to know more about Puppet I strongly recommend the Puppet 3 Beginner's Guide by John Arundel. Also see Edwin Biemond's Oracle Puppet modules on Puppet Forge.

Tuesday, October 22, 2013

Nordic OTN Tour 2013

This year I am part of the team that is presenting at the Nordic OTN Tour 2013. It covers three countries: Sweden, Denmark, and Norway, and is organized by the local user groups. Tim Hall, Mike Dietrich, Sten Vesterli, and I are presenting on Database and Middleware in all countries.

Sweden, October 22nd

Today we presented in Stockholm. The program can be found on their website.

It was an interesting day, both from a Middleware perspective and from a Database perspective. The user group decided to plan two parallel tracks: the sessions about Middleware in the morning and the Database sessions in the afternoon. Because of this, the Middleware sessions were competing with each other, as were the Database sessions. From that perspective it would have been nicer to have a Middleware track and a Database track running in parallel. The advantage of the Orcan approach, however, is that Database people will attend Middleware sessions that they might otherwise have skipped.




Denmark, October 23rd

The programs are not exactly the same in all three countries. In Denmark there are three parallel tracks, and instead of talking about using Oracle Fusion Middleware 11g to realize a SOA, I will talk about Oracle BPM Suite. The program can be found on the website of the Danish user group. Apart from the people who were at the Swedish day, there are a number of other speakers in Copenhagen:

  • Rasmus Knappe
  • Jørgen Christian Olsen
  • Gordon Flemming
  • Lars Bo Vanting 



Norway, October 24th

The last day is in Oslo. In Norway there are two parallel tracks, like in Sweden. I will do the same presentations: Overview of Oracle SOA Suite 11g and Creating SOA with Oracle Fusion Middleware 11g.

The same team is doing the presentations. In addition there are a few Norwegian speakers as well:

  • Trond Brenna
  • Harald Eri and Bjørn Dag Johansen
The agenda is published here.

All in all a very interesting tour. I look forward to meeting the different user groups, spending time with the other ACE Directors, and talking about Oracle stuff all day ;)


Wednesday, September 25, 2013

OpenWorld and JavaOne 2013 so far! - Part II

In part I of this blog you read about the numbers of OpenWorld and our activities at OpenWorld and JavaOne. So what about the news? Almost all new products and further development of existing products are centered around the support of multi-tenancy, cloud-based computing, and the processing and analyzing of data and events from the billions of devices connected to the Internet (Internet of Things). Furthermore the traction around User Experience keeps growing.


Some concrete news that showcases this:

Oracle announced its in-memory database feature for 12c. This feature speeds up both OLTP and OLAP by storing and accessing data in both row and column format. Enabling the in-memory option for tables and partitions seems pretty simple based on the keynote demo. The in-memory option is transparent and works out-of-the-box: application code doesn't need to be rewritten, and features such as those offered by RAC remain intact. Together with the in-memory database, Larry announced the "Big Memory Machine": a new box called the M6-32 with lots of cores and RAM, aimed at running in-memory databases. Finally, the Oracle Database Backup Logging Recovery Appliance was launched: an appliance specifically designed for protecting databases.

There was an update on the developments in the Fusion Middleware product stack. Service Bus, SOA Suite, BPM Suite, and so on are being enhanced for mobile computing and cloud integration. Among other things, this means better support for REST and JSON in SOA Suite, new Cloud Adapters to integrate with third parties such as Salesforce, and ADF Mobile to build code once and run it on various mobile platforms. WebLogic 12c supports WebSockets and better integration with the multi-tenant Oracle Database 12c. The BPM Suite ships with prepackaged processes (accelerators) that can be used as blueprints for several processes. BPM Suite is also being extended to support case management.

Coherence GoldenGate HotCache was presented. When using caching there is always the risk of stale data when data is modified outside the caching layer, e.g. directly in the data source. A popular caching solution is Coherence. The new Coherence GoldenGate HotCache feature enables you to push data updates from the database to the Coherence layer, instead of invalidating the cache or periodically refreshing it from the database.

There was also news at JavaOne. Java ME and Java SE are being converged, in terms of both language and APIs, in JDK 8 and onwards. This means that developers can use common tooling, that code compatibility increases, and that only one skillset is needed. This will make it easier for Java developers to create code for the "Internet of Things". Both Java SE 8 and ME 8 are available as Early Access downloads. JDK 8 is scheduled to be released somewhere in the spring of 2014. Important new features include lambda expressions (JSR 335), Nashorn (a JavaScript engine), Compact Profiles, the Date and Time API (JSR 310), and several security enhancements. Lambda expressions are one of the biggest changes in the history of the Java language, making it easier to write clean code and avoid boilerplate.
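As a small illustrative sketch of the boilerplate reduction (my own example, not from the conference), compare the same comparison written as a pre-JDK 8 anonymous inner class and as a JDK 8 lambda:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LambdaDemo {

    // Sorts names by length using a lambda expression (JSR 335)
    static List<String> sortByLength(List<String> names) {
        Collections.sort(names, (a, b) -> Integer.compare(a.length(), b.length()));
        return names;
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("Ronald", "Bob", "Simon"));

        // Pre-JDK 8 style: an anonymous inner class just to express one comparison
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });
        System.out.println(names); // prints [Bob, Simon, Ronald]

        // JDK 8 style: the same comparison as a one-line lambda
        System.out.println(sortByLength(names)); // prints [Bob, Simon, Ronald]
    }
}
```

The lambda removes the five lines of Comparator boilerplate while expressing exactly the same comparison.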

This summer the Java EE 7 spec was released. Java EE 7 is aimed at improving developer productivity and offering better support for HTML5 through WebSockets, JSON, and REST. Other new features are the batch APIs and support for the new, simplified JMS standard. GlassFish 4, the reference implementation for Java EE 7, is available for download. Project Avatar, a JavaScript services layer focused on building data services, has been open-sourced.

The Tuesday keynote was all about the Cloud. Oracle is adding 10 new services to the Oracle Cloud, including the Compute Cloud and Documents Cloud. Oracle and Microsoft announced a partnership that makes it possible to run Oracle Database and Oracle WebLogic Server on Oracle Linux on the Microsoft Azure platform.

This is only the tip of the iceberg. Visit the OpenWorld site to watch all the keynotes.

Tuesday, September 24, 2013

OpenWorld and JavaOne 2013 so far! - Part I

It's that time of the year when we are immersed in the JavaOne and Oracle OpenWorld conference in San Francisco. A conference that always leaves a bit of a shock-and-awe effect due to its enormity. In these blogs you can read about our adventures at OpenWorld and the news so far.

Let's talk numbers first. This year there are even more attendees than last year, approximately sixty thousand (of which 0.0033% work for Vennster). There are 2,555 sessions (of which 0.078% are presented by Vennster; the numbers are getting better :-). The attendees come from 145 countries and various continents: 69% from North America, 21% from EMEA, 6% from Japan and Asia Pacific, and 4% from South America.


Activities

Friends and family members sometimes think that OpenWorld means sitting in a park with a cocktail while watching the America's Cup and touring the Golden Gate Bridge. In reality our stay at OpenWorld is jam-packed with events besides attending presentations, going to the demo grounds to discuss the latest products and features, and going to the hands-on labs to keep up to date with all of the latest developments:

  • ACE Director briefing: 2 days filled with the latest updates by product management at Oracle HQ
  • Attending Partner Advisory Councils for SOA & BPM, and WebLogic & Cloud Application Framework (CAF)
  • Meeting old friends and new and interesting people at the OTN Lounge and ACE dinner
  • Participating in OTN vidcasts and podcasts
  • Social day and reception with the SOA, BPM, and WebLogic community and Oracle product management hosted by Jürgen Kress
  • Meetups such as the BeNeLux (Belgium, Netherlands and Luxembourg) architects meetup
  • Meetings with customers and partners
  • Presenting two sessions on UX design patterns and a live development session for Fusion Middleware


The upcoming blog will talk about some of the exciting news here at OpenWorld and JavaOne.

Sunday, September 22, 2013

Case Management - Part 1

I have been using BPMN in projects for a while now. It is very useful in describing the predefined sequence of actions that need to be taken in a specific process.

With BPMN 2.0 we have the option to support the process with IT using a Business Process Management System (BPMS), without having to rewrite the process definition. Oracle BPM Suite is an example of such a BPMS. You take the BPMN 2.0 process definition and implement the activities and events using Oracle SOA Suite. This is a very powerful concept and works well in certain situations. However, not every process actually follows a limited, predetermined sequence of events. A lot of process management or workflow projects fail because the BPM approach is too restrictive or because the system does not support all the possible scenarios that apply in reality.

BPMN to model non-deterministic processes

So how do we model processes or actions that don't follow a predefined sequence? BPMN offers ad-hoc sub processes. These can be used to depict a number of activities that can be executed in any order, any number of times, until a certain condition is met. The notation for this is depicted below.
ad hoc sub process
However, none of the BPMS systems I use (Oracle BPM Studio, Activiti) supports this construct. You can see this in the screenshot from Oracle BPM below. Sub processes are supported, Event Sub Processes are supported, and the creation of ad-hoc tasks by a user or the system at runtime is supported, but ad-hoc sub processes as defined in BPMN 2.0 are not.

Activities supported by Oracle BPM
There are workarounds for that, obviously. You can model a process that offers a choice (gateway) every time a new step is taken and loops around until a certain condition is met. This results in very complex process models that are hard to read and understand, and therefore 'breaks' the idea of having a BPMN 2.0 process that is used both by the business as documentation and by IT for implementing it directly in your BPMS. Besides, it does not take into account the fact that you want to be able to add new activities on the fly, during the execution of your process.

Using Case Management Model and Notation 

The OMG has defined a new specification, the Case Management Model and Notation (CMMN), that can be used to model non-deterministic processes. It is currently in beta. A case is defined as: "(...) a proceeding that involves actions taken regarding a subject in a particular situation to achieve a desired outcome." A case has actions, just like a process. The difference is that you don't have to know in advance what the sequence of these actions is; it can be completely ad-hoc. But as the specification continues: "(...) As experience grows in resolving similar cases over time, a set of common practices and responses can be defined for managing cases in more rigorous and repeatable manner." Hence the term Case Management.

The specification describes the following:

  1. Case Management Elements. A semantic description of elements like Case Model Elements, Information Model elements and Plan Model Elements. 
  2. Notation. This describes the notation of elements like Case, Case Plan Models, Case File Items, Stages, Tasks, Milestones, etc. 
  3. Execution semantics that specify the lifecycle of a Case instance, Case File Items, Case Plan Items, Behavior property rules etc. 
Below is an example model of the case "Write Document".  


Now let's compare this model to BPMN. 
  • The model does not show roles, even though the specification states that a case has roles associated with it. In BPMN there is the concept of swim lanes to depict roles. We lose that information when modelling with CMMN. 
  • In BPMN we show messages to model interaction with other 'process pools'. This gives a good overview of dependencies on the outside world. In CMMN we can define a task that calls another case or another process and there are event listeners in the model. The origin of the events are Case File Items, User tasks or timer events. There is no way to depict interaction with the outside world.
  • In BPMN you can model different gateways, including parallel gateways and exclusive-or gateways. In CMMN you cannot model the fact that stages can occur in parallel in a case. 
  • In BPMN you cannot specify that ad-hoc tasks may be defined at runtime. The same is true for CMMN: you can model a task as a discretionary task, but there is no way to model the fact that certain users may, or may not, define a task at runtime in a certain stage.
  • In BPMN you can't model the content of criteria, or expressions for a certain condition. As you can see that is true for CMMN as well. There is no way to model the rules that determine when a certain criterion is met or when a certain milestone is reached. This has to do with the fact that the information (data) that is part of the case is not modelled. 
  • In BPMN you model data items. In CMMN you model Case File Items. This is a similar concept.
  • In BPMN you model activities, or tasks. In CMMN you model tasks. 
  • In BPMN you model a sequence of activities. In CMMN you model entry and exit conditions and events that cause the conditions to be met. This reflects the fact that BPMN is about a predefined sequence of activities and a case is about a set of actions that need to be executed to reach a certain end result. Unfortunately the desired end result is not modelled. 
Looking at the specification, all the relevant information is defined in the semantic part of the specification, including the execution semantics. The model is not as rich (yet?) and seems to be lacking crucial information. I guess the only way to find out how bad that is, is to start using it...

Next steps

In part 2 I will model a case (applying for a permit) using different notations. Part 3 will show how case management is supported in Oracle BPM Suite using the case component.