This previous blog explained why it is a good idea to address -and handle- business faults separately from technical errors. It also introduced a mechanism used in real-life Oracle SOA Suite 11g projects to deal with technical errors in a generic way, without having to add this functionality to all our SCA composites again and again. Now it is time to dive into the technical implementation of that mechanism and some of its nitty-gritty details.
First things first: How do we get a hold of these technical errors and how can we determine what to do with them?
Oracle SOA Suite 11g offers a unified fault handling framework for SCA composites and their references, service adapters, and components such as BPEL and Mediator. The framework provides hooks you can use to configure fault handling and possibly call out to your own fault handling code. This unified framework is an improvement over the SOA Suite 10g stack, which consisted of less integrated products (ESB, BPEL PM) that each had their own fault handling mechanism. The framework is heavily based on the 10g BPEL PM fault handling framework.
In SOA Suite 11g you configure the fault handling framework on the level of SCA composites using two files: fault-policies.xml and fault-bindings.xml. By default these files need to be in the same directory as the composite.xml file.
Note that you can place these files somewhere else and have multiple SCA composites point to the same fault handling configuration. MDS is a nice candidate since it is a repository for shared artifacts such as reusable XSDs, DVMs, and so on. To do this you set the “oracle.composite.faultPolicyFile” and “oracle.composite.faultBindingFile” properties in the composite.xml files and point them to the fault policy and binding files in the central MDS location.
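As an illustration -the composite name and oramds locations below are example values, not a prescribed layout- such a composite.xml fragment could look like this:
<composite name="MyComposite" xmlns="http://xmlns.oracle.com/sca/1.0">
   <!-- Point the fault handling framework to the shared policy files in MDS -->
   <property name="oracle.composite.faultPolicyFile">oramds:/apps/faulthandling/fault-policies.xml</property>
   <property name="oracle.composite.faultBindingFile">oramds:/apps/faulthandling/fault-bindings.xml</property>
   <!-- services, components, references, and wires omitted -->
</composite>
Whether you use this feature mostly depends on how much the fault handling differs per SCA composite. For now, we will continue with the basic scenario in which we define the fault policy files per SCA composite.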
First of all we will configure the fault-bindings.xml file. This file defines which elements are bound to which fault policy. Elements can be components, references, service adapters, or an entire composite. The actual fault policy that is referred to will be defined later on in the fault-policies.xml file. Since business faults can be dealt with in the process itself, using BPEL activities such as Throw and Catch, we want all remaining (unexpected) faults in the entire composite to be handled the same way.
Let’s say we have a simple SCA composite with an inbound file adapter called “MyInboundFileService” and some other components such as BPEL and Mediator components. Our fault-bindings.xml file could look like the following:
<?xml version="1.0" encoding="UTF-8"?>
<faultPolicyBindings version="2.0.1"
xmlns="http://schemas.oracle.com/bpel/faultpolicy">
<composite faultPolicy="MyCompositeFaultPolicy"/>
</faultPolicyBindings>
In this example we bind fault handling for the entire composite to the -yet to be defined- policy “MyCompositeFaultPolicy”. Instead of the “composite” element you can use the “component” or “reference” elements to apply fault handling on a more granular level.
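For example, binding a dedicated policy to a single BPEL component -the component and policy names here are just placeholders- would look like this:
<component faultPolicy="MyBpelFaultPolicy">
   <name>MyBPELProcess</name>
</component>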
Next we need to define the fault-policies.xml file. This file contains the actual policies and the conditions under which they are executed.
Following the example we define a single policy, “MyCompositeFaultPolicy”. In it we first define the criteria that determine when the policy should be executed: in this case any technical error, more specifically a fault of type “mediatorFault”, “bindingFault”, or “runtimeFault”. Note that you can also define more intelligent, content-based conditions (e.g. based on process instance variables).
<?xml version="1.0"?>
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
<faultPolicy version="2.0.1" id="MyCompositeFaultPolicy">
<Conditions>
<faultName xmlns:medns="http://schemas.oracle.com/mediator/faults"
name="medns:mediatorFault">
<condition>
<action ref="MyFaultPolicyJavaAction"/>
</condition>
</faultName>
<faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
name="bpelx:bindingFault">
<condition>
<action ref="MyFaultPolicyJavaAction"/>
</condition>
</faultName>
<faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
name="bpelx:runtimeFault">
<condition>
<action ref="MyFaultPolicyJavaAction"/>
</condition>
</faultName>
</Conditions>
<Actions>
<Action id="ora-terminate">
<abort/>
</Action>
<Action id="MyFaultPolicyJavaAction">
<javaAction className="nl.vennster.MyFaultPolicyJavaAction"
defaultAction="ora-terminate">
<returnValue value="ora-terminate" ref="ora-terminate"/>
</javaAction>
</Action>
</Actions>
</faultPolicy>
</faultPolicies>
When the error meets any of these criteria the actions within the “Actions” element are executed. Instead of configuring default actions such as abort, retry, or rethrow, we redirect the fault to our own Java class called “MyFaultPolicyJavaAction”. This is allowed as long as the class implements the “IFaultRecoveryJavaClass” interface, which defines the methods “handleFault” and “handleRetrySuccess”. Since the fault may occur within synchronous processes, the fault handling framework needs to know what to do after it delegates the fault to an external piece of code. For this purpose the “handleFault” method returns its outcome as a String, and this outcome should map to a predefined fault action. In our example we abort the process instance after our custom Java class has been executed by returning “ora-terminate”, which is mapped to the abort action through the “returnValue” element. Next to that, Java actions need to define a “defaultAction” attribute for the case in which the outcome cannot be mapped to a predefined action.
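To make this a bit more concrete, a minimal sketch of such a class could look like the following. The package of the fault policy interfaces is the one we use in 11g -verify it against your own installation- and the logging is merely a placeholder for your own fault handling logic:
package nl.vennster;

// The fault policy interfaces ship with SOA Suite 11g; in our environment they
// live in oracle.integration.platform.faultpolicy (check this for your patch set).
import oracle.integration.platform.faultpolicy.IFaultRecoveryContext;
import oracle.integration.platform.faultpolicy.IFaultRecoveryJavaClass;

public class MyFaultPolicyJavaAction implements IFaultRecoveryJavaClass {

    public void handleRetrySuccess(IFaultRecoveryContext ctx) {
        // Invoked when a retry action eventually succeeds; not used in our scenario.
    }

    public String handleFault(IFaultRecoveryContext ctx) {
        // The context gives access to fault and policy details (see IFaultRecoveryContext
        // for the available getters in your version). Here we only log it; in our real
        // implementation this is where the event is published so the generic
        // error-handling composite can pick it up (see Part I of this series).
        System.out.println("Technical fault caught by the fault policy framework: " + ctx);

        // The returned String must match a returnValue mapping in fault-policies.xml;
        // "ora-terminate" maps to the abort action and terminates the faulted instance.
        return "ora-terminate";
    }
}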
For some reason rejected messages need to be configured separately. In other words, such faults remain uncaught when using the above fault handling configuration. An example of a rejected message is an inbound file that cannot be parsed correctly by a File Adapter. To have rejected messages handled we need to include them explicitly, using the exact name of the adapter service or reference. In our case the inbound file adapter is named “MyInboundFileService”. Our fault-bindings.xml file now looks like this:
<?xml version="1.0"?>
<faultPolicyBindings version="2.0.1"
xmlns="http://schemas.oracle.com/bpel/faultpolicy">
<composite faultPolicy="MyCompositeFaultPolicy"/>
<service faultPolicy="RejectedMessages">
<name>MyInboundFileService</name>
</service>
</faultPolicyBindings>
Note that you can add more than one adapter name to the “service” element by listing multiple “name” elements. That way the rejected messages of all these adapters are handled the same way. For instance, you can add “MyOutboundDatabaseService” to the “RejectedMessages” policy too:
<?xml version="1.0" encoding="UTF-8"?>
<faultPolicyBindings version="2.0.1"
xmlns="http://schemas.oracle.com/bpel/faultpolicy">
<composite faultPolicy="MyCompositeFaultPolicy"/>
<service faultPolicy="RejectedMessages">
<name>MyInboundFileService</name>
<name>MyOutboundDatabaseService</name>
</service>
</faultPolicyBindings>
Finally we need to add the “RejectedMessages” fault policy to the fault-policies.xml file so our Java class is executed for rejected messages as well:
<?xml version="1.0"?>
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
<faultPolicy version="2.0.1" id="MyCompositeFaultPolicy">
<Conditions>
<faultName xmlns:medns="http://schemas.oracle.com/mediator/faults"
name="medns:mediatorFault">
<condition>
<action ref="MyFaultPolicyJavaAction"/>
</condition>
</faultName>
<faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
name="bpelx:bindingFault">
<condition>
<action ref="MyFaultPolicyJavaAction"/>
</condition>
</faultName>
<faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
name="bpelx:runtimeFault">
<condition>
<action ref="MyFaultPolicyJavaAction"/>
</condition>
</faultName>
</Conditions>
<Actions>
<Action id="ora-terminate">
<abort/>
</Action>
<Action id="MyFaultPolicyJavaAction">
<javaAction className="nl.vennster.MyFaultPolicyJavaAction"
defaultAction="ora-terminate">
<returnValue value="ora-terminate" ref="ora-terminate"/>
</javaAction>
</Action>
</Actions>
</faultPolicy>
<faultPolicy version="2.0.1" id="RejectedMessages">
<Conditions>
<faultName xmlns:rjm="http://schemas.oracle.com/sca/rejectedmessages">
<condition>
<action ref="MyFaultPolicyJavaAction"/>
</condition>
</faultName>
</Conditions>
<Actions>
<Action id="ora-terminate">
<abort/>
</Action>
<Action id="MyFaultPolicyJavaAction">
<javaAction className="nl.vennster.MyFaultPolicyJavaAction"
defaultAction="ora-terminate">
<returnValue value="ora-terminate" ref="ora-terminate"/>
</javaAction>
</Action>
</Actions>
</faultPolicy>
</faultPolicies>
Read more on fault handling in part III and part IV of this blog series.
Thursday, July 1, 2010
Fault handling in Oracle SOA Suite 11g - Part I
You generally want to differentiate between technical errors and functional faults within your processes and services. Functional faults are faults that have meaning to the business and might be expected. Functional faults, and the handling of these faults, can be part of the process itself. Consider the example of electronic invoice handling in which an invoice is processed with a total amount of $2000 while the organization only approved an amount of $1500. In this scenario we can use a human task to halt this particular process instance and assign it to the finance department. An employee of the finance department acquires the task and investigates the issue. He or she may conclude that the client sending the invoice was mistaken, that the invoice approval was not entered correctly in our backend IT systems, or that someone put a coffee mug on the invoice and hence the amount was wrongly interpreted by our scanning and OCR software. In any case, after this human intervention the process can continue and follow the “happy flow” in our BPEL or BPM processes.
When it comes to technical faults you probably do not want to design error handling in the process itself. If you do, your processes and services will end up being cluttered with all kinds of additional process logic such as while loops, gotos, catches, event handling, and so on to try to recover from technical errors. Technical errors might not be recoverable at all; think of an invoice file that is incorrectly formatted, an invoice file that contains negative numbers while your service or process only accepts positive values, or an invoice file that is mangled during transport. Besides, trying to handle these errors makes your SCA composites look like a mix of spaghetti and circuit boards. Not exactly flexible, agile and manageable: the things we wanted to achieve with service- and process-orientation in the first place.
This blog series describes a possible mechanism to generically handle technical errors in your processes and services -wrapped as SCA composites- in Oracle SOA Suite 11g.
In one of our projects we came across a scenario in which administrators need to be notified in case of technical errors in any of the SCA composites. Next to the notification they want the corresponding composite instance to be terminated. Administrators then investigate the cause of the problem and possibly restart the process instances that are involved. Since every employee uses a task-driven portal, administrators want the error to be presented as a human task in this portal instead of receiving a bunch of e-mails. This needed to be implemented with a minimum of additional (business or process) logic.
To achieve this, the following mechanism is used:
- Use Oracle SOA Suite’s Fault Management Framework to redirect (technical) errors to a custom Java class;
- Have the Java class fire an event containing the unique id of the faulted instance, using the Event Delivery Network (EDN) or Advanced Queuing (AQ) - a sketch of this step follows the list;
- Terminate the composite instance by using the Fault Management Framework and the outcome of the custom Java class;
- Create a single SCA composite to handle all technical errors. This composite subscribes to the event, gathers information on the faulted composite instance, and presents this information as a human task that is assigned to administrators.
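To give an impression of the second bullet, here is a minimal sketch of how the custom Java action could publish such an event over JMS; in our case AQ is exposed to the SOA server as a JMS destination. The class name, JNDI names, and payload format are purely illustrative assumptions, not something the Fault Management Framework prescribes:
package nl.vennster;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// Hypothetical helper called from MyFaultPolicyJavaAction.handleFault().
// The JNDI names are examples; in our project AQ is exposed as a JMS destination.
public class FaultEventPublisher {

    public void publish(String compositeInstanceId, String faultDetails) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/FaultEventConnectionFactory");
        Queue queue = (Queue) jndi.lookup("jms/FaultEventQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            // Keep the payload small: the error-handling composite only needs enough
            // information to look up the faulted instance and create a human task.
            // (No XML escaping here for brevity.)
            TextMessage message = session.createTextMessage(
                "<faultEvent>"
                + "<compositeInstanceId>" + compositeInstanceId + "</compositeInstanceId>"
                + "<details>" + faultDetails + "</details>"
                + "</faultEvent>");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
The error-handling composite from the last bullet then simply subscribes to this queue (or to the corresponding EDN event) and turns the message into a human task for the administrators.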
Thursday, June 10, 2010
Experiencing Coaching a User Experience Graduate
Excitement...
The moment I was asked to coach a User Experience graduate from the Hogeschool Rotterdam, I was enthusiastic. In my professional career I have coached several students, and to me this has always been very inspiring. When you have been working in the area of User Experience for quite some years, like me, working together with students provides new viewpoints and insights. These new insights stimulate me to look differently at my own assignments, and that is very refreshing.
Feeling of insufficiency...
This time there was one difference with all the other student projects I had coached before. This particular graduation assignment was defined by my UX colleague at Vennster without consulting me, because I was involved in other assignments at the time. Moreover, the graduation project dealt with a topic I had very little knowledge of. This definitely was a hurdle to overcome, especially at the start of the project. Content-wise I wasn’t able to give the graduate much advice. That felt like a shortcoming on my part.
Fortunately Vennster employs other professionals who were able to help the graduate with his topic. By asking relevant and provocative questions, we tried to stimulate him to focus his research and come up with refreshing insights and conclusions.
Tip: As a graduation coach you don’t need to know everything about the subject. In the end the graduate will be more experienced on the topic than you are anyway. However, with your knowledge and experience you can still stimulate the graduate to explore all the far corners of the subject by asking the right questions.
No sense of urgency...
It’s in a sponsor’s interest to keep the graduate on the right track regarding the planning and deliverables of a graduation project, which has to be completed within a fixed number of months. Some graduates find this quite a difficult task, possibly because their school assignments mostly have predefined milestones and shorter time spans. Therefore I asked him to set up a planning. Being his sponsor from Vennster, I wanted him to think about what he expected to deliver to me, and I also asked him when he would deliver it. From my experience in coaching other graduation projects I have noticed that for graduates content often is more important than planning. In a business setting, however, content and planning are equally important. If the agreed content isn’t delivered on time, it might no longer be useful, relevant, or valid.
Tip: As a graduation coach you need to help the graduate set up a planning including deliverables and liaisons. The planning should be fine-tuned during the graduation process and deadlines should be kept.
Desperation...
I expected, and asked, the graduate to keep me informed about his way of working and his progress. Most of the working hours we sat in the same office, so I expected communication would not be a problem.
During their education UX students appear to learn a lot about gathering content and applying design methods. However, they seem to have hardly any experience in dealing with a sponsor and the related communication expectations.
Despite, or maybe even because of, all my experience with seasoned professionals, I didn’t realize that. I waited in vain for the graduate to keep me posted about the outcomes and hands-on deliverables. I repeatedly asked for information, but it didn’t help. This way I didn’t get any information. Instead, I should have planned a regular meeting, at least once every two weeks.
Fortunately, social media can be of great help. I started to follow my graduate on Twitter. I tried to figure out what the problem was. I gave tips and tricks to support his way of working. I gave him deadlines. I felt I wasn’t able to coach him properly. I had become a policewoman.
Tip: For a graduation project it is sometimes better to assign the coaching and the sponsoring task to two different people within the guiding company. The coaching professional can then focus on coaching, while the sponsor plays the role of the ordering customer.
Relief...
My UX colleague Nils Vergeer and I each took on one of these tasks. This worked great. Nils, as the sponsor, told the graduate what he expected him to deliver and when, and what would happen if he didn’t. In the meantime I could keep supporting the graduation process and supply him with tips and tricks on how to deliver on time.
Tip: As a sponsor you have to be clear in expressing your professional expectations regarding deliverables and planning.
Joy...
This worked remarkably well. The graduate found his way and graduated on time, presenting a great report on the subject, supported by a tool he designed to help UX designers and software developers work closely together.
Wednesday, November 4, 2009
Oracle Service Bus article on OTN
The Oracle Service Bus article Eric Elzinga and I wrote has been published on the Oracle Technology Network (OTN).
The article is aimed at developers and architects who are familiar with Oracle Enterprise Service Bus (OESB) and are (fairly) new to Oracle Service Bus (OSB). The tutorials in this article highlight the differences between these two products. The tutorials are based on a workshop in the WAAI community, a collaboration of Dutch consultancies (Whitehorses, Approach, AMIS, and IT-Eye). The goal of the WAAI collaboration is to share, bundle, and expand knowledge on the recent Fusion Middleware 11g release.
Monday, November 2, 2009
Governing events and architect anti-patterns
As the name suggests, SOA is all about services. But what about events? In the past, several SOA efforts tended to neglect events, ultimately causing SOA not to deliver its full potential or to fail altogether. So SOA practitioners evangelized the use of events. And of course we as an IT industry came up with new terminology to emphasize this: EDA, SOA 2.0, and event-driven SOA, to name a few.
This blog is not about promoting events, since their importance is (hopefully!) recognized and events are mainstream in today’s SOA initiatives. If not, I encourage you to read this blog that explains why events are important from both a business and a technical perspective. There can be no real SOA without events. Events are just as important as services!
So everything hunky-dory, right? Then why are some SOA projects using events at runtime to model business processes and their interactions and to enable loose coupling, but neglecting to address the governance aspect?
- Organizations set up SOA-registries that include and publish services but not events. Service consumers can discover services, reuse them, retrieve metadata such as ownership, contract, interface, and so on. What about event consumers? What about including events in your registry?
- Architects design taxonomies that structure services into various layers (business services, composite services, and elementary services) and domains (finance, CRM, sales, etc.) but have no taxonomy for events.
Bottom-line: not only use events at runtime, but make events an integral part of your governance processes just as you do for services and processes. That enables reuse of events, dynamic event-discovery, lifecycle-management of events, and so on.
What I’m wondering, though, is whether there is a ‘one-size-fits-all’ solution when it comes to governance of services and events. Does the same taxonomy apply to services and events? Is the lifecycle for services the same as for events? Is the metadata we need and store for effective governance the same for events and services? Do you want to unify governance for services and events?
Some experiences might suggest so. We could structure events into business events, composite events, and elementary events. An event has a contract, interface, and implementation. An event has event producers and event consumers. An event has an owner. An event can be discovered. An event provider can guarantee message delivery. An event can be under development, in production, deprecated, retired, and so on. Replace event with service in these last few sentences and it all seems to fit.
However, I don’t want to rush to conclusions and try to squeeze everything into one all-knowing overall model. I guess that’s a known architect anti-pattern: everything has to fit the boxes we draw and the models we think of, even if reality fails to fit in. We’d rather try to alter reality than change our models :-)
Obvious differences would be that the consumers of services are generally known whereas event consumers could be unknown (hence also better decoupling). This has different consequences for services and events when it comes to dependency management and impact analysis. Also, events and services could have some specific attributes such as consumer type for events: single (queue) versus multiple (topic).
In any case, I’m going to find out! For a new customer project I’ll be defining the business, information, and technical architecture around services and service registries, and defining their governance processes. And guess what? We’re going to include events in this effort. Let’s see what the result will be.
Sunday, October 25, 2009
Presentations Oracle OpenWorld 2009
Oracle OpenWorld and Oracle Develop 2009: it’s a wrap! Just like last year it was an awesome event. Read about some of the highlights and experiences in this previous blog.
Lonneke Dikmans and I presented the following two sessions at Oracle OpenWorld 2009; they can be viewed here:
Approach to Oracle Fusion Middleware 11g
This session presents an approach to the strategic Oracle Fusion Middleware 11g components, using a customer case and in-depth knowledge of the new Oracle SOA Suite 11g. The case study covers a car leasing firm that migrated from Oracle SOA Suite 10g and Oracle WebCenter 10g to Oracle’s strategic platform with Oracle WebLogic solutions and Oracle Application Development Framework 11g.
Topics:
- Overview of the customer’s SOA environment and infrastructure
- Migrating to Oracle WebLogic solutions and Oracle Application Development Framework 11g and how a SOA environment affects the transition
- New features of Oracle SOA Suite 11g and how to migrate to it, with a focus on Oracle Service Bus and Service Component Architecture
Portals: The Way to Realize User Experience in a Service-Oriented Architecture? (IOUG/ODTUG)
Portals seem like a natural fit for realizing the front end in a SOA. This session describes two customer cases in which portals were used to present services to end users. In the first case, a Dutch municipality used Oracle Portal in conjunction with Oracle SOA Suite to offer personalized information and products and services to citizens. In the second case, a car leasing company used Oracle WebCenter as a process portal for users for part of its procurement process. In both cases, the portal did not offer the expected benefits to the organization or the end users. The presentation covers possible use cases for the application of portal technology and the critical success factors for portals in SOA and BPM environments.
Tuesday, October 20, 2009
Best practices 4 – Security and Identity Management
This is the fourth blog in a series of BPM and SOA best-practices. The previous blog in this series was on Oracle ESB and Mediator. This blog will discuss security and identity management in an SOA-environment.
So what exactly is it?
IT-security has become more and more important over the last decades. While at first security was frequently treated as a necessary evil, nowadays it has matured into a separate area of expertise. There can be a lot of confusion about what exactly security and identity management encompass; everyone has a different view on them. When discussing these topics, first agree upon their scope before you delve into them. In this blog they are divided into:
- Identity management
- Authentication; including Single Sign-On (SSO)
- Authorization
- Logging and monitoring
- “Hard” security; more technical security including confidentiality and integrity of data, usage of firewalls, IDS/IPS products, and so on.
The first three together encompass Identity and Access Management (IAM). “Soft” security -such as creating security awareness, training employees, applying physical security to buildings and IT-assets, and availability of IT-assets (together with confidentiality and integrity forming the so-called CIA-triad)- is out of scope for this blog.
Security and SOA
When compared to traditional software development the important question is not whether security in an SOA-environment is important, but whether it is any different -and should therefore be designed differently. The answer to both questions is yes.
To understand why security should be handled differently we need to understand the characteristics of SOA that are key to the security aspect compared to those of traditional software development:
- Next to human-machine interaction there is more machine-machine interaction. This means there is a greater need for automated security mechanisms for purposes of authentication, authorisation, encryption, and so on.
- A SOA-environment generally contains more intermediary stations such as ESB’s and other middleware components. There are more locations for users and administrators to view -possibly confidential- message contents such as credit card information. In this case transport security alone is not enough.
- How can you manage and control various (external) clients that want to access data and/or services if systems are loosely-coupled? E.g. not every client must be allowed to invoke a banking service.
- SOA results in more Straight-Through-Processing (STP), meaning processes are more frequently executed in an entirely automated fashion without human interference. Good security is key since possible security breaches might only be detected later on. Also, the consequences can be graver due to the possibly large number of process instances.
- Services are invoked by both internal and external consumers. A service’s security level is usually determined by its owner. In case of external services, security will be largely determined and enforced outside an organization’s own span of control. The level of security determines the consumer’s trust: “What happens with my data if a service is not secured?”, “Can I trust a service’s result?”, and so on.
These differences clearly impact the way security should be designed within an SOA-environment. It furthermore warrants the need for an integrated and holistic approach on security in an SOA-environment. Use a layered approach to security as for example promoted by the defense-in-depth strategy.
Externalize security
For a number of reasons it is a good design-principle to externalize identity management and security; even more so in an SOA-environment that frequently consists of heterogeneous infrastructure. Every service having its own IAM and security design and implementation leads to a suboptimal solution, more overhead, and a greater chance of security breaches. If security is part of the infrastructure’s components -for instance intertwined in an ESB product- different products will most likely also support different security standards and protocols. E.g. an application server might support SAML 1.1, the WS-Security Username Token Profile, transport security using HTTPS, and LDAPS, while the ESB product supports SAML 2.0, the WS-Security X.509 Token Profile, message security using XML DSIG, and LDAP rather than LDAPS. This is worsened in case external infrastructure supports yet another subset of standards and protocols, and can cause poor interoperability. Use a separate -specialized- component for security instead. This promotes reuse of better security throughout your SOA-environment as well as separation of concerns.
The agent and gateway patterns are very well suited to externalize security. Use gateways for applying common security policies and agents for more service-specific security policies.
Security classification
Define a limited set of security classifications, for example based on the CIA-triad (confidentiality, integrity, and availability), ranging from e.g. “public” to “highly classified”. Determine a minimum set of security measures per classification level. For each new service determine its classification levels; this is usually the responsibility of the service owner. Make classification levels part of your service repository and governance processes. This results in more understandable security regulations, gives better insight into the current and future security of your environment, improves reuse of existing security policies, and prevents reinventing the wheel when establishing security for new services. Most importantly, it results in just the right amount of security being applied, thereby saving money (strive for the lowest possible classification levels without endangering security) while still applying (just) enough security.
Transport versus message security
There are roughly two types of security for message invocation: transport and message security. Transport security secures a message only during transport between service consumer and service provider, using the transport layer, e.g. HTTP over SSL or TLS (HTTPS). That means messages are not protected in intermediary components such as an ESB and not protected directly after being received by the endpoint. Message security secures the message itself, mainly through encryption of the payload using for example public and private keys. Since message security can provide security in the scope you want -so also in intermediaries and after the message has been received- it is generally preferable to transport security. Both transport and message security can be used for authentication (e.g. a signature based on certificates), integrity (e.g. a digest), and confidentiality (e.g. encryption).
Standards
Maybe trivial but very important: use standards to promote interoperability. This includes the usage of security standards such as LDAP(S), HTTPS, SAML, XML DSIG, WS-Security (WSS), and other WS-* standards. Using standards allows secured services to be reused by (both internal and external) heterogeneous infrastructures. Next to technical standards there are also a number of security reference architectures, principles, and guidelines you can leverage.
Before we wrap up, some best-practices per area.
Identity management
Use a centralized identity management repository. This avoids duplicate user management and possible inconsistencies. Divide users into different identity types if needed -such as employees, customers, and suppliers- since different rules and administration may apply to each category. Be careful in allowing external IT-assets and organizations direct access to your identity management solution. In cases such as external hosting, consider identity provisioning to minimize security risks.
Usually you want a service provider to authenticate the original service consumer (user identity) and not some intermediary component such as an ESB. Implement identity propagation of tokens, username/password, etc. so the service provider authenticates and authorizes the identity of the original user that invoked the service. That implies that all intermediary components between service consumer and provider need to be able to transport identity tokens -and possibly transform these from one format to another (e.g. from SSO token into username/password).
Especially in the case of authenticating and authorizing external organizations, consider the trade-off between using specific identities (Mr. X or Mrs. Y) versus more general identities (organization Z). Specific identities result in better traceability and can provide for more fine-grained access control, while more general identities can result in less administration: the number of different identities to manage and synchronize decreases dramatically.
Avoid generic identities such as “consultant” and “trainee” altogether.
Authentication
Define a limited set of authentication levels and differentiate on information (password), possession (token, physical key, text message to a phone), and attribute (voice, fingerprint) as mechanisms. E.g. “basic-level” authentication requiring information, “middle-level” authentication requiring information and possession, and “high-level” authentication requiring attribute or possession together with a check of ownership.
Most organizations promote SSO to improve user-friendliness and provide a better user experience. Determine, however, whether you want SSO for your most classified IT-assets: with SSO, a security breach in only one IT-asset can provide access to a multitude of them. A best-practice is to grant access to IT-assets based on authentication level; if you authenticated using basic authentication, SSO will only grant you access to IT-assets requiring the same or a lower authentication level, not to IT-assets requiring “high-level” authentication.
The SSO-provider needs to be verified and trusted before you can hand over authentication to that provider.
Authorization
Don’t tie rights to IT-assets directly to user identities to avoid high maintenance costs, inflexibility, and lock-in of users. A good design-principle is to use a form of Role-Based Access Control (RBAC) to decouple authorization. Use attributes such as organizational units and/or job titles that do not change frequently over time as intermediary layers in the authorization model. Assign rights in IT-assets to entities in this layer (e.g. organization unit and/or job title) and vice versa assign user identities to these intermediary layer(s). Design the authorization model per identity type (customer, employee, supplier, etc.).
Base authorization on the work or function someone or some organization needs to perform; no more, no less. Avoid “super-users”, usually management and/or IT-staff that have gathered far more privileges over time than they are entitled to. Increase security by assigning more than one role to the various steps in sensitive processes, thereby preventing a single user from being able to execute the entire process.
Logging and monitoring
Functionality and processes in an SOA are spread over different loosely-coupled components. Some logging and monitoring therefore needs to be executed at a higher level than that of an elementary service: at the composite service or process level. This gives rise to the need for a central logging and monitoring component that is able to combine and correlate decentral logs and enables monitoring on process level. The Wire Tap pattern can be used to publish logs, sensor values, and other types of messages from services and middleware to the central monitoring component. Notifications can be managed and implemented separately from the logging and can be published by this central monitoring component. Note that this requires synchronization of the dates and times of the various managed components to enable correct correlation. Determine for every service whether it is allowed to continue operation in case the central monitoring component fails. Is it, for example, allowed from a security point of view to use decentral -localized- logging and monitoring while the central monitoring component is down?
“Hard” security
A best-practice is to divide security into a number of layers. Chart possible vulnerabilities, threats, and the corresponding principles and guidelines to counteract them. This approach results in more effective and efficient security. Examples of such layers are: network security, platform security, application security, integrity & confidentiality, content security, and mobile security. Examples of principles and guidelines are applying compartmentalization (network security), maintaining a central list of allowed and disallowed file extensions for inbound and outbound traffic (content security), and the use of hardening (platform and application security).
Oracle’s direction
In Oracle’s SOA product stack (SOA Suite 11g) security is externalized from almost all products and can be applied using policies. These policies can be configured in a management console and reused by processes and services that are packaged and deployed as SCA composites and components. The policies are based on standards such as WS-Security. Oracle Service Bus (OSB) still contains its own security functionality though. As stated in OSB’s Statement of Direction: “The ability to attach, detach, author and monitor policies in a central fashion will be extended to the Oracle Service Bus (as it has been extended to all other components in the SOA Suite 11g).” In any case you can already secure OSB projects using OWSM.