Tuesday, October 23, 2018

Even Aces make mistakes: Reuse

Every year Debra Lilley is kind enough to round up a group of Oracle ACEs to talk about projects and/or new features at Oracle OpenWorld.
This year we present a session with nine (!) people about the 'favorite' mistake each of us made and what we learned from it.

From the summary:
"A quick diversion from our now traditional featured short talks, this year our EMEA ACE Directors will share their biggest errors and what they learned from it. A fast, fun session that will energize you on your Oracle OpenWorld 2018 experience."

I have five minutes to talk about my favorite mistake, so this blog post elaborates on the topic a bit more.

Reuse

In software, we like to reuse code. It enhances productivity, minimizes the chance of introducing new mistakes, and lets us focus on the new things we are trying to accomplish instead of reinventing the wheel. This has been a very strong driver of Service-Oriented Architecture (SOA).
There are multiple ways of implementing or realizing reuse:
  • create a runtime component that you call from your code to do the job ('a service')
  • create a library or code that you share from a central location and import in your code ('a common artifact in MDS in SOA Suite, or a library in Java or Node.js')
  • a pattern ('template') that you apply as a best practice 
  • a copy (!?) of the code you want to reuse and adapt. 
One of the downsides of reuse is loss of flexibility. If you are reusing something, a change to it will impact all the consumers of that piece of code, service, etc. This is why microservices architecture is all about bounded contexts and decoupling using events.

In SOA there has been a very common misconception: that reuse is good, no matter what. This results in inflexible, very complex code that is expensive to change and, in the end, hard to reuse (!).
This happens a lot with canonical models, and yes, I made that mistake too.

Let's go back in time, to a project where we created an application for permits and grants for the province of Overijssel, using Oracle SOA Suite 11g and a content management system. The solution basically consisted of four different composites (Apply, Process, Decide, Notify), the Oracle Service Bus, the content management system, SAP ERP, a permit application and a .NET user interface.

For the integration between the .NET interface and the BPEL process support, we designed a canonical model.

Canonical model side note


The picture below shows what a canonical model is supposed to do: make sure that in the service bus and between applications, a canonical model is used.


End of side note


However, we decided to use the canonical model inside of our BPELs as well. This had several advantages:
  1. we did not have to design the WSDL and XML Schemas for the BPEL
  2. we could assign the entire message to the BPEL input and output variables

Change

We were done very fast, but we found we needed a number of changes in the canonical model to facilitate the .NET application. They wanted different structures and different fields. None of these had any functional impact on the process flow, since the BPEL was mostly driving the process, adding approvals and enrichments, and fetching and storing data in the correct backend system.
However, because we used the canonical model in all of our BPELs, we had to redo all of our assignments and XSLTs to pick up the new version, making sure nothing changed in the flow and functionality of the component.
The change cost the same amount of time as the initial realization, because the work in BPEL is not in clicking together the scopes, but in building the logic in the assignments and XSLTs.

Lessons learned

We learned a number of things from this particular approach:

  1. Separation of concerns matters. This is a long-standing IT and computer science principle (see for example https://en.wikipedia.org/wiki/Separation_of_concerns). There is a reason your house's electrical wiring is split into separate circuit groups: if one of them fails, the other parts of the house are unaffected. When you have to make a change for a specific purpose, you should have to make that change in one place only. In the case of the canonical model, that place is the service bus and the components that call the service bus.
  2. Reuse has an impact on changeability and maintainability. To make sure things can still be changed, apply the tolerant reader pattern (https://martinfowler.com/bliki/TolerantReader.html); see the sketch after this list.
  3. Pick your reuse mechanism carefully: a pattern (or template) you want to apply, a library you will import, a service you will call, or a copy that is quick to make and that you can change independently.
  4. Don't send data to systems that don't care about it. In this case we were sending data to the BPEL and dragging it along, even though the BPEL did not need it. Typically all you need in a BPEL are identifiers and dates. When you need more data in a specific scope, you can fetch it from the appropriate source.
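To illustrate lesson 2, here is a minimal sketch of the tolerant reader idea in Node.js. This is not the actual project code and the field names are hypothetical; the point is that the consumer reads only the few fields it needs and ignores the rest of the message, so most changes elsewhere in the canonical model do not touch it.

// tolerantReader.js - minimal sketch, hypothetical field names
function readPermitRequest(message) {
  message = message || {};
  // pick only the identifiers and dates the process needs and ignore everything else
  return {
    permitId: message.permitId,
    applicantId: message.applicant ? message.applicant.id : undefined,
    submittedOn: message.submittedOn
  };
}

// extra or renamed fields elsewhere in the payload do not break this consumer
var processData = readPermitRequest({permitId: '42', applicant: {id: '7', name: 'X'}, newUiField: 'ignored'});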
We refactored the code and made specific WSDLs and XSDs for all BPELs and used the canonical model in the OSB, where it belongs. 
It became a lot easier to work with the messages in the BPEL, and changes in the UI had little or no impact on the process, as it should be!

Code smells

So how do you know if you are reusing the canonical model too much? These are some telltale signs that point in that direction:

SOA Suite

  1. You have to redeploy 'MDS' every time a new feature is defined
  2. All your artefacts (WSDLs, XSDs, DVMs) are in MDS; there are no 'local' composite artefacts
  3. You have a lot of data in your BPEL messages that appears in none of the assigns, or in only one of them
  4. You have a lot of merge conflicts with other teams that are working on completely different topics
  5. You have if statements in your BPEL, based on something in your canonical model, with branches that never merge back into one common flow

Java, JavaScript

  1. Your ORM library gets updated all the time and your code needs to change because of it.
  2. You filter out values and fields in most of your code
  3. You are extending the common objects to add your own behavior and attributes, and filtering out the common behavior.

Conclusion

Reuse is a powerful and important means to be productive and to avoid mistakes. However, reuse is not a goal; it is a means to an end. Modern systems need to be changeable first and foremost. Reuse should be applied within an appropriate context, and global models should be treated carefully and limited to where they add value and are needed.
We can all apply lessons from domain-driven design principles and microservices architectures in our other architectures as well, to make sure we don't paint ourselves into a corner!

Happy coding 😎

Sunday, December 10, 2017

Troubleshooting Oracle API Platform Cloud Service

One of the challenges when working in integration is troubleshooting. This becomes even more challenging when you start using a new product.

Recently I worked with Oracle Product Management (thank you Darko and Lohit) to troubleshoot issues with the OAuth configuration of APIs in Oracle API Platform Cloud Service.

Setup

The setup was as follows:
  1. An API Gateway node deployed to Oracle Compute Cloud Classic as an infrastructure provider
  2. Oracle Identity Cloud Service in the role of OAuth provider
We set up an API with several policies, including OAuth for security. When we called the service, it gave us a '401 Unauthorized' error.

Oracle API Platform Cloud Service troubleshooting

The Oracle API Platform Service offers analytics for each API. You can navigate there by opening the API Platform Management portal, click on the API you want to troubleshoot and click on the Analytics tab (this is the bottom tab).

Click on Errors and Rejections, after setting the period you are interested in. Usually when you are troubleshooting, you would like to see the last hour.

Different types of analytics in an API

Now you can scroll down to the error distribution and see the errors that occurred. In this case, because I selected "Last Week", you see a number of different errors that occurred last week and how often they occurred. When you run your test again, you will see one of the errors in the distribution increase, giving you insight into the type of error.

Distribution of each error type

We tried different configurations. As you can see from the distribution, the graph tells us that the OAuth token was invalid and that in another case we had a bad JWT key. This meant we had to take a look at the configuration of the OAuth profile of the Oracle API Gateway Node (see the documentation on how to configure Oracle Identity Cloud Service as an OAuth provider).

OAuth token troubleshooting 

We had a token, but it appeared to be invalid. It is hard to troubleshoot security: what is wrong with our configuration? Why are we getting the errors that we get? When you successfully obtain an OAuth token, you can inspect it with the JWT debugger at jwt.io.
  1. Navigate to https://jwt.io
  2. Click on Debugger
  3. Paste the token in the window on the left-hand side
JWT debugger with default token example

The debugger shows you the header, the payload and the signature.

  • Header: the signing algorithm that is used (for example HS256) and the token type (typ, for example JWT).
  • Payload: the claims. There are three types of claims: registered, public and private. A registered claim contains fields like iss (issuer, in this case https://identity.oraclecloud.com/), sub (subject), aud (audience), etc. See https://tools.ietf.org/html/rfc7519#section-4.1 for more information.
  • Signature: the signature of the token, so you can verify that nobody tampered with it.
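If you prefer not to paste a token into a website, you can also decode it yourself. The following is a minimal Node.js sketch (an illustration, not part of the original setup) that decodes the two base64url-encoded JSON parts; it does not verify the signature.

// decodeJwt.js - minimal sketch: decodes a JWT, does not verify it
var token = process.argv[2]; // pass the raw token as the first argument

function decodePart(part) {
  // Node's base64 decoder also accepts the URL-safe alphabet used in JWTs
  return JSON.parse(Buffer.from(part, 'base64').toString('utf8'));
}

var parts = token.split('.');
console.log('header :', decodePart(parts[0])); // algorithm and token type
console.log('payload:', decodePart(parts[1])); // look at the iss, sub, aud and exp claims

Run it with: node decodeJwt.js <token>.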

Now you can compare that to what you have put in the configuration of Oracle Identity Cloud Service and the configuration of the Oracle API Gateway Node.


Oracle API Platform Gateway Node troubleshooting

Apart from looking at the token and the analytics it can help to look at the log files on the gateway node. The gateway node is an Oracle WebLogic Server with some applications installed on it.

There are several log files you can access.
  1. apics/logs. In this directory you find the apics.log file. It contains stack traces and other information that helps you troubleshoot the API.
  2. apics/customlogs. If you configured a custom logging policy in your API, the log files are stored in this directory. You can log the content of objects that are passed through the API. See the documentation about using Groovy in your policies for information about the variables that you can use.
  3. 'Regular' managed server logs. If something goes wrong with the connection to the Derby database, or other infrastructure issues occur, you can find the information in the /servers/managedServer1/logs directory.

Summary

When troubleshooting APIs that you have configured in Oracle API Platform cloud service you can use the following tools:
  • jwt.io Debugger. This tool lets you inspect OAuth tokens generated by a provider.
  • Oracle API Platform Cloud Service Analytics. Shows the type of error.
  • Oracle API Platform logging policies you put on the API. Lets you log the content of objects. 
  • Log files in the API Gateway node:
    • {domain}/apics/logs for the logs of the gateway node. Contains stack traces, etc.
    • {domain}/apics/customlogs for any custom logs you entered in the api
    • {domain}/servers/managedServer1/trace for default.log of the managed server
 Happy coding!

Thursday, November 30, 2017

Sharing Applications within your team in VBCS, PCS and MCS

Oracle Visual Builder Cloud Service is a so-called 'low code' platform to build user interfaces in the cloud, without the need for an IDE or for installing and deploying to servers.

This can be done by a single user, but it can also be used in a team setting.

Unfortunately it is not very intuitive how you can share different applications within a team. By default, when you create an application, only the person who created it will see it.

This is different from other cloud products like MCS and PCS. In MCS you can see all mobile backends, APIs, etc. once you have the role 'team member'. In MAX, applications are visible to anyone with the role MobileEnvironment_Develop.

In PCS you can create spaces to allow sharing of applications.


How applications are shared per Oracle product:
  • Process Cloud Service (PCS): either private or shared in a space
  • Mobile Cloud Service (MCS): visible to users with the role team member
  • Mobile Application Accelerator (MAX): visible to users that have the role MobileEnvironment_Develop
  • Visual Builder Cloud Service (VBCS): explicit assignment to users with the role team member

How to share your VBCS application

  • Go to the "Home" tab 
  • Flip the card of your application 
  • Enter the name of the team member (only existing team members show up)
  • Click + 


The application is now visible to those team members you added.

Conclusion 

The feature is not hard to understand, but it is hard to find. It is not well documented, and the concept is implemented differently in different (related) PaaS products.


Happy coding configuring 😉

PS: in the new documentation of VBCS I found a chapter describing this feature; it is improved faster than I can blog!

Monday, October 2, 2017

My First Blockchain in the Oracle Cloud

Being an integration person, I am very interested in blockchain architecture as an alternative to 'traditional' integration options like 'Enterprise SOA' and centralized Master Data Management.

This blog will not discuss the architecture of blockchain, but describes how you can get started as a developer with blockchain using Oracle Cloud.

It assumes you have an instance of Oracle Compute Cloud at your disposal and describes how to get started with MultiChain.

I executed the following steps:
  1. Create two instances of Oracle Compute Cloud
  2. Install MultiChain
  3. Create the first chain

Create two instances of Oracle Compute Cloud

To show the concept of the blockchain you need at least two nodes. In this case I created two instances of Oracle Compute Cloud Classic.

  1. Login to your identity domain: cloud.oracle.com
  2. On the Dashboard, select "Create Instance" and pick "Compute Classic"
  3. This gives you an overview as shown in the figure below. First you need to save the private key to be able to access the instance with ssh. 
    1. Type a name for the key
    2. Save it on your computer
  4. Select "Customize"
  5. Finish the Wizard
    1. Select the image "OL_7.2_UEKR4_x86_64"
    2. Select a general purpose shape, unless you are planning to build a real blockchain, then it is recommended to create a High Memory Shape
    3. Add the key you saved to the instance, or upload your own key
    4. For network leave the defaults
    5. Leave defaults in the storage tab 
  6. You can see the result in the orchestrations tab
Orchestration showing the instances that are created

In order to create a chain and have multiple nodes that are aware of each other, you need to make sure the instances can communicate.

To enable this, you need to add a Security Application and a Security Rule for the instances (many thanks to Simon Haslam, who helped me figure this out!):

  1. Click on the Network tab in my instances
  2. Select Shared Network
  3. Select Security Application
    1. Create a security application for the default port
      1. Name: TCP2671
      2. Protocol: TCP
      3. Port start: 2671
      4. Port end: 2671
      5. Description: default Multichain port
    2. Create a security application for the RPC port
      1. Name: TCP2670
      2. Protocol: TCP
      3. Port start: 2670
      4. Port end: 2670
      5. Description: default RPC multichain port
  4. Select Security Rules
  5. Click Add Security Rule
  6. Fill in the fields
    1. TCP2671 for security application
    2. Source: security ip list (public-internet)
    3. Destination: security list (default)
  7. Add another security rule for the default RPC port
    1. TCP2670 for security application
    2. Source: security ip list (public-internet)
    3. Destination: security list (default)




Now you are ready to create the second instance. Note that you don't have to set the security rules for new instances, because they are pointing to the default security list in this case. The new instances are added to that automatically. This is of course not something you would do in real life, but for My First Blockchain in the Oracle Cloud it works fine :) 


Install MultiChain

You need to install MultiChain on all your instances. 

To connect you need an SSH client. I am currently working on a Windows 10 machine, so I use PuTTY. The key that you downloaded from the cloud can't be used directly in PuTTY; you need to import it.
  1. Open PuTTYgen
  2. Click import key
  3. Browse to the key you just saved to your computer
  4. Click on "Save Private Key"
  5. Open PuTTY
  6. Enter the details of the instance as shown below and save the session
    1. Put in the public IP address of the instance
    2. Point to the SSH private key in the Auth tab
  7. Open a session
  8. Install wget: sudo yum install wget -y
  9. Run the following in the terminal:

cd /tmp
wget https://www.multichain.com/download/multichain-1.0.1.tar.gz
tar -xvzf multichain-1.0.1.tar.gz
cd multichain-1.0.1
sudo mv multichaind multichain-cli multichain-util /usr/local/bin
Repeat this for the second instance and, if you are testing with more nodes, for all the instances you want to include.

Creating my first chain

Now you are ready to start the tutorial to get acquainted with MultiChain.

In order to keep track of the different servers, I have given the nodes different colors in my PuTTY sessions.

  1. Connect to node 1 and execute the commands that are referenced as 'server1'  in the tutorial in this window
  2. Connect to node 2 and execute the commands that are referenced as 'server2' in the tutorial in this window
Note that I needed to open two PuTTY windows for server 2: one to start the node:

multichaind chain1@[ip-address]:[port]

And one to start the multichain-cli chain1 interaction. If you don't start the node, you will get an error:

MultiChain 1.0.1 RPC Client


Interactive mode
chain1: getinfo
error: couldn't connect to server
With my two windows open, the interactive client was working like a charm... 


Three windows testing MultiChain from my Windows machine

Happy coding!

Thursday, September 7, 2017

Oracle Mobile Cloud Service: team members accessing APIs

Oracle Mobile Cloud Service is set up around different personas. When you log in to MCS you see a list of roles.


The mobile app developer is the developer who creates a mobile application and uses APIs that are exposed in MCS. The service developer is the developer who creates the APIs, connectors, etc.

This distinction is very useful: it helps in making sure the documentation is targeted to the right people and the content is organized in a way that makes sense for these different personas.

Sometimes, however, it does not work as well. As I discussed in an earlier post, the command line tools to deploy and test services are hidden in software that is targeted at mobile app developers. In this post I want to discuss another use case that is apparently not that obvious: a service developer calling an API.

In our project we are creating APIs for several different mobile app developers. Before we publish an API and a mobile backend, we want to test that mobile backend. We secured the APIs with security roles, because we want to make sure that APIs for internal use are not accidentally exposed to external companies. To be able to test the API, we had to assign the correct role to the team member. This is when the challenge started.

Assigning roles to users in MCS

There are two ways of assigning roles to users for MCS. The first method is using MCS Mobile User Management, which can be accessed from the mobile portal UI:

Manage users and roles from the MCS portal

The second way is using the services dashboard, using the "Users" tab and clicking "manage roles" for a specific user:
Manage roles of a user using the "Users" tab from the service dashboard

But before you can assign the role, you have to go to the user management part of the MCS and create a role in a realm. This is described in the documentation of MCS: "Set Up Mobile Users, Realms and Roles". It also describes naming conventions, for example for roles: 

"The naming convention for Oracle Cloud custom roles that represent MCS mobile user roles is: {serviceName}_MobileEnvironment_{rolename}. For example, for a role named “APIRole” in the environment with service name “poeo342ed” the custom role in Oracle Cloud would be poeo342ed_MobileEnvironment_APIRole." 

Then you can create a new user account for the tester and assign the newly created role to this user. (S)he then uses this account, after logging into MCS and changing the default password, to test the API from Postman, adding the username and password as Basic Authentication to the header of each call.
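For reference, such a test call looks roughly like this from Node.js instead of Postman. The host, API path and backend id below are placeholders for your own environment; note that a call to a mobile backend also needs the Oracle-Mobile-Backend-Id header, which you can find in the backend's settings.

// callApi.js - minimal sketch; replace host, path and backend id with your own values
var https = require('https');

var options = {
  hostname: 'mymcsinstance.mobileenv.example.com',  // placeholder MCS host
  path: '/mobile/custom/sales/products',            // placeholder custom API endpoint
  method: 'GET',
  headers: {
    'Authorization': 'Basic ' + Buffer.from('username:password').toString('base64'),
    'Oracle-Mobile-Backend-Id': 'your-mobile-backend-id'
  }
};

https.request(options, function (res) {
  console.log('status:', res.statusCode); // 401 means the role or realm is not set up correctly
  res.pipe(process.stdout);
}).end();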

This all works very nicely. But we wanted to assign this role to ourselves. However, the users that were set up as team members did not show up in the MCS User management list of users. 

So I navigated to my services dashboard again and clicked on the tab "custom roles". Sure enough the role I just created was there, so I assigned my own user to the role and added my username / password to the Postman API call.
The result: "Unauthorized"

Solution

Users should not only be assigned a role to access an API, they should also be part of the realm that is associated with the mobile backend. Unless, of course, you use anonymous access, which does not work when you are securing your APIs with roles.
In our example we did not explicitly create a realm, so the mobile backend was associated with the Default Realm. We created the new tester in the Default Realm, so this 'role' was automatically assigned to that user. The team members were not added to that realm and therefore did not show up in the list of users of the default realm.

We added the team members to the Default 'role', which corresponds to the realm in the mobile user management instance, and voilà! We can access the APIs :)

Role that represents the default realm in the mobile cloud service instance

Happy coding!

Sunday, July 23, 2017

MCS implementation: How to deploy your custom code with additional libraries

When implementing APIs that you have defined in Oracle Mobile Cloud Service (MCS), you don't want to reinvent the wheel.  That is why it is important to know how to deploy other libraries with your custom code implementation.

The custom code service that you use when implementing your APIs is backed by the following libraries:

  • Node
  • Request
  • Express
  • Bluebird
  • Custom code SDK
The internal code of MCS uses these libraries. These libraries (with the exception of the '.then' construct from bluebird) are not available to you in your custom code, unless you add them to your custom code zip file. So let's look at an example I ran into during the project, using bluebird.

Adding bluebird

A lot of the platform APIs that you call from your custom code are executed asynchronously. These methods return a so-called 'promise'. A common use case is to chain the calls using a '.then' clause; in that case you don't need to add bluebird to your project. However, in our project we needed to join multiple asynchronous calls, not just chain them.

An example: 

var Promise = require('bluebird');

// both platform API calls return promises
var productIds = req.oracleMobile.database.getAll([table_name], {fields: 'id'}, httpOptions);
var orders = req.oracleMobile.database.getAll([another_table_name], {fields: 'id,name,productId'});

Promise.join(productIds, orders).then(
    function (result) {
        // your code here
    },
    function (error) {
        // your error code here
    });

Since this uses not just '.then' but also '.join', you need to require the bluebird library in your JavaScript file (var Promise = require('bluebird');).

According to the documentation you should add bluebird to your zip file; however, in our case it worked by just adding the require statement. There was no need to add the bluebird library to the zip.

Adding other libraries

In our current project a custom implementation depends on a barcode generator library. You can add this to the custom code zip file that is uploaded to MCS. Note that these additional modules are not shared across custom code modules, and you can't install a module that depends on a binary (executable file) on the server.

To use such a module in your code, you have to add it to the dependency list in the package.json file and run npm install.
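As an illustration, this is roughly how the barcode module could then be used inside a custom code endpoint. The route and the barcode options are made up for this sketch; only the general shape of an MCS custom code module (module.exports receiving the service object) follows the product's pattern.

// sketch: hypothetical endpoint in sales.js that returns a barcode image
var bwipjs = require('bwip-js');

module.exports = function (service) {
  service.get('/mobile/custom/sales/barcode/:orderId', function (req, res) {
    // render the order id as a Code 128 barcode PNG
    bwipjs.toBuffer({bcid: 'code128', text: req.params.orderId}, function (err, png) {
      if (err) {
        res.status(500).send(err.toString());
      } else {
        res.status(200).type('png').send(png);
      }
    });
  });
};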


Note: when you download the scaffolding code, MCS automatically creates a package.json file. This contains the version "1.0". This causes problems, because npm uses semantic versioning.
Running npm install on this code will result in the following warnings:

npm WARN Invalid version: "1.0"
npm WARN sales No description
npm WARN sales No repository field.
npm WARN sales No README data
npm WARN sales No license field.

You can solve this by changing the version to 1.0.0. See for more details: http://semver.org/

In the rest of this blog, I assume you have set up the MCS custom code testing tools.
In that case you need to take the following steps to deploy the custom code with the additional libraries to MCS:

  1. Edit package.json
  2. Run npm install
  3. Test the code locally with mcs-ccc and mcs-test or postman (or cURL)
  4. Deploy the code with mcs-deploy

1. Edit Package.json 

In our case the package.json looks as follows:
{
  "name" : "sales",
  "version" : "1.0.0",
  "description" : "The API that facilitates ordering tickets based on a barcode.",
  "main" : "sales.js",
  "dependencies": {
    "bwip-js": "1.4.2"
  },
  "oracleMobile" : {
    "dependencies" : {
      "apis" : { },
      "connectors" : { }
    }
  }
}

2. Run npm install

Navigate to the root directory of your custom code API and install the npm modules you need in your project. In this case the module name is bwip-js. 
  npm install bwip-js
 npm install

3. Test the code locally

Make sure the code is working as expected before uploading it to MCS. This is described in a previous post.

4. Deploy the code to MCS

You can run the deployment as usual, from the MCS testing tools: 
mcs-deploy toolsConfig.json --verbose

Monday, July 17, 2017

Testing, packaging and deploying custom code using MCS custom code test tools

In a previous post I described how to set up the MCS custom code test tools. In this post I will describe how to test, package and deploy your custom code using these tools. You should have installed the MCS custom code tools and updated the toolsConfig.json file with the correct URL, mobile backend id and OAuth data.

Test your code

Once you have implemented your custom code, you want to test it. Of course you can test it by uploading the implementation into MCS. However, it is much better to test it locally and make sure it works, before you upload it to MCS. Since your custom code probably uses MCS platform APIs, it is convenient to use the mcs-ccc as a local 'container'. Note that when you run the test, it will call the platform APIs in your instance in MCS, so if you insert data in the database, it ends up in the cloud, even though you are running the code locally!

Components of the MCS testing tools and communication with MCS instance

You can either run the tests that are defined in toolsConfig.json or you can run tests in Postman or cURL.

When you want to run the test from Postman or cURL, simply point to localhost:4000 instead of the MCS path. Don't forget that mcs-ccc runs on HTTP, not HTTPS.

The correct values for the port can be found in the output of the console when you start mcs-ccc in verbose mode.
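For example, an implementation that you would normally call at https://<your MCS host>/mobile/custom/sales/products can be called locally like this. The API name and route are placeholders for whatever your custom code defines, the port should match what the mcs-ccc console prints, and you add the same authentication headers you would send to MCS if your setup requires them.

// localTest.js - minimal sketch that calls the local custom code container instead of MCS
var http = require('http'); // plain http, not https

http.get('http://localhost:4000/mobile/custom/sales/products', function (res) {
  console.log('status:', res.statusCode);
  res.pipe(process.stdout);
});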

Instead of running tests from Postman or cURL, you can also run tests that are automatically generated in toolsConfig.json:
  1. Run npm install
  2. Run mcs-ccc toolsConfig.json --verbose
  3. Open a separate command line
  4. Run mcs-test <path to toolsConfig.json> <testname> --verbose; in this case: mcs-test toolsConfig.json getProductsprices --verbose
  5. This returns the response in the command line
If you have a template parameter in your call, you have to hard code this in toolsConfig.json and run the test (you can spot these by looking for "PARAMETER").

The advantage of this approach is that the test is automatically generated. The disadvantage is that none of the results are validated and, last but not least, you have to hard-code the parameters. For that reason we usually run real system tests in Postman. The testing tool can help the developer in the beginning, to make sure the code that is uploaded will run. It is more for 'smoke testing' than for real testing of your code.

Package and deploy your code

Once you are ready to package and deploy your code to MCS, you can use mcs-deploy:

  1. Navigate to the package
  2. Run  mcs-deploy toolsConfig.json --verbose
  3. Enter the username and password when prompted
You should see the implementation in the MCS API implementation page.
sales 1.0 added as default implementation after running mcs-deploy