Wednesday, December 5, 2018

Node.js musings part I: node version management

I started programming at a very young age, when I was twelve. My first programming language was Basic. I wrote a program that would generate random calculation problems in different categories (under 10, under 100). If the answer given by the user was incorrect, a picture based on 10 x 10 squares would show the correct answer visually.
It was actually used by a remedial teacher (my mom 😉) on our home computer.

Since then I have learned a language comparable to Pascal, used LISP at university, and taken classes in C, C++, MFC, Visual Basic and Java.

The next software I wrote was after I graduated, for a marketing department: a program, written in Visual Basic, that showed in different languages how much water a water closet or shower would use. That was the first time I learned about the hassle of versions in real life.
The program ran fine on my Windows machine (I think it was Windows 95), but on the salesperson's machine it would not print: the library on that machine was incompatible with my compiled code. I was in 'DLL hell' 😱, as my professor used to call it.

I stayed away from procedural languages and decided to focus on Java and JEE. I loved it! No issues with native libraries, object orientation to structure your code, automated builds, test frameworks, etc.
Of course, I was involved in multiple projects. Some supported Java 1.4, others 1.5, and so on. At first I kept different versions on my machine and updated environment variables, but soon I needed different application servers as well, so I started creating virtual images for VirtualBox, each with its own JDK and application server versions. At least I had no issues with native libraries, like I remembered from Visual Basic! It was not ideal, but I managed and felt in control.

Today we are doing more and more projects in the (Oracle) Cloud: Oracle SOA CS, Oracle Integration Cloud Service, Oracle API Platform Cloud Service and Oracle Mobile Cloud Service.
Oracle Mobile Cloud Service expects Node.js code for the mobile backend functions that you write. This meant I had to learn JavaScript. Not everything was easy from the start: the asynchronous nature of Node was something I definitely had to get used to. async/await to the rescue ;)
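
To illustrate what async/await buys you, here is a minimal, self-contained sketch (getOrders is a made-up promise-returning function, standing in for any asynchronous call):

// Without async/await you chain .then() callbacks; with it, the same
// logic reads top to bottom, like synchronous code.
function getOrders(customerId) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(['order-1-for-' + customerId]), 100));
}

async function main() {
  const orders = await getOrders(42); // execution resumes when the promise settles
  console.log(orders); // [ 'order-1-for-42' ]
}

main();
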
The beauty: no more application servers, no more multiple VMs for different versions: just code! So far so good. :)

Then we moved to a different version of the cloud service, upgraded our Node version to 8.11.3 and happily started using the new version. I think right now we are on version 11. Of course this does not have a happy ending: a week ago I started investigating Oracle Blockchain Cloud Service. It expects Node version 6 😰. And now I am back in version hell: Node.js uses specific native libraries under the covers that, of course, are not the same between the different versions. I need to be able to switch between versions. Some projects expect the paths to point to the right versions, and constructs like await are not supported in version 6, so running my code in NetBeans becomes complicated...

nvm-win to the rescue

I was considering creating images for VirtualBox again (also because I still like bash better than Windows PowerShell), but I decided to research the topic a little bit. I stumbled upon this project: nvm-windows

It looks really good: I can switch versions without having to fiddle with the path or environment variables myself; it is all managed by this package, which is written in Go (another language I might want to learn, but one thing at a time ;) )

Here is how to install it on Windows:
  1. Uninstall Node from your local machine and remove all folders related to it: C:\Users\{username}\AppData\Roaming\npm and C:\Users\{username}\AppData\Roaming\npm-cache
  2. Download nvm-setup.zip
  3. Unzip it
  4. Run nvm-setup and accept the defaults
  5. Open a new PowerShell window and type nvm
  6. It will show the version (1.1.7 in my case)
  7. Install the versions of Node you need. In my case:
    1. nvm install 6
    2. nvm install 8
    3. nvm install latest
You can now list your versions and switch to the one you want by typing:
  • nvm list
  • nvm use 6.15.1
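
Switching back and forth is then a single command; a quick sanity check could look like this (version numbers will differ on your machine):

  nvm use 8.11.3
  node -v    (prints v8.11.3)
  nvm use 6.15.1
  node -v    (prints v6.15.1)
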
I am not sure I love Node.js and JavaScript as much (yet) as I love Java, but it might happen in the future...

Any comments and pointers on how you deal with versions of Node are much appreciated; please feel free to comment on this blog or tweet them!

Happy coding 😏

Monday, December 3, 2018

Setting up fn on Kubernetes in the Oracle Cloud (OKE)

In this post I will briefly describe how you can install fn on Kubernetes in the Oracle Cloud (OKE).

Prerequisites

  1. You need to have access to Oracle Cloud Infrastructure (OCI) with a unique account. If you have a federated account, you should create another one to be able to create a Kubernetes cluster. This account must have either Administrator privileges or belong to a group to which a policy grants the appropriate Container Engine for Kubernetes permissions. Since my example is for R&D purposes, I will be using an account with Administrator privileges.
  2. Install and configure the OCI CLI, generate an API signing key pair and add that public key to your username. 
  3. Create a separate compartment for the cluster and make sure it has the necessary resources; your root compartment needs the policy Allow service OKE to manage all-resources in tenancy
  4. Install and set up kubectl on Windows (tip: use Chocolatey)
  5. Install helm

Create the Kubernetes cluster

  1. Navigate to your console
  2. Under Developer Services, select "clusters"
  3. Pick the compartment that you created or want to use
  4. Click on "Create Cluster" and accept all the defaults
  5. Click "Create"
  6. Inspect the details 
  7. Download the kubeconfig file and store it in your .kube directory
If it fails because of service limits, raise an SR with Oracle support to get your limits increased.
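
Once the kubeconfig file is in place, a quick way to check that kubectl can reach the cluster (assuming the file is picked up from the default .kube location):

  kubectl get nodes
  kubectl cluster-info

If the worker nodes report a Ready status, the cluster is good to go.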

Install fn 

  1. Make sure your server version of helm is up to date (helm init --upgrade)
  2. Create a directory fnhelm and cd into it
  3. Clone the git repo: git clone https://github.com/fnproject/fn-helm.git
  4. Install the chart dependencies from inside the cloned directory: cd fn-helm && helm dep build fn
  5. Install it in your cluster: helm install --name my-release fn
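
Put together, the sequence looks roughly like this (my-release is just an example release name; pick your own):

  helm init --upgrade
  mkdir fnhelm && cd fnhelm
  git clone https://github.com/fnproject/fn-helm.git
  cd fn-helm
  helm dep build fn
  helm install --name my-release fn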

Deploy your function to the cluster

  1. Make sure you are logged in to your dockerhub account
  2. Set the fn context to the right value
  3. Deploy the function to the cluster

The fn context

When you use fn deploy, it uses a context to determine where to deploy to and which registry to use. The URL of the Fn server you deployed to the Kubernetes cluster can be found by issuing the following command:

  • echo http://$(kubectl get svc --namespace default my-release-fn-api -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
However, you can also look it up using a kubectl proxy: 
  • kubectl proxy 
  • Navigate to a browser: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/# (kubectl proxy serves plain HTTP on port 8001)
  • Select "Services" in the navigation menu at the left


  • Click on the External Endpoint of the my-release-fn-api 
  • You should see the following:
Now we can create the context by running the following commands:
  • fn list contexts
  • fn create context ociLonneke --api-url [your url] --provider default --registry [your_dockerhub_username]
  • fn list contexts
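
Note that creating a context does not make it the active one; switch to it before deploying (the name matches the context created above):

  fn use context ociLonneke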

Deploy the function to the cluster

Finally! We are ready to deploy our first function to our cluster. In this case I use the Java tutorial example. 
  • fn init --runtime java --trigger http javafn
  • fn build
  • fn deploy --app java-app
This will result in the following output (or something similar)

Successfully created app:  java-app
Successfully created function: javafn with lonneke/javafn:0.0.2
Successfully created trigger: javafn-trigger
Trigger Endpoint: http://xxx.xx.xx.xx:80/t/java-app/javafn-trigger

Test the function

Open a browser and type the URL as stated by the trigger endpoint. You should see the greeting returned by the function.
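
You can also call the trigger from the command line; with the endpoint printed above, the call would look like this (the default Java boilerplate answers with a hello message):

  curl http://xxx.xx.xx.xx:80/t/java-app/javafn-trigger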

Next steps

Now that you have your Kubernetes cluster up and running, you can move your functions from your local installation to the cluster and start using them in production!

Happy coding 😊

Tuesday, October 23, 2018

Even Aces make mistakes: Reuse

Every year Debra Lilley is kind enough to round up a couple of Oracle ACEs to talk about projects and/or new features at Oracle OpenWorld.
This year we present a session with 9 (!) people about our 'favorite' mistakes and what we learned from them.

From the summary:
"A quick diversion from our now traditional featured short talks, this year our EMEA ACE Directors will share their biggest errors and what they learned from it. A fast, fun session that will energize you on your Oracle OpenWorld 2018 experience."

I have five minutes to talk about my favorite mistake, so this blog post elaborates on the topic a bit more.

Reuse

In software, we like to reuse code. It enhances productivity, minimizes the chance of introducing new mistakes, and lets us focus on the new things we are trying to accomplish instead of reinventing the wheel. This has been a very strong driver of Service Oriented Architecture (SOA).
There are multiple ways of implementing or realizing reuse:
  • create a runtime component that you call from your code to do the job ('a service')
  • create a library or code that you share from a central location and import in your code (a common artifact in MDS for SOA Suite, or a library in Java or Node.js)
  • a pattern ('template') that you apply as a best practice 
  • a copy (!?) of the code you want to reuse and adapt. 
One of the downsides of reuse is loss of flexibility. If you are reusing something, a change to it will impact all the consumers of that piece of code, service, etc. This is why microservices architecture is all about bounded contexts and decoupling using events.

In SOA there has been a very common misconception: that reuse is good no matter what. The result is inflexible, very complex code that is expensive to change and, in the end, hard to reuse (!)
This happens a lot with canonical models in particular, and yes, I made that mistake too.

Let's go back in time, to a project where we created an application for the province of Overijssel for permits and grants, using Oracle SOA Suite 11g and a content management system. The solution basically consisted of four different composites (Apply, Process, Decide, Notify), the Oracle Service Bus, a content management system, SAP ERP, a permit application and a .NET user interface.

For the integration between the .NET interface and the BPEL process support, we designed a canonical model.

Canonical model side note


The picture below shows what a canonical model is supposed to do: make sure that a canonical model is used in the service bus and between applications.


End of side note


However, we decided to use the canonical model inside our BPELs as well. This had several advantages:
  1. we did not have to design the WSDL and XML Schemas for the BPEL
  2. we could assign the entire message to the BPEL input and output variables

Change

We were done very fast, but then we found we needed a number of changes in the canonical model to facilitate the .NET application: they wanted different structures and different fields. None of these changes had any functional impact on the process flow, since the BPEL was mostly driving the process, adding approvals and enrichments, and fetching and storing data in the correct backend system.
However, because we used the canonical model in all of our BPELs, we had to redo all of our assignments and XSLTs to pick up the new version, making sure nothing changed in the flow and functionality of the components.
The change cost the same amount of time as the initial realization, because the work in BPEL is not in clicking together the scopes, but in building the logic in the assignments and XSLTs.

Lessons learned

We learned a number of things from this particular approach:

  1. Separation of concerns matters. This is a long-standing IT and computer science principle (see for example https://en.wikipedia.org/wiki/Separation_of_concerns). There is a reason you put different groups in the electrical wiring of your house: if one group fails, the other parts of the house are unaffected. When you have to make a change for a specific purpose, you should have to make that change in one place only; in the case of the canonical model, in the service bus and the components that call the service bus.
  2. Reuse has an impact on changeability and maintainability. To make sure things can still be changed, apply the tolerant reader pattern (https://martinfowler.com/bliki/TolerantReader.html); see the sketch after this list.
  3. Pick your reuse pattern carefully: a pattern (or template) that you want to reuse, a library that you will apply, a service that you want to call, or a copy that is quick to create and that you can change independently.
  4. Don't send data to systems that don't care about it. In this case we were sending data along in the BPEL that was not needed there. Typically all you need in the BPEL are identifiers and dates. Then, when you need more data in a specific scope, you can fetch it from the appropriate source.
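As a minimal sketch of the tolerant reader idea in Node.js (the message shape and field names are made up for the example): read only the fields you need and ignore the rest, so unrelated changes to the model cannot break your component.

  // Hypothetical canonical message; only identifiers and dates are extracted.
  function readPermitRequest(message) {
    return {
      permitId: message.permitId,
      applicantId: message.applicant && message.applicant.id,
      submittedOn: message.submittedOn,
      // everything else in the canonical model is deliberately ignored
    };
  }
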
We refactored the code, made specific WSDLs and XSDs for all BPELs, and used the canonical model in the OSB, where it belongs.
It became a lot easier to work with the messages in the BPEL, and changes in the UI had little or no impact on the process, as it should be!

Code smells

So how do you know whether you are reusing the canonical model too much? These are some telltale signs that point in that direction:

SOA Suite

  1. You have to redeploy 'MDS' every time a new feature is defined
  2. All your artefacts (WSDLs, XSDs, DVMs) are in MDS; there are no 'local composite' artefacts
  3. You have a lot of data in your messages in BPEL that appears in none of the assigns, or in only one of them
  4. You have a lot of merge conflicts with other teams that are working on completely different topics
  5. You have if statements in your BPEL, based on something in your canonical model, that never merge back into one common flow

Java, JavaScript

  1. Your ORM library gets updated all the time, and your code needs to change because of it
  2. You filter out values and fields in most of your code
  3. You are extending the common objects to add your own behavior and attributes, and filtering out the common behavior

Conclusion

Reuse is a powerful and important tool for being productive and avoiding mistakes. However, reuse is not a goal; it is a means to an end. Modern systems need to be changeable in the first place. Reuse should be applied within an appropriate context, and global models should be treated carefully and minimized to where they add value and are needed.
We can all learn from domain-driven design principles and microservices architectures in our other architectures as well, to make sure we don't paint ourselves into a corner!

Happy coding 😎

Sunday, December 10, 2017

Troubleshooting Oracle API Platform Cloud Service

One of the challenges when working in integration is troubleshooting. This becomes even more challenging when you start using a new product.

Recently I worked with Oracle Product Management (thank you, Darko and Lohit) to troubleshoot issues with the OAuth configuration of APIs in Oracle API Platform Cloud Service.

Setup

The setup was as follows:
  1. An API Gateway node deployed to Oracle Compute Cloud Classic as an infrastructure provider
  2. Oracle Identity Cloud Service in the role of OAuth provider
We set up an API with several policies, including OAuth for security. When we called the service, it gave us a '401 Unauthorized' error.
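
For reference, requesting a token from Oracle Identity Cloud Service with the client credentials grant looks roughly like this (host, client id, client secret and scope are placeholders for your own values):

  curl -u [client-id]:[client-secret] \
       -H 'Content-Type: application/x-www-form-urlencoded' \
       -d 'grant_type=client_credentials&scope=[scope]' \
       https://[idcs-host]/oauth2/v1/token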

Oracle API Platform Cloud Service troubleshooting

Oracle API Platform Cloud Service offers analytics for each API. You can navigate there by opening the API Platform Management Portal, clicking the API you want to troubleshoot, and clicking the Analytics tab (the bottom tab).

Click on Errors and Rejections, after setting the period you are interested in. Usually when you are troubleshooting, you would like to see the last hour.

Different type of analytics in an API

Now you can scroll down to the error distribution and see the errors that occurred. In this case, because I selected "Last Week", you see a number of different errors that occurred last week and how often. When you run your test again, you will see one of the errors in the distribution increase, giving you insight into the type of error.

Distribution of each error type

We tried different configurations and, as you can see from the distribution, the graph tells us that in one case the OAuth token was invalid and that in another case we had a bad JWT key. This meant we had to take a look at the configuration of the OAuth profile of the Oracle API Gateway node (see the documentation on how to configure Oracle Identity Cloud Service as OAuth provider).

OAuth token troubleshooting 

We had a token, but it appeared to be invalid. It is hard to troubleshoot security: what is wrong with our configuration? Why are we getting the errors that we get? When you successfully obtain an OAuth token, you can inspect it with the JWT debugger at jwt.io.
  1. Navigate to https://jwt.io
  2. Click on Debugger
  3. Paste the token in the window at the left hand side
JWT debugger with default token example

The debugger shows you the header, the payload and the signature.

Header
The algorithm that is used (for example RS256) and the token type (JWT).
Payload
The claims. Claims come in three types: registered, public and private. Registered claims include fields like iss (issuer, in this case https://identity.oraclecloud.com/), sub (subject) and aud (audience). For more information, see https://tools.ietf.org/html/rfc7519#section-4.1
Signature
The signature of the token, to make sure nobody tampered with it.
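
You can also inspect a token locally, without pasting it into a website; a minimal Node.js sketch (decoding only, no signature verification):

  // Paste your token here; a JWT is three base64url-encoded parts joined by dots.
  const token = 'eyJ...';
  const [headerB64, payloadB64] = token.split('.');
  // Node's base64 decoder also accepts the URL-safe alphabet used by JWTs.
  const header = JSON.parse(Buffer.from(headerB64, 'base64').toString('utf8'));
  const payload = JSON.parse(Buffer.from(payloadB64, 'base64').toString('utf8'));
  console.log(header.alg, payload.iss, payload.aud);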

Now you can compare that to what you have put in the configuration of Oracle Identity Cloud Service and the configuration of the Oracle API Gateway node.


Oracle API Platform Gateway Node troubleshooting

Apart from looking at the token and the analytics it can help to look at the log files on the gateway node. The gateway node is an Oracle WebLogic Server with some applications installed on it.

There are several log files you can access.
  1. apics/logs. In this directory you find the apics.log file. It contains stacktraces and other information that help you troubleshoot the API.
  2. apics/customlogs. If you configured a custom logging policy in your API, the log files are stored in this directory. You can log the content of objects that are passed through the API. See the documentation about using Groovy in your policies for the variables that you can use.
  3. 'Regular' managed server logs. If something goes wrong with the connection to the Derby database, or other infrastructure issues occur, you can find the information in the /servers/managedServer1/logs directory.

Summary

When troubleshooting APIs that you have configured in Oracle API Platform cloud service you can use the following tools:
  • jwt.io Debugger. This tool lets you inspect OAuth tokens generated by a provider.
  • Oracle API Platform Cloud Service Analytics. Shows the type of error.
  • Oracle API Platform logging policies you put on the API. Lets you log the content of objects. 
  • Log files in the API Gateway node:
    • {domain}/apics/logs for the logs of the gateway node. Contains stacktraces etc.
    • {domain}/apics/customlogs for any custom logs you entered in the api
    • {domain}/servers/managedServer1/trace for default.log of the managed server
 Happy coding!

Thursday, November 30, 2017

Sharing Applications within your team in VBCS, PCS and MCS

Oracle Visual Builder Cloud Service is a so-called 'low code' platform to build user interfaces in the cloud, without the need for an IDE or for installing and deploying to servers.

You can do this as a single user, but it can also be used in a team setting.

Unfortunately it is not very intuitive how you can share different applications within a team. By default, when you create an application, only the person who created it will see it.

This is different from other cloud products like MCS and PCS. In MCS you can see all mobile backends, APIs, etc. once you have the role 'team member'. In MAX, applications are visible to anyone with the role MobileEnvironment_Develop.

In PCS you can create spaces to allow sharing of applications.


Oracle Product                          Shared
Process Cloud Service (PCS)             Either private or shared in a space
Mobile Cloud Service (MCS)              Visible to users with the role team member
Mobile Application Accelerator (MAX)    Visible to users with the role MobileEnvironment_Develop
Visual Builder Cloud Service (VBCS)     Explicit assignment to users with the role team member

How to share your VBCS application

  • Go to the "Home" tab 
  • Flip the card of your application 
  • Enter the name of the team member (only existing team members show up)
  • Click + 


The application is now visible to the team members you added.

Conclusion 

The feature is not hard to understand, but hard to find. It is not well documented, and the concept is implemented differently in different (related) PaaS products.


Happy coding configuring 😉

PS: in the new documentation of VBCS I found a chapter describing this feature; it is improved faster than I can blog!

Monday, October 2, 2017

My First Blockchain in the Oracle Cloud

Being an integration person, I am very interested in blockchain architecture as an alternative to 'traditional' integration options like 'Enterprise SOA' and centralized Master Data Management.

This blog will not discuss the architecture of blockchain, but describes how you can get started as a developer with blockchain using Oracle Cloud.

It assumes you have an instance of Oracle Compute Cloud at your disposal and describes how to get started with MultiChain.

I executed the following steps:
  1. Create two instances of Oracle Compute Cloud
  2. Install MultiChain
  3. Create the first chain

Create two instances of Oracle Compute Cloud

To show the concept of the blockchain you need at least two nodes, so I created two instances of Oracle Compute Cloud Classic.

  1. Log in to your identity domain: cloud.oracle.com
  2. On the Dashboard, select "Create Instance" and pick "Compute Classic"
  3. This gives you an overview as shown in the figure below. First you need to save the private key to be able to access the instance with SSH.
    1. Type a name for the key
    2. Save it on your computer
  4. Select "Customize"
  5. Finish the Wizard
    1. Select the image "OL_7.2_UEKR4_x86_64"
    2. Select a general purpose shape, unless you are planning to build a real blockchain; in that case a High Memory shape is recommended
    3. Add the key you saved to the instance, or upload your own key
    4. For network leave the defaults
    5. Leave defaults in the storage tab 
  6. You can see the result in the orchestrations tab
Orchestration showing the instances that are created

In order to create a chain and have multiple nodes that are aware of each other, you need to make sure the instances can communicate.

To enable this, you need to add a Security Application and a Security Rule for the instances (many thanks to Simon Haslam, who helped me figure this out!):

  1. Click on the Network tab in my instances
  2. Select Shared Network
  3. Select Security Application
    1. Create a Security Application for the default port
      1. Name: TCP2671
      2. Protocol: TCP
      3. Port start: 2671
      4. Port end: 2671
      5. Description: default Multichain port
    2. Create a security application for the RPC port
      1. Name: TCP2670
      2. Protocol: TCP
      3. Port start: 2670
      4. Port end: 2670
      5. Description: default RPC multichain port
  4. Select Security Rules
  5. Click Add Security Rule
  6. Fill in the fields
    1. TCP2671 for security application
    2. Source: security ip list (public-internet)
    3. Destination: security list (default)
  7. Add another security rule for the default RPC port
    1. TCP2670 for security application
    2. Source: security ip list (public-internet)
    3. Destination: security list (default)

Now you are ready to create the second instance. Note that you don't have to set the security rules for new instances, because in this case they point to the default security list, to which new instances are added automatically. This is of course not something you would do in real life, but for My First Blockchain in the Oracle Cloud it works fine :)


Install MultiChain

You need to install MultiChain on all your instances. 

To connect you need an SSH client. I am currently working on a Windows 10 machine, so I use PuTTY. The key that is downloaded from the cloud can't be used directly in PuTTY; you need to import it.
  1. Open PuTTYgen
  2. Click import key
  3. Browse to the key you just saved to your computer
  4. Click on "Save Private Key"
  5. Open PuTTY
  6. Enter the details of the instance as shown below and save the session
    1. Put in the public IP address of the instance
    2. Point to the SSH private key in the Auth tab
  7. Open a session
  8. Install wget: sudo yum install wget -y
  9. Run the following in the terminal:

cd /tmp
wget https://www.multichain.com/download/multichain-1.0.1.tar.gz
tar -xvzf multichain-1.0.1.tar.gz
cd multichain-1.0.1
sudo mv multichaind multichain-cli multichain-util /usr/local/bin
Repeat this for the second instance and, if you are testing with more than two nodes, for all the instances you want to include.

Creating my first chain

Now you are ready to start the tutorial to get acquainted with MultiChain.

In order to keep track of the different servers, I have given the nodes different colors in my PuTTY sessions.

  1. Connect to node 1 and execute the commands that are referenced as 'server1' in the tutorial in this window
  2. Connect to node 2 and execute the commands that are referenced as 'server2' in the tutorial in this window
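For orientation, the first steps on server1 look roughly like this (following the MultiChain Getting Started tutorial; chain1 is the example chain name):

  multichain-util create chain1
  multichaind chain1 -daemon

The daemon prints the node address (chain1@[ip-address]:[port]) that the second node uses to connect.
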
Note that I need to open two PuTTY windows for server 2: one to start the node:

multichaind chain1@[ip-address]:[port]

And one to run the interactive client with multichain-cli chain1. If you don't start the node first, you will get an error:

MultiChain 1.0.1 RPC Client


Interactive mode
chain1: getinfo
error: couldn't connect to server
With my two windows open, the interactive client was working like a charm... 


Three windows testing MultiChain from my Windows machine

Happy coding!

Thursday, September 7, 2017

Oracle Mobile Cloud Service: team members accessing APIs

Oracle Mobile Cloud Service is set up around different personas. When you log in to MCS, you see a list of roles.


The Mobile App Developer is the developer who creates a mobile application and uses the APIs that are exposed in MCS. The Service Developer is the developer who creates the APIs, connectors, etc.

This distinction is very useful: it helps in making sure the documentation is targeted to the right people and the content is organized in a way that makes sense for these different personas.

Sometimes, however, it does not work as well. As I discussed in an earlier post, the command line tools to deploy and test services are hidden in software that is targeted at mobile app developers. In this post I want to discuss another use case that is apparently not that obvious: a service developer calling an API.

In our project we are creating APIs for several different mobile app developers. Before we publish an API and a mobile backend, we want to test that mobile backend. We secured the APIs with security roles, because we want to make sure that APIs for internal use are not accidentally exposed to external companies. To be able to test the API, we had to assign the correct role to the team member. This is when the challenge started.

Assigning roles to users in MCS
There are two ways of assigning roles to users for MCS. The first method is using MCS Mobile User Management, which can be accessed from the mobile portal UI:

Manage users and roles from the MCS portal

The second way is via the services dashboard, using the "Users" tab and clicking "manage roles" for a specific user:
Manage roles of a user using the "users tab" from the service dashboard

But before you can assign the role, you have to go to the user management part of the MCS and create a role in a realm. This is described in the documentation of MCS: "Set Up Mobile Users, Realms and Roles". It also describes naming conventions, for example for roles: 

"The naming convention for Oracle Cloud custom roles that represent MCS mobile user roles is: {serviceName}_MobileEnvironment_{rolename}. For example, for a role named “APIRole” in the environment with service name “poeo342ed” the custom role in Oracle Cloud would be poeo342ed_MobileEnvironment_APIRole." 

Then you can create a new user account for the tester and assign the newly created role to this user. (S)he then uses this account, after logging into MCS and changing the default password, to test the API from Postman, adding the username and password as Basic Authentication to the header of each call.
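
For a quick check outside Postman, such a call looks roughly like this (host, backend id, API path and credentials are placeholders for your own values; MCS expects the mobile backend id in a header alongside the Basic Authentication):

  curl -u username:password \
       -H 'Oracle-Mobile-Backend-Id: [your-backend-id]' \
       https://[your-mcs-host]/mobile/custom/[api-name]/[resource]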

This all works very nicely. But we wanted to assign this role to ourselves. However, the users that were set up as team members did not show up in the MCS User management list of users. 

So I navigated to my services dashboard again and clicked on the tab "custom roles". Sure enough the role I just created was there, so I assigned my own user to the role and added my username / password to the Postman API call.
The result: "Unauthorized"

Solution

Users should not only be assigned a role to access an API; they should also be part of the realm that is associated with the mobile backend. Unless, of course, you use anonymous access, which does not work when you are securing your APIs with roles.
In our example we did not explicitly create a realm, so the mobile backend was associated with the Default Realm. We created the new tester in the Default Realm, so this 'role' was automatically assigned to that user. The team members were not added to that realm and therefore did not show up in the list of users of the Default Realm.

We added the team members to the Default 'role', which corresponds to the realm in the mobile user management instance, and voilà! We can access the APIs :)

Role that represents the default realm in the mobile cloud service instance

Happy coding!