Sunday, December 10, 2017

Troubleshooting Oracle API Platform Cloud Service

One of the challenges when working in integration is troubleshooting. This becomes even more challenging when you start using a new product.

Recently I worked with Oracle Product management (Thank you Darko and Lohit) to troubleshoot issues with an OAuth configuration of APIs in Oracle API Platform Cloud Service.

Setup

The setup was as follows:
  1. An API Gateway node deployed to Oracle Compute Cloud Classic as an infrastructure provider
  2. Oracle Identity Management Cloud Service in the role of OAuth provider
We set up an API with several policies, including OAuth for security. When we called the service, it gave us a '401 Unauthorized' error.
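For reference, the call we made looked roughly like the sketch below (the hostname, path and token variable are placeholders, not the actual project values); a 401 at this point means the gateway rejected the request before it ever reached the backend service.

var https = require('https');

var options = {
  hostname: 'apigateway.example.com',          // placeholder for the gateway node host
  path: '/transport/products',                 // placeholder for the API endpoint
  method: 'GET',
  headers: {
    // the OAuth token obtained from Oracle Identity Cloud Service
    'Authorization': 'Bearer ' + process.env.OAUTH_TOKEN
  }
};

https.request(options, function (res) {
  console.log('HTTP status: ' + res.statusCode); // 401 Unauthorized in our case
}).end();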

Oracle API Platform Cloud Service troubleshooting

The Oracle API Platform Service offers analytics for each API. You can navigate there by opening the API Platform Management portal, click on the API you want to troubleshoot and click on the Analytics tab (this is the bottom tab).

Click on Errors and Rejections, after setting the period you are interested in. Usually when you are troubleshooting, you would like to see the last hour.

Different type of analytics in an API

Now you can scroll down to the error distribution and see the errors that occurred. In this case, because I selected "Last Week", you see a number of different errors that occurred last week and how often they occurred. When you run your test again, you will see one of the errors in the distribution increase, giving you insight into the type of error.

Distribution of each error type

We tried different configurations. As you can see from the distribution, the graph tells us that in one case the OAuth token was invalid and that in another case we had a bad JWT key. This meant we had to take a look at the configuration of the OAuth profile of the Oracle API Gateway Node (see the documentation on how to configure Oracle Identity Cloud Service as the OAuth provider).

OAuth token troubleshooting 

We had a token, but it appeared to be invalid. It is hard to troubleshoot security: what is wrong with our configuration? Why are we getting the errors that we get? When you successfully obtain an OAuth token, you can inspect it with the JSON Web Token (JWT) debugger on jwt.io.
  1. Navigate to https://jwt.io
  2. Click on Debugger
  3. Paste the token in the window at the left hand side
JWT debugger with default token example

The debugger shows you a header, the payload and the signature.

Header 
The algorithm that is used (for example HS256 or RS256) and the token type (JWT)
Payload 
Claims differ per type. There are three types: public, private and registered. A registered claim contains fields like iss (issuer, in this case https://identity.oraclecloud.com/), sub (subject), aud (audience) etc. For more information, see https://tools.ietf.org/html/rfc7519#section-4.1
Signature
The signature of the token, to make sure nobody tampered with it.

Now you can compare that to what you have put in the configuration of Oracle Identity Management Cloud and the configuration of the Oracle API Gateway Node.
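If you want to inspect a token without pasting it into a website, a minimal Node.js sketch like the one below shows the same header and payload fields (the token variable is a placeholder and the signature is not verified):

var token = process.env.OAUTH_TOKEN; // placeholder: the token you obtained from the OAuth provider

// a JWT consists of three base64url-encoded parts: header.payload.signature
var parts = token.split('.');
var header = JSON.parse(Buffer.from(parts[0], 'base64').toString('utf8'));
var payload = JSON.parse(Buffer.from(parts[1], 'base64').toString('utf8'));

console.log('algorithm:', header.alg);  // e.g. RS256
console.log('issuer   :', payload.iss); // e.g. https://identity.oraclecloud.com/
console.log('audience :', payload.aud);
// parts[2] is the signature; verifying it requires the public key of the provider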


Oracle API Platform Gateway Node troubleshooting

Apart from looking at the token and the analytics it can help to look at the log files on the gateway node. The gateway node is an Oracle WebLogic Server with some applications installed on it.

There are several log files you can access.
  1. apics/logs. In this directory you find the apics.log file. It contains stacktraces and other information that help you troubleshoot the API.
  2. apics/customlogs. If you configured a custom policy in your API, the logfiles will be stored in this directory. You can log the content of objects that are passed in this API. See the documentation about using Groovy in your policies for information about the variables that you can use. 
  3. 'Regular' Managed server logs. If something goes wrong with the connection to the Derby database, or other issues occur that have to do with the infrastructure, you can find the information in /servers/managedServer1/logs directory.

Summary

When troubleshooting APIs that you have configured in Oracle API Platform Cloud Service, you can use the following tools:
  • jwt.io Debugger. This tool lets you inspect OAuth tokens generated by a provider.
  • Oracle API Platform Cloud Service Analytics. Shows the type of error.
  • Oracle API Platform logging policies you put on the API. Lets you log the content of objects. 
  • Log files in the API Gateway node:
    • {domain}/apics/logs for the logs of the gateway node. Contains stack traces etc.
    • {domain}/apics/customlogs for any custom logs you entered in the api
    • {domain}/servers/managedServer1/trace for default.log of the managed server
 Happy coding!

Thursday, November 30, 2017

Sharing Applications within your team in VBCS, PCS and MCS

Oracle Visual Builder Cloud Service is a so-called 'low code' platform to build user interfaces in the cloud, without the need for an IDE or installing and deploying to servers.

This can be done by a single user, but this can also be used in a team setting.

Unfortunately it is not very intuitive how you can share different applications within a team. By default, when you create an application, only the person who created it will see it.

This is different from other cloud products like MCS and PCS. In MCS you can see all mobile backends, APIs etc. once you have the role 'team member'. In MAX, applications are visible to anyone with the role MobileEnvironment_Develop.

In PCS you can create spaces to allow sharing of applications.


Oracle Product                        Shared
Process Cloud Service (PCS)           Either private or shared in a space
Mobile Cloud Service (MCS)            To users with the role team member
Mobile Accelerator (MAX)              Users that have the role MobileEnvironment_Develop
Visual Builder Cloud Service (VBCS)   Explicit assignment to users with the role team member

How to share your VBCS application

  • Go to the "Home" tab 
  • Flip the card of your application 
  • Enter the name of the team member (only existing team members show up) 
  • Click + 


The application is now visible to the team members you added.

Conclusion 

The feature is not hard to understand but hard to find. It is not well documented and the concept is implemented differently in different (related) PaaS products.


Happy coding configuring 😉

PS: in the new documentation of VBCS I found a chapter describing this feature; it is improved faster than I can blog! 

Monday, October 2, 2017

My First Blockchain in the Oracle Cloud

Being an integration person I am very interested in Blockchain architecture as an alternative for 'traditional' integration options like 'Enterprise SOA' and Centralized Master Data Management.

This blog will not discuss the architecture of blockchain, but describes how you can get started as a developer with blockchain using Oracle Cloud.

It assumes you have an instance of Oracle Compute Cloud at your disposal and describes how to get started with MultiChain.

I executed the following steps:
  1. Create two instances of Oracle Compute Cloud
  2. Install MultiChain
  3. Create the first chain

Create two instances of Oracle Compute Cloud

To show the concept of the blockchain you need to create at least two nodes. In this case I created two instances of Oracle Compute Cloud Classic. 

  1. Login to your identity domain: cloud.oracle.com
  2. On the Dashboard, select "Create Instance" and pick "Compute Classic"
  3. This gives you an overview as shown in the figure below. First you need to save the private key to be able to access the instance with ssh. 
    1. Type a name for the key
    2. Save it on your computer
  4. Select "Customize"
  5. Finish the Wizard
    1. Select the image "OL_7.2_UEKR4_x86_64"
    2. Select a general purpose shape, unless you are planning to build a real blockchain, then it is recommended to create a High Memory Shape
    3. Add the key you saved to the instance, or upload your own key
    4. For network leave the defaults
    5. Leave defaults in the storage tab 
  6. You can see the result in the orchestrations tab
Orchestration showing the instances that are created

In order to create a chain and have multiple nodes that are aware of each other, you need to make sure the instances can communicate.

To enable this, you need to add a Security Application and a Security Rule for the instances. (Many thanks to Simon Haslam who helped me figure this out!) :

  1. Click on the Network tab in my instances
  2. Select Shared Network
  3. Select Security Application
    1. Create a security application for the default MultiChain port
      1. Name: TCP2671
      2. Protocol: TCP
      3. Port start: 2671
      4. Port end: 2671
      5. Description: default Multichain port
    2. Create a security application for the RPC port
      1. Name: TCP2670
      2. Protocol: TCP
      3. Port start: 2670
      4. Port end: 2670
      5. Description: default RPC multichain port
  4. Select Security Rules
  5. Click Add Security Rule
  6. Fill in the fields
    1. TCP2671 for security application
    2. Source: security ip list (public-internet)
    3. Destination: security list (default)
  7. Add another security rule for the default RPC port
    1. TCP2670 for security application
    2. Source: security ip list (public-internet)
    3. Destination: security list (default)




Now you are ready to create the second instance. Note that you don't have to set the security rules for new instances, because they are pointing to the default security list in this case. The new instances are added to that automatically. This is of course not something you would do in real life, but for My First Blockchain in the Oracle Cloud it works fine :) 


Install MultiChain

You need to install MultiChain on all your instances. 

To connect you need an SSH client. I am currently working on a Windows 10 machine, so I use PuTTY. The key that is downloaded from the cloud can't be used directly in PuTTY; you need to import it first. 
  1. Open PuTTYgen
  2. Click import key
  3. Browse to the key you just saved to your computer
  4. Click on "Save Private Key"
  5. Open PuTTY
  6. Enter the details of the instance as shown below and save the session
    1. Put in the public IP address of the instance
    2. Point to the SSH private key in the Auth tab
  7. Open a session
  8. Install wget: sudo yum install wget -y
  9. Run the following in the terminal:

cd /tmp
wget https://www.multichain.com/download/multichain-1.0.1.tar.gz
tar -xvzf multichain-1.0.1.tar.gz
cd multichain-1.0.1
sudo mv multichaind multichain-cli multichain-util /usr/local/bin
Repeat this for the second instance and, if you are testing with more than two nodes, for all the instances you want to include.

Creating my first chain

Now you are ready to start the tutorial to get acquainted with MultiChain.

In order to keep track of the different servers, I have given the nodes different colors in my PuTTY sessions.

  1. Connect to node 1 and execute the commands that are referenced as 'server1' in the tutorial in this window
  2. Connect to node 2 and execute the commands that are referenced as 'server2' in the tutorial in this window
Note that I needed to open two PuTTY windows for server 2: one to start the node:

multichaind chain1@[ip-address]:[port]

And one to start the multichain-cli chain1 interaction. If you don't start the node, you will get an error:

MultiChain 1.0.1 RPC Client


Interactive mode
chain1: getinfo
error: couldn't connect to server
With my two windows open, the interactive client was working like a charm... 
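As an aside, instead of the interactive multichain-cli you can also call the node's JSON-RPC interface directly on the RPC port we opened earlier. The sketch below is not part of the original tutorial; the rpcuser and rpcpassword values come from ~/.multichain/chain1/multichain.conf on the node, and by default the RPC interface only accepts local connections.

var http = require('http');

var body = JSON.stringify({ method: 'getinfo', params: [], id: 1 });

var req = http.request({
  hostname: '127.0.0.1',                     // run this on the node itself
  port: 2670,                                // the RPC port from the security application
  method: 'POST',
  auth: 'multichainrpc:YOUR_RPC_PASSWORD',   // placeholder credentials from multichain.conf
  headers: { 'Content-Type': 'application/json' }
}, function (res) {
  var data = '';
  res.on('data', function (chunk) { data += chunk; });
  res.on('end', function () { console.log(data); }); // same information as 'getinfo' in the CLI
});

req.write(body);
req.end();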


Three windows testing MultiChain from my windows machine

Happy coding!

Thursday, September 7, 2017

Oracle Mobile Cloud Service: team members accessing APIs

Oracle Mobile Cloud Service is set up around different personas. When you log in to MCS you see a list of roles.


The Mobile App developer is the developer that is creating a mobile application and is using APIs that are exposed in MCS. The Service developer is the developer that creates the APIs, connectors etc.

This distinction is very useful: it helps in making sure the documentation is targeted to the right people and the content is organized in a way that makes sense for these different personas.

Sometimes, however, it does not work as well. As I discussed in an earlier post, the command line tools to deploy and test services are hidden in software that is targeted at mobile app developers. In this post I want to discuss another use case that is apparently not that obvious: a service developer calling an API.

In our project we are creating APIs for several different mobile app developers. Before we publish an API and a mobile backend we want to test this mobile backend. We secured the APIs with security roles, because we want to make sure that APIs for internal use are not accidentally exposed to external companies. To be able to test the API, we had to assign the correct role to the team member. This is when the challenge started.

Assigning roles to users in MCS
There are two ways of assigning roles to users for MCS. The first method is using MCS Mobile User Management, which can be accessed from the mobile portal UI:

Manage users and roles from the MCS portal

The second way is using the services dashboard, using the "Users" tab and clicking "manage roles" for a specific user:
Manage roles of a user using the "users tab" from the service dashboard

But before you can assign the role, you have to go to the user management part of the MCS and create a role in a realm. This is described in the documentation of MCS: "Set Up Mobile Users, Realms and Roles". It also describes naming conventions, for example for roles: 

"The naming convention for Oracle Cloud custom roles that represent MCS mobile user roles is: {serviceName}_MobileEnvironment_{rolename}. For example, for a role named “APIRole” in the environment with service name “poeo342ed” the custom role in Oracle Cloud would be poeo342ed_MobileEnvironment_APIRole." 

Then you can create a new user account for the tester and assign the newly created role to this user. (S)he then uses this account, after logging into MCS and changing the default password, to test the API from Postman, adding the username and password as Basic Authentication to the header of each call.
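In a Node.js script the equivalent of that Postman call would look roughly like this (host, API path and credentials are placeholders; the mobile backend id can be copied from the backend's settings page):

var https = require('https');

var options = {
  hostname: 'mymcsinstance.mobileenv.example.com',   // placeholder MCS host
  path: '/mobile/custom/sales/orders',               // placeholder custom API resource
  method: 'GET',
  auth: 'apitester:Welcome1',                        // the mobile user created in the realm
  headers: {
    'Oracle-Mobile-Backend-Id': 'YOUR_MOBILE_BACKEND_ID'
  }
};

https.request(options, function (res) {
  // 401 means the user is missing the role or is not part of the realm (see below)
  console.log('HTTP status: ' + res.statusCode);
}).end();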

This all works very nicely. But we wanted to assign this role to ourselves. However, the users that were set up as team members did not show up in the MCS User management list of users. 

So I navigated to my services dashboard again and clicked on the tab "custom roles". Sure enough the role I just created was there, so I assigned my own user to the role and added my username / password to the Postman API call.
The result: "Unauthorized"

Solution

Users should not only be assigned a role to access an API, they should also be part of the realm that is associated with the mobile backend. Unless, of course, you use anonymous access, which does not work when you are securing your APIs with roles.
In our example we did not explicitly create a realm, so the mobile backend was associated with the Default Realm. We created the new tester in the Default Realm, so this 'role' was automatically assigned to this user. The team members were not added to that realm and therefore did not show up in the list of users of the default realm.

We added the team members to the Default 'role', which corresponds to the realm in the mobile user management instance and voila! We can access the APIs :)

Role that represents the default realm in the mobile cloud service instance

Happy coding!

Sunday, July 23, 2017

MCS implementation: How to deploy your custom code with additional libraries

When implementing APIs that you have defined in Oracle Mobile Cloud Service (MCS), you don't want to reinvent the wheel.  That is why it is important to know how to deploy other libraries with your custom code implementation.

The custom code service that you use when implementing your APIs is backed by the following libraries:

  • Node
  • Request
  • Express
  • Bluebird
  • Custom code SDK
The internal code of MCS uses these libraries. These libraries (with the exception of the '.then' construct from bluebird) are not available for you in your custom code, unless you add them to your custom code zip file. So let's look at an example I ran into during the project, using bluebird.

Adding bluebird

A lot of the Platform APIs that you call from your custom code are executed asynchronously. These methods return a so-called 'promise'. A common use case is to chain the calls using a '.then' clause; in that case you don't need to add bluebird to your project. However, in our project we needed to join multiple asynchronous calls, not just chain them.

An example: 

var Promise = require('bluebird');

var productIds = req.oracleMobile.database.getAll([table_name], {fields: 'id'}, httpOptions);
var orders = req.oracleMobile.database.getAll([another_table_name], {fields: 'id,name,productId'}, httpOptions);

// join waits for both asynchronous calls to finish before the handler runs
Promise.join(productIds, orders).then(
            function (result) {
                //your code here
            },
            function (error) {
                //your error code here
            });

Since this uses not just ".then" but also ".join", you need to require the bluebird library in your JavaScript file (var Promise = require('bluebird');).

According to the documentation you should add bluebird to your zip file; however, in our case it worked by just adding the require statement. There was no need to add the bluebird library to the zip.

Adding other libraries

In our current project a custom implementation depends on a barcode generator library. You can add this to the custom code zip file that is uploaded to MCS. Note that these additional modules are not shared across custom code modules and you can't install a module that depends on a binary (executable file) on the server.
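As an illustration, here is a sketch of how such a barcode module is used from the custom code (based on the bwip-js README; the barcode type and values are examples, not our actual implementation):

var bwipjs = require('bwip-js');

bwipjs.toBuffer({
  bcid: 'code128',        // barcode type
  text: '0123456789',     // value to encode
  scale: 3,
  height: 10,
  includetext: true
}, function (err, png) {
  if (err) {
    // handle the error, for example: res.status(500).send(err.toString());
  } else {
    // png is a Buffer containing the generated barcode image
  }
});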

To use the modules in your code you have to add them to the dependency list in the package.json file and run npm install.


Note: when you download the scaffolding code, MCS automatically creates a package.json file. This contains the version "1.0". This results in an error message, because npm uses semantic versioning.
Running npm install on this code will result in the following errors:

npm WARN Invalid version: "1.0"
npm WARN sales No description
npm WARN sales No repository field.
npm WARN sales No README data
npm WARN sales No license field.

You can solve this by changing the version to 1.0.0. See for more details: http://semver.org/

In the rest of this blog, I assume you have set up the MCS custom code testing tools.
In that case you need to take the following steps to deploy the custom code with the additional libraries to MCS:

  1. Edit package.json
  2. Run npm install
  3. Test the code locally with mcs-ccc and mcs-test or postman (or cURL)
  4. Deploy the code with mcs-deploy

1. Edit Package.json 

In our case the package.json looks as follows:
{
  "name" : "sales",
  "version" : "1.0.0",
  "description" : "The API that facilitates ordering tickets based on a barcode.",
  "main" : "sales.js",
  "dependencies": {
    "bwip-js": "1.4.2"
  },
  "oracleMobile" : {
    "dependencies" : {
      "apis" : { },
      "connectors" : { }
    }
  }
}

2. Run npm install

Navigate to the root directory of your custom code API and install the npm modules you need in your project. In this case the module name is bwip-js. 
npm install bwip-js
npm install

3. Test the code locally

Make sure the code is working as expected before uploading it to MCS. This is described in a previous post.

4. Deploy the code to MCS

You can run the deployment as usual, from the MCS testing tools: 
mcs-deploy toolsConfig.json --verbose

Monday, July 17, 2017

Testing, packaging and deploying custom code using MCS custom code test tools

In a previous post I described how to set up your MCS custom code test tools. In this post I will describe how to test, package and deploy your custom code using these tools. You should have installed the MCS custom code tools and updated the toolsConfig.json file with the correct URL, mobile backend id and OAuth data.

Test your code

Once you have implemented your custom code, you want to test it. Of course you can test it by uploading the implementation into MCS. However, it is much better to test it locally and make sure it works, before you upload it to MCS. Since your custom code probably uses MCS platform APIs, it is convenient to use the mcs-ccc as a local 'container'. Note that when you run the test, it will call the platform APIs in your instance in MCS, so if you insert data in the database, it ends up in the cloud, even though you are running the code locally!

Components of the MCS testing tools and communication with MCS instance

You can either run the tests that are defined in the toolsConfig.json or you can run tests in Postman or cUrl.

When you want to run the test from Postman or cURL, simply point to localhost:4000 instead of the MCS path. Don't forget that the mcs-ccc runs on http, not https.

The correct value for the port can be found in the console output when you start mcs-ccc in verbose mode.
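For example, a request against the local container could look like the sketch below (API path and credentials are placeholders; note that it is plain http on port 4000):

var http = require('http');

http.get({
  hostname: 'localhost',
  port: 4000,                              // port reported when mcs-ccc starts
  path: '/mobile/custom/sales/products',   // placeholder custom API resource
  auth: 'apitester:Welcome1'               // same credentials you would use against MCS
}, function (res) {
  var data = '';
  res.on('data', function (chunk) { data += chunk; });
  res.on('end', function () { console.log(res.statusCode, data); });
});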

Instead of running tests from Postman or cURL, you can also run tests that are automatically generated in toolsConfig.json:
  1. Run npm install
  2. Run mcs-ccc toolsConfig.json --verbose
  3. Open a separate command line
  4. Run mcs-test <path to toolsConfig.json> <testname> --verbose. In this case: mcs-test toolsConfig.json getProductsprices --verbose
  5. This returns the response in the command line
If you have a template parameter in your call, you have to hard code this in toolsConfig.json and run the test (you can spot these by looking for "PARAMETER").

The advantage of this approach is that the tests are automatically generated. The disadvantages are that none of the results are validated and, last but not least, that you have to hard code the parameters. For that reason we usually run real system tests in Postman. The testing tool can help the developer in the beginning, to make sure the code that is uploaded will run. It is more for 'smoke testing' than for real testing of your code.

Package and deploy your code

Once you are ready to package and deploy your code to MCS, you can use mcs-deploy:

  1. Navigate to the package
  2. Run  mcs-deploy toolsConfig.json --verbose
  3. Enter the username and password when prompted
You should see the implementation in the MCS API implementation page.
sales 1.0 added as default implementation after running mcs-deploy

Sunday, July 16, 2017

Set up your MCS (development) environment: database creation policies

As mentioned in a previous post (Setup your MCS development environment: MCS custom code test tools), MCS is a cloud native platform that offers several platform APIs. One of these APIs is the database API.

It consists of two parts: 
  1. Database access. This is used by mobile applications and can only be executed from within custom code
  2. Database management. This allows you to create tables, remove tables etc. 
The first question you might ask is 'why on earth would I want to create tables in MCS, don't we have database cloud service and other sources for that?!?' 

Let me start by saying I agree with that. However, in this project we are starting with simple APIs and we want to make them available as quickly as possible. The data will (eventually) move to the proper back-end system and in MCS we will use a connector to access this data, which will be exposed on the Oracle Service Bus. However, at the moment the system does not contain the data and the Oracle Service Bus is not exposing services for this particular system yet. 

To save cost, minimize complexity and maximize time to market, we decided to use the database platform API. 

You can create database tables on the fly, using the Database_CreateTablesPolicy environment setting. This causes a table or a column to be added or resized when you insert a row from custom code and the table or column does not exist yet.
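A minimal sketch of such an implicit call from custom code (the API path, table and column names are made up): when the policy allows it, the insert below creates the table and its columns on the fly if they do not exist yet.

module.exports = function (service) {
  service.post('/mobile/custom/sales/products', function (req, res) {
    // inserting a row into a table that does not exist yet creates it implicitly
    req.oracleMobile.database.insert('Products', {
      name: 'Concert ticket',
      price: '25.00'
    }).then(
      function (result) {
        res.status(result.statusCode).send(result.result);
      },
      function (error) {
        res.status(500).send(error);
      }
    );
  });
};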

According to the documentation the following values can be used: 
  • allow: enables calls from custom code that perform implicit operations; 
  • explicitOnly: disables implicit calls from custom code;
  • implicitOnly: only implicit creation of database tables using custom code is allowed, the database management api can't be used;
  • none: curtails implicit calls from custom code.
This documentation is a bit unclear so let me elaborate on that:

Value          Use API in custom code    Use implicit calls in custom code
allow          yes                       yes
implicitOnly   no                        yes
explicitOnly   yes                       no
none           no                        no

These values are used to control the privileges for custom code; they do not control calling the database management API from outside of MCS (Postman, cURL etc.).

There are several disadvantages to this approach:
  1. You can accidentally end up with multiple columns because of spelling errors ('address' and 'adress' for example);
  2. When unit testing custom code with 'faulty' data, instead of failing with the error you would get in your production environment, you create new columns and the test fails with a different error (if it fails at all). This is why Oracle recommends switching implicit creation off in production;
  3. We use the environment that we are working in as a production environment. 
We decided to use "explicitOnly" and use the REST APIs to create, update and remove tables with Postman. However when I used one of the APIs I got the following response:


{
    "type": "http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1",
    "status": 403,
    "title": "Forbidden",
    "detail": "403 - Forbidden",
    "o:ecid": "005L4J3QBg63j4C_nDWByZ0000x^0000yX, 0:3",
    "o:errorCode": "MOBILE-15229",
    "o:errorPath": "/mobile/system/databaseManagement/tables"
}

When creating a database table from outside the custom code (using the REST API and Postman, for example), you need to call it with a user that has the role Mobile_DbMgmt. Unfortunately, there is no easy way to check your role from inside MCS, as can be seen in this picture:

No easy navigation to inspect or change your role from within MCS
So, I opened a new tab, navigated to cloud.oracle.com and signed in again. This brought me to my services dashboard, which offers the opportunity to manage users using the "Users" button in the upper right-hand corner.
Click users and find yourself. Check your roles and add [environment name] Mobile Database management to the roles.

Now I was able to create the table 😊.
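For completeness, this is roughly what the explicit call looks like from Node.js instead of Postman (the host, credentials and the exact shape of the request body are assumptions based on the database management API documentation; the path is the one from the 403 response above):

var https = require('https');

var table = JSON.stringify({
  name: 'Products',
  columns: [
    { name: 'name',  type: 'string', size: 100 },
    { name: 'price', type: 'decimal' }
  ]
});

var req = https.request({
  hostname: 'mymcsinstance.mobileenv.example.com',   // placeholder MCS host
  path: '/mobile/system/databaseManagement/tables',  // path from the 403 response above
  method: 'POST',
  auth: 'dbadmin:Welcome1',                          // a user with the Mobile_DbMgmt role
  headers: {
    'Content-Type': 'application/json',
    'Oracle-Mobile-Backend-Id': 'YOUR_MOBILE_BACKEND_ID'
  }
}, function (res) {
  console.log('HTTP status: ' + res.statusCode);
});

req.write(table);
req.end();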

NB: According to the documentation the default value is 'allow'. In our instance the value was set to 'explicitOnly', so make sure you check the value when you use the database platform API. You can check and change the policy as follows:

  1. Login to MCS
  2. Click on "Administration"
  3. Scroll to the bottom of the page
  4. Click Export and save the policy file.
  5. Edit the policy (if needed)
  6. Upload policies.properties
The setting is applied instantly.

Thursday, July 13, 2017

Setup your MCS development environment: MCS custom code test tools

Oracle Mobile Cloud Service is a so-called 'cloud native' product in the Oracle PaaS offering: it runs in the Oracle Public Cloud and there is no on-premises variant in the Oracle Fusion Middleware stack.

In our current MCS project we have set up a number of things to make sure that we can achieve the same quality in our software development lifecycle (sdlc) as we have in our 'traditional' projects. One of the measures is the ability to run unit tests locally before deploying the code.

MCS is a cloud service that is based on node.js. It offers a number of platform APIs that you can use when creating custom APIs for mobile developers. The figure below shows these platform APIs: storage APIs to store files, Database APIs to store relational data, notifications etc. When you write a custom API, you use the platform APIs and you call connectors in the implementation.
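To give an idea of what that looks like, here is a bare-bones sketch of a custom API implementation (the API path, table and field names are made up) that uses the database platform API:

module.exports = function (service) {
  // GET returns the rows of a (made-up) Products table via the database platform API
  service.get('/mobile/custom/productapi/products', function (req, res) {
    req.oracleMobile.database.getAll('Products', { fields: 'id,name,price' }).then(
      function (result) {
        res.status(result.statusCode).send(result.result);
      },
      function (error) {
        res.status(500).send(error);
      }
    );
  });
};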

When you use a tool like NetBeans (or any other JavaScript tool), these platform APIs are not available, which makes it hard to test your code without installing it on MCS. To solve this problem, MCS offers the MCS custom code test tools.

Architecture of MCS (from the documentation)

MCS custom code test tools

It took me a while to discover the MCS testing tools, because I am a so-called 'service developer'. The testing tools are located in the SDK, which is targeted at the role 'mobile app developer': people who use, not build, the custom APIs. (Hint @Oracle: please make the MCS tools available in the implementation page for the API.)

Prerequisites
  • local version of node.js (6.10) and npm 
  • An account on Oracle Mobile Cloud Service (MCS) with role 'developer'
  • An API scaffolding to test the setup
  • A development tool of choice (in this case Netbeans)

The following steps need to be taken to go through the entire lifecycle: 
  1. Download the MCS custom code test tools
  2. Install the MCS custom code test tool on your machine using npm
  3. Setup your mobile backend for testing
  4. Set the custom API up for testing
  5. Run it locally (yay!! 😊) with the MCS custom code test tools

Download the MCS custom code test tool

1. Navigate to the MCS download page.

NB: don't use the download link on the applications page; it will bring you to a download page without the mcs-tools folder!

Don't click on this link!!

2. Select any SDK and download it (every SDK contains the MCS custom code test tools; unfortunately there is no separate download for service developers, hint @Oracle).
3. Extract the file to a location of choice. Navigate to the mcs-tools folder and extract this folder into the location of choice. Note that the mcs-tools folder contains a nested mcs-tools folder; this is the folder you need. 

SDK files


mcs-tools directory containing mcs-tools directory


Install the MCS testing tool

  1. Open a terminal session and navigate to the deepest mcs-tools directory
  2. Run npm to install the tool on your machine
  3. Test the installation; in my case it returns 17.2.5
cd {sdk path}/mcs-tools/mcs-tools
npm install -g
mcs-test --version

Setup your mobile backend for testing

  1. Create a new API that will act as a proxy for your local tests by adding the OracleMobileAPI.raml in the "Create API" dialog. 
  2. Add the implementation by uploading the "OracleMobileAPIImpl.zip" to the implementation of the API.
  3. Switch off "Login required" in the security tab. 
  4. Add the API to the mobile backend. 

Setup your custom API for local testing

  1. navigate to the root folder of your local API implementation (or the scaffolding if you did not create an implementation yet)
  2. run npm install
  3. update the 'toolsConfig.json' file and enter the mobile backend id, the anonymous key and the OAuth data that are used to deploy to MCS. They can be found in the mobile backend settings page. For obvious reasons, I don't show the details here 😉

Run it locally

1. Open a terminal
2. Run the mcs-ccc with the correct options (you can also use --debug to debug in Chrome)

mcs-ccc toolsConfig.json --verbose

The result is:
C:\Users\ldikmans\Documents\api-straat\product\productapi> mcs-ccc .\toolsConfig.json
Warning: Configuration property "proxy" is undefined
To display help and examples associated with warnings, use the --verbose option

Ping OracleMobileAPI to verify that OracleMobileAPI-uri and authorization are correct.
OracleMobileAPI ping succeeded!
The Node server is listening at port 4000

The downside of this approach is that you have added a 'foreign' API to your mobile backend. You can handle this in two ways, depending on how many environments you have:
  1. Remove the OracleMobileAPI from the mobile backend as soon as you publish it
  2. Remove the OracleMobileAPI from the mobile backend in your test or production instance. 

In the next blog I will describe how you can test, package and install your custom code into MCS, using 'mcs-tools'. 

Happy coding 😊