Red Mavericks articles

ADF Applications – Download DVT Charts in ADF 12c (and other technologies)

Hi all and welcome to a new article on Red Mavericks!

As we mentioned in past articles, ADF 12.2.1 comes with several fixes for customer-reported and internal (unpublished) bugs, along with new features that can be useful when developing applications. Recently we worked with the ADF Data Visualization Components and faced a big problem: there is no easy way to export a rendered DVT chart!
After doing some research in the Oracle community and blogs, we found that for ADF 11 there was one way: link

Unfortunately, the DVT class hierarchy was changed in ADF 12c, and the previous method can no longer be used. We did not find a way of using ADF classes to export the charts. The only suggestion was to use <af:printablePageBehavior> to print the page. However, this would not fulfill the client's needs, so we had to try other approaches.

With all these facts in consideration, we chose to use JavaScript to export the charts generated by ADF. To clarify any misleading concepts: ADF charts are rendered as Scalable Vector Graphics (SVG).

This article presents a generic way to export and download Scalable Vector Graphics as an image using JavaScript. In other words, the method presented can be used with ADF but also with other application development technologies.

JavaScript libraries

In order to implement this solution, the following JS libraries are required: canvg.js, plus its dependencies RGBColor.js and StackBlur.js.

We tried other JS libraries (dom-to-image and html2canvas) to help us export the ADF charts and faced several problems, mainly because of browser incompatibilities. canvg.js was the one we used to put the pieces together and generate the chart image export (RGBColor.js and StackBlur.js are dependencies of canvg.js). You can find more information about these libraries here.

Briefly, the steps to export the DVT chart are:

  1. Get the SVG component
  2. Convert the component into a canvas element
  3. Export the canvas as an image

Importing JavaScript libraries into JSF

To include the JS scripts into your ADF application, copy the JS files to your application, for example:

Then insert the following lines into your main .JSF file:

<af:resource type="javascript" source="/resources/js/exportCharts/RGBColor.js"/>
<af:resource type="javascript" source="/resources/js/exportCharts/StackBlur.js"/>
<af:resource type="javascript" source="/resources/js/exportCharts/canvg.js"/>

Adjust the paths to the directory where you placed the JS scripts; in this case it was /resources/js/exportCharts.
Be careful: the JS import order matters! RGBColor and StackBlur must be imported before canvg!

Drawing the ADF DVT chart

For this article, we will focus on a dvt:barChart example. However, the solution presented here was tested with dvt:barChart, dvt:pieChart and dvt:lineChart (with both "polar" and "cartesian" coordinateSystem values).

<af:panelGroupLayout id="pgGroupChart"
                     clientComponent="true"
                     layout="vertical">
    <dvt:barChart orientation="vertical"
                  id="barChart1"
                  var="row"
                  value="#{bindings.EmployeesVO.collectionModel}"
                  styleClass="AFStretchWidth">
        <dvt:chartLegend rendered="true" id="cl1"/>
        <f:facet name="dataStamp">
            <dvt:chartDataItem id="di1"
                               series="#{row.series}"
                               group="#{row.group}"
                               value="#{row.value}"/>
        </f:facet>
    </dvt:barChart>
</af:panelGroupLayout>

Note that we surrounded the ADF DVT chart with a panelGroupLayout so we can grab this element for the export. To be able to get this element via JavaScript, it is important to set its clientComponent property to true.

Obtaining ADF Component ID for JavaScript

At runtime, ADF generates a unique ID for every HTML component, based on the HTML hierarchy and the given component id (the result, e.g. something like pt1:r1:0:pgGroupChart, is known as the client component id). It is difficult to know this id in advance because, for example, when we reuse page fragments in the same ADF page, ADF auto-generates the ids to keep them distinguishable across the different instances of the fragments.

With this in mind, we cannot simply get the element with a fixed ID in JavaScript. Instead, we need to provide the component client ID as an input parameter to the JavaScript code.
For this you need to:

    1. Add the following methods to your managed bean (imports shown for clarity; AdfUtils is our own utility class for looking up components):

import javax.faces.context.FacesContext;
import oracle.adf.view.rich.component.rich.layout.RichPanelGroupLayout;

RichPanelGroupLayout chartGroup;

public RichPanelGroupLayout getChartGroup() {
    if (chartGroup == null) {
        //AdfUtils.findComponent is our own helper for finding a component by id
        chartGroup = (RichPanelGroupLayout) AdfUtils.findComponent("pgGroupChart");
    }
    return chartGroup;
}

public String getClientChartGroupId() {
    FacesContext ctx = FacesContext.getCurrentInstance();
    return this.getChartGroup().getClientId(ctx);
}

This code allows us to get the client id from the panelGroupLayout that surrounds the ADF DVT we want to export.

    2. In the jsf/jsff page:

<af:button text="EXPORT_LABEL"
           partialSubmit="true"
           clientComponent="true"
           icon="/resources/images/submit.png"
           id="btExport">
    <af:clientListener method="exportChart" type="action"/>
    <af:clientAttribute name="componentClientId" value="#{attrs.manageBean.clientChartGroupId}"/>
</af:button>

Then we add a button to the jsf/jsff page that will be used to export the chart. Notice that the <af:clientListener> contains the name of the JavaScript method that will be called when the user activates the export button. Another important element is the <af:clientAttribute>, which allows passing the panelGroupLayout client id to the JavaScript code. In the next section, we will explain how to access this attribute in JS.

    3. JavaScript code:
<af:resource type="javascript">
function exportChart(actionEvent){
  var actionComp = actionEvent.getSource();
  var componentClientId = actionComp.getProperty("componentClientId"); 
  var b1ComponentRef = document.getElementById(componentClientId);

Next, add the JS script to your page. Here you can see the body of the function named exportChart, which receives the actionEvent. The actionEvent carries the <af:clientAttribute> previously declared in the page code, and lines 3 and 4 show how to get its value. Also take into account that the string componentClientId passed as a parameter to the getProperty function must be exactly the same string as the name attribute of the <af:clientAttribute>.

The last line of code is used to get the element in the HTML page using the given client id parameter.

For more information about this read the “Pattern For Obtaining ADF Component ID for JavaScript” – link.

Using the canvg library

In order to use canvg, we need to:

    1. Get the SVG element we want to export
var svgElements = b1ComponentRef.getElementsByTagName("svg");

//In our case we only have one SVG inside this panelGroupLayout
if(svgElements != null && svgElements.length > 0) {
       var currentSVG = svgElements[0];

    2. Convert the SVG HTML element into an XML string
        1. Replace the SVG namespace if it exists – some browsers throw an error when canvg is used with this namespace
        2. IMPORTANT: The SVG width and height cannot be percentage values. If they are, you need to remove them from your HTML or replace them with fixed values. Without this change, there will be problems with IE and some DVT charts, such as <dvt:lineChart> with coordinateSystem="polar"

           //convert the SVG into an XML string
           var xml = (new XMLSerializer()).serializeToString(currentSVG);

           //remove the namespace, as IE throws an error
           xml = xml.replace(/xmlns=\"http:\/\/www\.w3\.org\/2000\/svg\"/, '');

           //replace the width and height percentages with fixed values
           xml = xml.replace('width="100%"', 'width="1350"');
           xml = xml.replace('height="100%"', 'height="300"');

    3. Create a temporary variable with a new canvas element

           var canvas = document.createElement("canvas");

    4. Call the canvg function with your temporary canvas and the XML containing your SVG element. Note that you can see any error thrown inside the canvg library if you uncomment the alert messages in the catch block. This helped us find and debug errors that occurred. Alternatively, you can write the errors to the console

           //draw the SVG onto the canvas
           try {
                canvg(canvas, xml);
           }
           catch(err) {
                //debug
                //alert(err.message);
                //alert(err.stack);
           }
    

Download chart image

The last step is to download the generated image. Here, the code for IE is different from the code for the Chrome and Firefox browsers. Also, Firefox requires some parameters that Chrome sets by default.

The code presented here was tested on these 3 browsers.

     if (canvas.msToBlob) { //for IE
         var blob = canvas.msToBlob();
         window.navigator.msSaveBlob(blob, 'content.png');

      } else {
         var dataUrl = canvas.toDataURL("image/png");
         var downloadImage = dataUrl.replace(/^data:image\/[^;]*/, 'data:application/octet-stream');

         var link = document.createElement('a');
         document.body.appendChild(link); //required in FF, optional for Chrome
         link.setAttribute('download', 'content.png');
         link.setAttribute('href', downloadImage);
         link.target = "_self"; //required in FF, optional for Chrome
         link.click();
      }
};
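
For reference, here is the complete listener assembled from the pieces above (a consolidated sketch; the fixed 1350x300 width/height values come from our example and should be adjusted to your chart):

function exportChart(actionEvent) {
    //get the panelGroupLayout client id passed via af:clientAttribute
    var actionComp = actionEvent.getSource();
    var componentClientId = actionComp.getProperty("componentClientId");
    var b1ComponentRef = document.getElementById(componentClientId);

    //1. Get the SVG rendered by the DVT chart
    var svgElements = b1ComponentRef.getElementsByTagName("svg");
    if (svgElements != null && svgElements.length > 0) {
        var currentSVG = svgElements[0];

        //2. Serialize the SVG to XML and sanitize it for canvg
        var xml = (new XMLSerializer()).serializeToString(currentSVG);
        xml = xml.replace(/xmlns=\"http:\/\/www\.w3\.org\/2000\/svg\"/, '');
        xml = xml.replace('width="100%"', 'width="1350"');
        xml = xml.replace('height="100%"', 'height="300"');

        //3. Render the XML onto a temporary canvas
        var canvas = document.createElement("canvas");
        try {
            canvg(canvas, xml);
        } catch (err) {
            //alert(err.message); //uncomment to debug
        }

        //4. Download the canvas content as a PNG
        if (canvas.msToBlob) { //IE
            window.navigator.msSaveBlob(canvas.msToBlob(), 'content.png');
        } else { //Chrome, Firefox
            var downloadImage = canvas.toDataURL("image/png")
                                      .replace(/^data:image\/[^;]*/, 'data:application/octet-stream');
            var link = document.createElement('a');
            document.body.appendChild(link); //required in FF
            link.setAttribute('download', 'content.png');
            link.setAttribute('href', downloadImage);
            link.target = "_self";
            link.click();
        }
    }
}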

Conclusion

The solution presented here is based on JS code and its compatibility with ADF. ADF 12c offers no easy way to export a rendered DVT chart, so we showed the solution that fulfilled our client's needs. To export ADF DVT charts we used a JavaScript library called canvg because, after several tests with different JS libraries, we found that canvg provided the best compatibility across browsers.

Take into consideration that the JS code can be used with other web technologies, not only ADF applications. I hope this post helps you and speeds you past any problems you may face in this process.

Keep checking out Red Mavericks for additional tips on Oracle Middleware technology.

Cheers,
Pedro Curto

JDeveloper – Increase Productivity Using Shortcuts

Hi all and welcome to a new article on Red Mavericks!

Integrated Development Environments (IDEs) are useful tools to maximize developer productivity. This article presents how to configure new shortcuts in JDeveloper, along with some very useful advanced shortcuts available by default that can speed up the development process.

Introduction and Configuring Shortcuts

One important way to maximize programmer productivity is the use of shortcuts. We all know basic shortcuts like "CTRL+X" (cut) or "CTRL+C" (copy) and we use them every day, as they save a lot of time. So, essentially, shortcuts help you work more effectively and speed up almost everything you do.

On the other hand, long lists of keyboard shortcuts can quickly become overwhelming if you're just getting started. The key to learning and memorizing shortcuts is to practice every day with a small but growing set.

To help you, I will explain how to see the list of shortcut keys in JDeveloper and also how to configure your own.

First, to see the list of shortcut keys available in your JDeveloper, select "Tools->Preferences…", which opens the "Preferences" window of your environment. In that window, select "Shortcut Keys" on the left. A list with all the available shortcuts is presented.

 

After a quick inspection of that list, you'll notice that there are a lot of actions without an associated shortcut key. Sometimes you will find it very useful to add shortcuts to some of those actions. For example, in my project I use the "Generate Accessors" action quite often, especially when I create a new Java class, and my JDeveloper didn't have any shortcut associated with that action. To add a shortcut, simply search for the action you want, in my case "Generate Accessors", and add the desired key(s) in the "New Shortcut" input.

Notice that a shortcut key can only have one action associated with it; otherwise a conflict will arise and the new shortcut key will be ignored.

 

 

Useful Advanced Shortcuts in JDeveloper

 

CTRL + SHIFT + C – Copy the file path
  1. Select the file in the Projects tab
  2. Press CTRL+SHIFT+C to copy the file path
  3. Open the explorer and simply paste the path with CTRL+V

ALT + HOME – Switch the Projects tab content to the Application/Project/Package of the open file
  Press ALT+HOME in any open file to automatically change the Projects tab content to the location of that file. This will give you information about the Application, Project and Package that contain the currently open file.

CTRL + Minus – Go to File
  Press CTRL+Minus and type a (partial) file or class name to search for and open it anywhere in your application.

CTRL + SPACE + SPACE – Auto-complete with a new variable declaration
  Press CTRL+SPACE twice after an expression and the editor completes it with a new variable declaration.

 

Conclusion

After reading this article, you know how to configure your own shortcuts in JDeveloper, plus four useful advanced shortcuts for the development process. For example, you can easily search for and open a class in your application using only a partial name (CTRL + Minus), locate the currently open class in the Projects tab (ALT + HOME) and, if you wish, find the path to that class on your file system (CTRL+SHIFT+C).

I hope you enjoyed this article. If you know other important shortcuts, share them with us in the comments.

Keep checking out Red Mavericks for additional tips on Oracle Middleware technology.

Cheers,
Pedro Curto

Post image by Pedro Curto

Designing our Oracle JET Application – Creating our tables in the Oracle Database Cloud Service

Welcome to the 4th article on our not-so-new Blog series about Oracle JET and Oracle Cloud.

Today we’ll take our Logical Data Model design from the last article and create the necessary tables directly on our cloud database instance.

Let’s start the party!

The SQL Developer data model generation tools

Launch SQL Developer and import the design you worked on in the previous article.

Import the Data Model into Oracle SQL Developer

Choose your Data Model Design File

So here’s our Logical Data Model, in all its splendor:

The Logical Data Model

Now we need to create the Relational Model based on our Logical Model. Luckily for us, this means pressing a button in SQL Developer. When you open your Logical Model, a specific option called "Engineer to Relational Model" appears, allowing you to produce the Relational Model directly. And the good thing is that there's also an "Engineer to Logical Model" button, which means you can make your own small adjustments and changes in the Relational Model and reflect those back into the Logical Model.

Find the SQL Developer Data Modeler Logical Model toolbar and click on the “Engineer to Relational Model” option.

The Engineer to Relational Model option

Ooops… what’s this?

Warning Engineering to Relational Model – No Relational Model created

You need to create the model first (or open one, if you already have one created). Go to your Data Modeler browser, find the Relational Models folder and right-click it to create a new relational model.

Right-click to create a new relational model

Now you can try again to engineer your Logical Model. This time, an Engineering dialog appears. Make sure all the checkboxes on the Logical side are selected so that everything in the Logical Model is transformed into Relational Model artifacts. There are a few options that you can use to fine-tune this process, but we'll leave everything at the defaults. Click on Engineer. Et voilà! Your Relational Model is now generated.

The Engineer to Relational Model dialog

Here’s our application Relational Data model. Although it’s closer to what we’re going to have on our application (now we talk about Tables and Columns and Primary Keys), it’s still not the actual data model implementation.

The Relational Model

The final step to get our model running inside the cloud database is to produce the actual Physical Model and run it, which is to say, generate the DDL and execute it inside our cloud database. The DDL generation is also achieved with the press of a button, based on our Relational Model.

There's a specific option to generate the DDL from the relational model

As the DDL code is specific to a given RDBMS, this option asks which specific version of the DB engine we are using and which relational model we want to generate it from. Notice that the current SQL Developer version also features DDL generation for DB engines other than Oracle.

DDL generation tool features multiple DB engines

As we chose Oracle DB 12c as our cloud database engine, we'll pick that option and click on "Generate". SQL Developer will then show you the DDL generation options specific to that DB engine. Naturally, Oracle DB DDL generation has more options than the other engines. For the purpose of this exercise, we'll just use the default settings and click OK.

The DDL Generation Options dialog

The DDL was generated, but with some errors

Our DDL generation produced some errors, but we can't see them. SQL Developer says we should check the Design Rules for more details, so that's what we'll do. Press "Generate" again to get back to the DDL generation dialog and click on "Design Rules", in the lower left corner, to check for the errors.

The Design Rules option

A Design Rules dialog shows up, where you can apply the rules and check for problems. Doing so shows a few warnings and a lot of errors, but only one seems to stand out: the Evaluation.Value column datatype is unknown.

Checking the Design Rules validations to find the problem with the DDL generation

We seem to have forgotten to set the Evaluation.Value datatype. It should be a domain value list, so let's set it, as we did before. The best way is to correct it all the way up in the Logical Model, where the problem really is. The good thing is that correcting it is easy, and propagating that correction all the way down to the physical level is also quite easy: just generate an updated Relational Model and then re-run the DDL generation tool.

An error message still shows up, but it's just about comments associated with the tables, so we can ignore it. Just save the data model generation code that SQL Developer produced. We'll now apply it to our cloud database.

Executing our DDL in our Cloud Database

Do you remember the user we created in the second article of the series, OJETBLOG?

We’ll use that to execute our DDL. First, create a DB connection using OJETBLOG to our cloud database and connect to it. Then, open the DDL file you have previously saved and select the OJETBLOG connection from the connection list. Finally, execute the DDL script and commit those changes. Your data model should now be fully deployed in the cloud.

Creating a connection for the OJETBLOG db user

Execute the DDL script with the OJETBLOG user

So if you do a simple

SELECT * FROM EVALUATION;

you can see that the table is already created.

Get data from the recently created table

All the tables created from the relational and logical models, now fully deployed in the cloud

And that’s it! Your data model is fully deployed and running in your Oracle DB Cloud Service.

Wrap up

The data model is done and implemented. Next time, we’ll start building our application, from top (UI/JET) to bottom (NodeJS). Stay tuned.

José Rodrigues, a.k.a. Maverick

Post header image by: nigelpepper


Designing our Oracle JET application – The Data Model

Welcome to the 3rd article on our new Blog series about Oracle JET and Oracle Cloud.

Today we’ll start designing our application, starting with its Data Model. For that, well be focusing on Oracle’s SQL Developer Data Modeler as our tool and design the application’s underlying data model.

So, without further ado, let’s dig right in.

Data Modeling Workflow

Let’s start our SQL Developer and go right into the Data Modeler.

Open up the model browser and save the existing design with a proper, understandable name. I chose “OJetBlog-DataModel”.

Accessing the Data Modeler Browser

Save the design to give it a proper name

Once you have done this, you can start working on your Logical Model. As you know, there are several models that represent your data, from the most high-level one (not bound to the RDBMS) down to the Physical Model, which is totally dependent on the RDBMS.

For our exercise, we’ll model our application in our Logical Model, pass it through to the Relational Model, and the Physical Model, through the generations of specific DDL for our Oracle Cloud Database. Any changes that we need to make in our database will be performed at the Logical level and then, using the SQL Developer tools, passed through our workflow and finalized in a DDL that will be executed on our DB. Keeping this workflow ensures coherence in your designs and a properly documented and maintained DB.

Creating our Logical Data Model

So, using the Data Modeler browser, locate the Logical Model, right-click it and then choose “Show”. This will show your Logical Model diagram, which should be a blank canvas by now.

Your Logical Data Model – right now, still a blank canvas

The image above highlights the tools available to create your logical data model. You can create entities, views and relationships, place notes and images, etc. For this exercise, I'll create our entities and their corresponding relationships.

First, let’s think about what entities should be involved in our application and their relationships in terms of cardinality. As a quick summary of our project’s goal, we’ll be able to create an Evaluation of a Worker’s performance during a Project Sprint. A project will have a lot of sprints, and each sprint will have a series of evaluations, one per each worker. Of course, workers will perform several sprints also, sometimes even overlapping (working in two distinct sprints at once, on different projects). So, we’ll have:

  • Entities
    • Worker – one of our company’s employees involved in a given sprint.
    • Project – one of our company’s projects.
    • Sprint – a project sprint with a given start and end date that roughly equates to a set of work items being performed by several workers within a project scope.
    • Evaluation – an evaluation of a given worker on a given sprint.
  • Relationships
    • Worker (N) – Sprint (M) – One sprint relates to the collection of project work done by several workers in a given period. On the other hand, a worker necessarily works on more than one sprint. So this equates to an N:M relationship between these two entities. As you know, N:M relationships are typically bad and we should try to avoid them. I try to resolve these issues immediately in the Logical Model; I think tackling them as soon as possible has a lot of advantages. So, I'll create a relationship entity to transform this N:M relationship into two 1:N relationships. To see more on this, check out this article on database normal forms, specifically the 3NF.
    • Project(1) – Sprint (N) – Each project will have multiple sprints, but each sprint belongs to only one project. A simple, classic 1:N relationship.
    • Worker (1) – Evaluation(N) – Each worker will have multiple evaluations, but each evaluation will refer to only one worker.
    • Sprint(1) – Evaluation(N) – Each sprint will have multiple evaluations, but each evaluation will refer to only one sprint.

So, let’s create our entity. Select the appropriate icon in the toolbar, and then draw the entity in the canvas. You’ll be presented with the Entity Properties dialog.

Create a new Entity – Icon in the toolbar

Create a new Entity – Entity Properties dialog

From here, you can specify your entities to the tiniest detail, but we’ll focus on the first two sections: General and Attributes.

In the General Section, I’ll just specify the name of the Entity, in this case, Project. Then I’ll go into the Attributes section and specify the attributes of a project, as seen in the dialog below.

Create a new Entity – Project Entity Attributes – Notice the creation and editing toolbar (highlighted)

Attributes can be created using the toolbar. In this case, I created the project ID, project name, and customer name. If this were to be used for anything else, I would probably make Customer an ID referencing a Customer entity, but for our exercise this is enough. Please notice that, in the case of the ID, I checked the "Primary UID" checkbox, which sets this attribute as the primary identifier of this entity. When you check it, it automatically checks the "Mandatory" checkbox as well. So this takes care of the primary keys.

But when you have relationships between entities, you should also have foreign keys. A bit more on that later, when we create our relationships.

Go ahead and create the Project entity and the Sprint entity as well, using something like these attribute definitions:

Create a new Entity – Using Domains

Most of the attributes have a logical data type. Logical data types are the ones not associated with a specific business domain. You can think of them as the usual types, such as Numeric, Boolean, Date, String (VarChar), etc. As you can see, the first three attributes have logical data types (Numeric and Date), but the fourth one doesn't: it's a domain-based attribute, which means it has a structure specific to a business context. We want our Sprint Status to take only a given set of possible values, and we achieve this by setting a specific domain type.

Specifying Domain Types

To manage your domain types, go to the Domain Administration Tool. In there, you can create your own domain types, as I did for the Sprint Status. Let me show you how.

Manage your Domains using the Domains Administration Tool

Select the option to add a new domain type, give it the name you want and then specify which logical data type it will map to. In our case, we chose the VARCHAR data type, with the Char unit and a size of 20, which is enough to hold a sprint status.

Manage your Domains – The Sprint Status properties

Once the logical type is set, we’ll define the list of all possible values by going to the Value List option. In there we add our Sprint statuses, as shown in the next image.

Manage your Domains – The Sprint Status list of all possible values

And that’s all there is to it. Ah… don’t forget to save it. Your domain type is created and ready to be used when defining your attributes.

Entity Relationships

Entities have relationships between them, as I mentioned before. We even characterized them so now it’s time to create them in our diagram.

Creating relationships is very simple. You select the type of relationship you want to create between two entities by clicking on the respective icon in the toolbar. Then you click on the source entity and then on the target entity, and the Relation Properties dialog appears for you to specify its details.

Let’s take the example of the Project and the Sprint entities. Create a relationship between the Project and the Sprint entities.

Creating a 1-N relationship between Project and Sprint

The Relation Properties dialog appears and you can fine-tune your relationship. I just click OK. 🙂

Creating a 1-N relationship – The Relation Properties dialog

You’ll notice that there’s a new attribute in the Sprint entity called ID1. This corresponds to a Foreign Key to the Project Entity and is automatically added when you create a 1:N relationship on the N side.

I go to the Sprint properties dialog (double-click on the entity in the diagram), then Attributes, and double-click on the ID1 attribute in the list of attributes. In the newly opened dialog, I then change the name of the attribute to ProjectID, as it refers to the Project ID in the Project entity.

Creating a 1-N relationship – The Foreign Key attribute

And this is how you create your relationships. Time to create our complete Logical Model.

The full Logical Model

So now you only need to create your Logical Model using the instructions I explained earlier. For the sake of time, let me just take my ready-made pie out of the oven, which is to say, show you my complete Logical Model. Here it is in all its glory. I use the Bachman notation, but SQL Developer also supports the Barker and Information Engineering notations. Try them out and see which one is better for you. This article on Wikipedia is an excellent starting point to understand the different notations.

Full Logical Data Model – Bachman Notation

Full Logical Data Model – Barker Notation

From these two diagrams, you can understand the entities, the relationships and the respective attributes involved. The only thing missing is the list of values associated with the Project_Role domain type that you can see in the Worker Sprint entity. Here it is:

Project Role Domain

 

 

Now all you have to do is create this model yourself.

I’m pretty sure it can be improved, as I only have some basic database modeling knowledge. Feel free to place suggestions in the comments

Wrap up

Our Database model is designed. Next time, we’ll go through the Relational and Physical Models and put the actual database artifacts in our Cloud database. Stay tuned!

P.S: next week there will be no article, as I’ll be enjoying some Carnival holidays! 🙂

José Rodrigues, a.k.a. Maverick

 

Post image by Jon Olav Eikenes


Using the Oracle Cloud with our Oracle JET applications

Welcome to the 2nd article on our new Blog series about Oracle JET and Oracle Cloud.

Today we’ll be working on two main subjects:

  • Set up our Oracle Cloud account, as well as the cloud services we need (Database and Storage; the Application Runtime Cloud Service will be configured in another article, and maybe a few others along the way…)
    • If you already have an account and have configured the necessary services, you can skip this.
  • Configure our Oracle SQL Developer to connect to our cloud database.
  • Create a test table in the cloud.

Without further ado, let’s dig into the cloud.

Setup your cloud account

The first thing you need to do is to create your Oracle Cloud account.

You can go to Oracle Cloud’s homepage at https://cloud.oracle.com and either buy services outright or opt for a trial. In our case, we’ll go for the trial option. Click that green “Try for Free” button.

The Oracle Cloud homepage

You’ll be taken to the trial page, where you can create your free account (use the “Create a Free Account” button). This page also has some estimations on how much time will the free 300 USD cloud credits grant you. From my personal experience, those estimations are way off the mark. The real numbers are much lower.

Trial page

Anyway, once you click the "Create a Free Account" button, the site takes you to a sign-up form, where you'll fill in your information and supply a mobile number for verification. Please take into account that you must provide a real mobile number, as Oracle will send you a confirmation code that you'll need to enter on the sign-up form. Also, pay attention to the Default Data Region: you should select the data center that is closest to you, to improve overall performance.

The sign-up form

The verification step

Once the account is verified, you'll be able to add a payment method. As explained, you will not be charged anything, but it's a required step. Just enter your credit card information and billing data and then accept the terms and conditions.

Payment and Terms

And you’re good to go (you’ll be as soon as the account is prepared, which can take up to 15 minutes, but usually takes 2). Just wait for the confirmation e-mail, which will also give you your Oracle Cloud access details (credentials).

Mail coming from Oracle – Ready to start

Now your account is up and running. Time to add a few cloud services and start spending those free 300 USD. Click on the "Get Started with Oracle Cloud" button, and it will take you to the sign-in screen. Fill it in with the access details supplied in the e-mail. Don't worry… it will require you to change the password immediately.

Oracle Cloud Login Screen – Place the credentials supplied in the e-mail that was sent to you

New password screen – Respect the password rules shown on the screen

You are then taken to the “My Services Guided Tour” and, for the sake of simplicity, just click on the Dashboard icon (either on the side menu or the top bar). You’re ready to start creating new cloud service instances.

Accessing the Cloud Dashboard

The My Services Dashboard – Where it all begins

As this information will be needed later, take note of the Identity Domain associated with your account. In my example, the domain is "ojetblog". Take note of what it is in your particular case.

Creating a new Cloud Service

Good! Your Oracle Cloud account is all set up and ready to be used. So now it’s time to create new services.

Creating a Storage Classic cloud service

As Storage Classic is something that most/all services will need, let’s just go ahead and create it. Click on the “Create Instance” option in the Dashboard and then choose the “Create” button in the “Storage Classic” area.

Create a new cloud service, in this case, the Storage service

When you try to create a Storage service, it will ask you about the “georeplication” policy associated with that service. Select one from the available options. For real-life scenarios, please read through the guidelines for selecting a replication policy and make your choice according to what’s best suited for you.

Select a "georeplication" policy

Your Storage cloud service is now ready to be used.

Going back to the My Services Dashboard, you'll notice that the Storage service you just created doesn't appear on it. That's because some cloud services are not automatically shown on the Dashboard. You can make it show up by using the "Customize Dashboard" option and selecting the ones you want to see. Locate the Storage Classic service and click on Show. Et voilà… the Storage service now shows on your Dashboard.

Customize your dashboard to show the services you want

The Storage Classic cloud service now showing up on the Dashboard

Creating a Database Cloud Service

Once the Storage Classic service is set up, let’s try to create the Database Cloud Service. Once again, click on the Create Instance option on the Dashboard and then create a Database service.

Create an instance of Database cloud service

The system now presents some QuickStarts (a few weeks ago, it didn’t) but, for the sake of education, we’ll go the hard way and choose the Custom option.

Database service Quickstart screen – Choose the Custom option

The Database Cloud Service set up wizard

The custom option takes you to a 3-step wizard, in which you'll supply the information necessary to create your very own database in the cloud. The first step is where you define the basic characteristics of your database: which version, edition, and type (single instance, RAC), as well as the name and the region where it should be created. For the region, try to choose the same one you chose for the georeplication of the Storage service.

Creating a Database Cloud Service – Step 1 – Defining the basic characteristics of the service

After this, we go into more detail and specify the characteristics of the machine (shape) that will hold our database, as well as the backup and recovery configurations. Fill in the information in step 2, taking a few things into account:

  • Supply an SSH Public Key – This step allows me to create a new SSH key and download it. I always choose this option (instead of providing an existing one of my own)
  • Cloud Storage Container – This field will hold the Storage container URL, which has the following syntax:
    • <Storage Cloud Service REST Endpoint>\<container name>
    • The way to get the Storage Cloud Service REST Endpoint is to go to your My Services Dashboard, click on the Storage service and, in Overview->Additional Information, you'll find the REST Endpoint. Just copy it and use it. The container name can be anything you like; I typically use dbcsbackup.
  • Create Cloud Storage Container – I check this option to let the system automatically create a storage container for my backups
  • Backup and Recovery Configuration
    • Username and Password – Use your Oracle Cloud account credentials (the ones you use to log in to your My Services Dashboard)
Creating a Database Cloud Service – Step 2 – Defining the shape and backup options

You can click on Next, and it will show your configuration and ask for confirmation to create the database. Click on Create and you're done.

Database service confirmation step – All that you need to do now is press Create

You’ll be taken to the Database Services list where you can see your service being created. It will take more than a few minutes (around 30 minutes).

Database service creation process running – it takes a while

Database service started and running

You now have your Database running in the cloud. How cool is that?!

We’ll skip setting up the other cloud services for now, as we’ll focus on the database in these first parts.

Onwards to connect our SQL Developer to our recently created Database Cloud Service.

Connecting the Oracle SQL Developer to our Database in the cloud

Begin by launching your Oracle SQL Developer. Once it's launched, locate the Connections window and create a new connection.

Create a new connection in SQL Developer

Just fill in the form for creating a new connection with the following:

  • Connection Name – Anything you like. I used “Oracle JET DB Cloud”
  • Username – Any username of that database. For the sake of simplicity, and because we haven't created any specific users yet, I just used SYS.
  • Password – Remember filling in the Database Administration Password in the second step of configuring your Database Cloud Service? That’s the password you put here.
  • Save password (optional) – Just so I don't have to type the password every time I connect to the database.
  • Connection Color (optional) – I like to color code my Cloud connections in blue, but it’s just a personal choice. Do as you please.
  • Connection Type – Basic
  • Role – If you choose SYS as your username be sure to select the SYSDBA role.
  • Hostname – This is the public address of your Cloud Database. You can check it by accessing the detailed information of your Database cloud instance.
Accessing the Database Cloud Service Detailed information

Database service detailed information – Check the Public IP and place it in the SQL Developer Connection dialog

  • Port – Just leave the default 1521 (if you didn’t change it when configuring the database cloud service)
  • SID – Remember filling in the DB Name in the second step of configuring your Database Cloud Service? That’s what you put here.

 

Creating a connection to our cloud Database

Now Save it and Test it.

Error connecting to our cloud database

Oops… why can’t we connect to our cloud database? All configurations seem correct so… what’s wrong?

Tha’s because the database cloud instance has very strict access rules by default. I’ll check them out and see if I can understand what’s the matter. I’ll access the context menu of my Database cloud service and select the Access Rules option.

Check the database access rules

As you can see, most of the ports are blocked; in particular, the DB listener. Go to the "Actions" menu of ora_p2_dblistener and enable it. Notice that the icon changes and the red cross is no longer there.

Enabling the access to the DB listener

A few seconds later, the DB listener port is open. You can now Test the connection back in SQL Developer and … Success!

Success connecting to our cloud database

You’re now all set to use Oracle’s SQL Developer to create your artifacts inside your Cloud Database. Let’s do a small test to ensure that this really works.

Creating our first table in the cloud

The first thing we need to do is create a user and its respective schema, so that we can then create a table inside it. Let's create a new SQL Worksheet and select the connection we created to the cloud database.

Create a new session with a SQL Worksheet – Either press Alt+F10 or click on the New Worksheet icon

As you may know, since 12c (and the concept of Container and Pluggable Databases) there are two types of "normal" users (forgetting the admins, sys, etc.):

  • Common users – users created at the Container Database (CDB) level.
    • These users are recognized inside the CDB and all its current and future PDBs.
    • Generally used for administrative purposes, such as managing the PDB’s.
    • These usernames must also start with C##.
    • As a general rule of thumb, don’t create these types of users unless you understand their full implications.
  • Local users – users created at the Pluggable Database (PDB) level.
    • Your database users in the sense we all know from 11g and before.
    • These are the users that we’ll use (pardon the pun) in our applications

An interesting and more complete description of what Common and Local users are is found here: https://dbasolved.com/2013/06/29/common-user-vs-local-user-12c-edition/

So we need to create our own pluggable database user. I called it OJETBLOG. First, make sure you're working at the PDB level. When the cloud database was created, I set the PDB name to PDB1; you can see that is the name inside my container database.

Identify the PDB name – In this case PDB1

Once you have the PDB name, set it as the context in which you'll perform the next actions, using the ALTER SESSION command.

Then, proceed with the creation of the User itself, grant some privileges to it and then create the table in the respective schema.

The commands to do this are as follows. Select the entire text and press Alt+Enter to execute, or click on the Play button:

ALTER SESSION SET CONTAINER=PDB1; -- Set the working context to PDB1
CREATE USER OJETBLOG IDENTIFIED BY <password>; -- Creates the user with the supplied password
GRANT CONNECT,RESOURCE,CREATE TABLE TO OJETBLOG; -- Set some basic privileges to that user
CREATE TABLE OJETBLOG.TESTTABLE (OJETBLOGID NUMBER(5) NOT NULL, OJETBLOGDESC VARCHAR2(15) NOT NULL); -- Create the Table
COMMIT;
Create the user and the table

The output should read something like this:

Session altered.
User OJETBLOG created.
Grant succeeded.
Table OJETBLOG.TESTTABLE created.
Commit complete.

You can check that the user has been properly created and that the table has also been created in the appropriate schema.

Verification – User and Table successfully created

Wrap up

We’re finally set to start working on our application. Next time, we’ll start modeling and implementing our Database, using the excellent Data Modeler. Stay tuned!

José Rodrigues, a.k.a. Maverick

 


Getting Started With Oracle JET and Oracle Cloud

Hi everyone,

We’ll kick off this year with a new blog series in Red Mavericks, devoted to a more pure development thread with Oracle development tools (broadly speaking).

I was most impressed with Oracle’s own JET MOOC, which provided with a nice introduction of the toolkit, and allowed me to clean up those spider webs from my programming background and returned to the good old keyboard bashing routine. This posed a significant difference from what I’ve been doing in the last 10 years, which were mainly filled with Workflow and BPM projects.

The Oracle JET MOOC also helped me return to a language I only grasped some 20 years ago… JavaScript. And since JavaScript is all the rage nowadays, it was the perfect excuse to (re)learn it using today’s programming patterns.

Finally, the Oracle JET MOOC ended with a very important message: give something back to the community and help others! So it seemed only fitting that I would take some of my time to set up something that could help others who, like me, are not (or are no longer) into programming, and particularly into JavaScript. So this new blog series is born.

My goal is to publish a new article every 2 weeks, but if I manage to get a bit more time I’ll try to reduce it to a week’s interval.

The Oracle JET

Oracle JET is a toolkit released by (surprise…) Oracle, which addresses the need to build Enterprise applications in JavaScript. Its main focus is the frontend, with backend services being used mainly via REST web services.

As mentioned several times by Oracle, JET is not a framework, but rather a toolkit: a collection of frameworks that have been put together, tested and enhanced to develop and deliver high-quality enterprise applications. So JET is not a direct substitute for AngularJS or React; it uses its own components, such as jQuery and KnockoutJS, to address the same needs as those two JavaScript frameworks.

Because the target is to build enterprise applications, JET incorporates thoroughly tested components that have been on the market for quite some time and are mature. This contrasts with the "JavaScript framework flavor of the week" approach, in which people adopt the newest framework because it's the best thing since the invention of the wheel. Don't get me wrong: going for the newest, coolest stuff can be great. There are loads of applications in which using one of these newer frameworks can greatly reduce your work and deliver very good results. But, within organizations, there are several things to consider when choosing frameworks, such as how many people with that kind of knowledge are available in the market (typically very few for recent frameworks), what kind of community support is available, how the frameworks work with each other to accelerate development (and what side effects or bugs arise), etc. Organizations go for stability and maturity above all, because that ensures quality and reduces risk.

How to set things up

There are a few things you need to set up in order to start. We’ll guide you through each one of these steps. If you already have some of these items installed and setup, you can skip those steps. Here’s a list of everything you need to prepare:

  • Install Node.JS (version 5+)
  • Install the Oracle JET CLI (Command Line Interface)

Install Node.JS

The first thing you need to do is to install Node.JS. You need this to have the NPM installer, which is needed to install the Oracle JET CLI.

Go to nodejs.org download page, choose your OS and the LTS version, and download the respective pre-built installer. Then, run the installer, as shown in the images (these refer to the Windows version. Other OS may differ slightly).

Download Node from the Nodejs.org site

 

Run the Node Installer

 

Review and accept the License Agreement

 

Choose the installation folder

 

Just leave the default options to be installed (installs everything)

 

Proceed with the installation

 

…Installation in progress…

 

…and the installation is completed!

After the installation is completed, you can verify that everything is working according to plan by running the following command in a command-line window (Command Prompt or PowerShell on Windows, Terminal on macOS or Linux) to check the current version of Node.js on your system:

prompt> node -v
v8.9.4

You can also check the Node Package Manager (NPM), which is what will guarantee the installation of Oracle JET and all its dependencies.

prompt> npm -v 
5.6.0

Done! On to the next part.

Install Oracle JET CLI (Command Line Interface)

We now have everything we need to install a very special tool that was made available with Oracle JET 4.0.0: the CLI. This tool allows you to create and manage your Oracle JET projects, and helps you through the entire development process. I could explain all the details, but it's better if I just show you, hands-on, how to use it and its benefits.

To install the Oracle JET CLI just run this on your command line or terminal:

 

prompt> npm -g install @oracle/ojet-cli

If you are behind a corporate proxy, you may need to set additional configurations to npm, so that it can fetch and install the necessary packages. These are:

prompt> npm config set proxy http://proxy.company.com:8080
prompt> npm config set https-proxy http://proxy.company.com:8080

After the Oracle JET CLI is installed, you can use the ojet command, which we will use to create, build and run our Oracle JET projects. Try it out by typing ojet help. The outcome should be something like this:

prompt> ojet help
Oracle JET CLI
Synopsis:

ojet <command> [<scope>] [<parameter(s)>] [<options>]

Available commands:

add ........................... Adds platforms, plugins and more to a JET app

build ......................... Builds a JET app

clean ......................... Cleans build output from a JET app

create ........................ Creates a new JET app, custom theme, or component

help .......................... Displays command line help
Commands: [add|build|clean|configure|create|list|remove|restore|serve|strip]

list .......................... Lists platforms, plugins and more within a JET app

remove ........................ Removes platforms, plugins and more from a JET app

restore ....................... Restores missing dependencies, plugins, and libraries to a JET app

serve ......................... Serves a JET app to an emulator, device or the browser

strip ......................... Strips all non source files from a JET app

Detailed help:

ojet help <command> [<scope>]
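
As a quick taste of how we'll use it in the upcoming articles, creating and serving a brand-new JET application boils down to a couple of commands (a sketch; the app name and the navdrawer starter template are just examples):

prompt> ojet create agiletoolbox --template=navdrawer
prompt> cd agiletoolbox
prompt> ojet serve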

You now have Oracle JET properly installed on your system! But before we go and get our hands dirty, there’s still one piece of the puzzle left… the IDE. Enter Netbeans!

Installing NetBeans IDE

You can, of course, use any text editor to write your JavaScript code. However, that's the hard way.

The easy way is to use an IDE or text editor that offers some creature comforts, such as code completion and syntax highlighting. Oracle recommends the NetBeans IDE (and so do I), a free and open-source IDE, because it already features tight integration with Oracle JET, making it easier for you to develop Oracle JET projects.

So, go on to the Netbeans site and download and install the latest version (at the time of writing, it’s 8.2). You’ll only need the HTML5/Javascript bundle, but as it is such a great IDE for development in other technologies, I opted for the “All” package.

Download your preferred bundle – I went for All

NetBeans Installation – In my case, everything was already installed. Just follow the instructions.

Now you’re ready to start developing projects using Oracle JET. Hurray!

An Oracle JET application high-level architecture

So we now have almost everything we need to start developing our applications. Nevertheless, we need to take into account the typical high-level architecture of an Oracle JET application. Oracle has a very good diagram that explains this quite well, along with the Model-View-ViewModel (MVVM) architectural design pattern.

The Oracle JET Architecture

 

The Model-View-ViewModel (MVVM) architectural design pattern

For now, just keep these diagrams present in your memory. As we unfold and go through the next chapters of this blog series, all of this will become clearer and second nature to you while developing your applications.
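
To make the pattern a bit more concrete before we start coding: in a JET application, the ViewModel is typically a plain JavaScript AMD module exposing Knockout observables, and the View is an HTML fragment declaratively bound to it. A minimal sketch (the module and property names are illustrative, not from a real project):

// viewModels/greeting.js – the ViewModel: state and behavior, no DOM manipulation
define(['knockout'], function (ko) {
    function GreetingViewModel() {
        var self = this;
        self.name = ko.observable('Maverick');      // observable state
        self.message = ko.computed(function () {    // derived state, recalculated automatically
            return 'Hello, ' + self.name() + '!';
        });
    }
    return GreetingViewModel;
});

<!-- views/greeting.html – the View: declarative bindings, no logic -->
<input type="text" data-bind="value: name, valueUpdate: 'input'"/>
<h2 data-bind="text: message"></h2>

Whenever the user types in the input, Knockout updates the name observable, and message recomputes and re-renders on its own: that is the Model-View-ViewModel loop in a nutshell.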

What will we be developing?

Throughout my years as a developer, and now in management, there was always the need to build some tools to help in the development process. Right now, Agile development seems to be the norm and, although there are a lot of tools out there to help with this "methodology", there are always some things that don't fit quite well with the way you do your stuff.

So, I decided to tackle that need with the development of an Agile Toolbox, which addresses my particular needs. All of the exercises and blog entries will revolve around building these tools, and I'll cover everything, from the idea, through application design, requirements definition, development, building and testing, all the way up to packaging and deploying the application in a cloud provider (Oracle Application Container Cloud Service?).

As of now, the tools that we’ll develop are:

  • Team Evaluation Tool – A tool that will allow a team member to evaluate the performance of his colleagues during a sprint. The application will get data about projects where the person has been involved, check the sprint team, and allow a quantitative evaluation (1-5), plus a top and bottom performer election.
  • User Story and Backlog creation – A tool to create user stories and manage them in terms of story lifecycle (written, in backlog, in development, in testing, released)
  • Planning Poker – A tool to allow planning poker on the user stories created. It will allow not only voting but also discrepancy discussion, etc…
  • Work Report – A reporting worksheet in which each team member will report the time they spent on a given User Story

We’ll start with the Team Evaluation Tool in the next blog article.

The Server Side Stuff

Referring to the architecture diagram above, for our case, I decided to implement all Server Side logic and data using Oracle Database Cloud Service (DBCS). For me, it’s the easiest way to handle the data and associated business logic, while having the possibility to expose these as REST services, using ORDS (Oracle REST Data Services). Of course, I could set up a local database for this, but it wouldn’t be as much fun and it’s a way for some of you guys to start your journey in the (Oracle) cloud. We’ll set up a trial account and guide you through each step in the next article of this blog series.
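
Just to give an idea of what the client side of this looks like, consuming an ORDS-exposed resource is a plain REST call. A sketch with a hypothetical endpoint (the real URL depends on your DBCS/ORDS configuration):

// Hypothetical ORDS endpoint for our team data
fetch('https://mydbcs.example.com/ords/agiletoolbox/team/members/')
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // ORDS returns the result rows inside an "items" array
    data.items.forEach(function (member) {
      console.log(member.name);
    });
  });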

That doesn’t mean that we’ll be implementing everything we need directly in the browser (although we could). We’ll also use Oracle SQL Developer, including the SQL Data Modeler, to plan and design our application data as well as some business logic.

You can download it from here. There’s no installation process: you just uncompress it to a directory and run it directly from there. You’ll need an up-to-date Java Runtime, though.

Additional Resources

Before we wrap up the first blog article in the series, I would like to recommend a set of additional resources that will prove valuable throughout this adventure.

Wrap up

So we set up most of the tools we’ll need to start our adventure with Oracle JET and we established what we’ll be doing in the next weeks/months. I hope you’ll continue this journey with us and follow our (bi-)weekly articles, building your own tools or the ones we’re proposing.

All code will be available on Github. Everyone is welcome to contribute to it.

So, until next week.

José Rodrigues, a.k.a. Maverick

This post has been cross-posted by Link Consulting. Check out their other articles at http://www.linkconsulting.com/oracle.


Dealing with Dates and Times in Oracle Process Cloud Service

Introduction

Oracle Process Cloud Service (PCS) is great! You can build process-based applications in two shakes of a lamb’s tail, much quicker than with most Low Code platforms in the market. Of course, you can’t, or at least shouldn’t, develop a “normal” master-detail CRUD application with Oracle Process Cloud Service. If you need that kind of application, perhaps Oracle Application Builder Cloud Service will suit you better, but for process-centric applications, PCS is a hard tool to beat.

As you may know, Oracle also has an on-premise product, called Oracle BPM, which shares a similar codebase but has a more advanced and complex UI, and it takes a bit more time to produce a similar application.

Oracle PCS really shines for its simplicity and ease of use, because the UI was streamlined and is much more focused. However, all this optimization and streamlining led to decisions to simplify the UI, and some features present in Oracle BPM are not available in Oracle PCS. Most of them we won’t really need, except in 1% of our application needs, but a few are a more common necessity. The ability to use functions to manage and manipulate dates and times fits this last set.

Oracle BPM allows you to manipulate dates using several options, with Data Associations and Script Tasks perhaps being the most common. In Oracle PCS, Data Associations don’t allow you to manipulate dates or retrieve the current date/time, and Script Tasks are simply not available.

It’s possible to create services that do whatever we need to do with dates and then call them in Oracle PCS, but sometimes we want a more direct approach.

The Use Case

Let’s consider the following case:

We have a simple approval process with 3 steps. Every time there’s a response to a task, we want to record which response it was and the date and time at which it was made. We also want to show this information to the user in the task web form, and we want the whole process to take, at most, a calculated amount of time, automatically finishing after that.

Something a bit like this:

The Use Case process

And the web form:

The Use Case web form

Notice that Oracle PCS automatically creates a data object in the process, of the same type as the start form.

Automatically created data object corresponding to our web form

Also, when you set the task form, all task data associations are done automatically, without the need to do this manually.

Choose the web form for the task to use

Automatic Data Associations – Cool Stuff!

Remember that we want to fill each of the decision fields with both the Outcome of the last response and the date and time of that response.

If we try to do it through the web form functions, we’ll notice that we can only set a field to the current date/time. This would change the value of the “When?” field every time the form was shown to the user. This is not what we want.

Trying to automatically fill this field

… and you can. But that’s not what you want.

Also, you can’t do calculations between fields of type date/time. Let’s say you also want a new field that tells you the number of days between the date of the first decision and the date of the last one. How can you calculate this in the form?

Common sense would lead us to subtract the value of the field that holds the date/time of the first decision from the value of the field that holds the date/time of the last decision. Unfortunately, this doesn’t work, as you can see below (31 May – 18 May = 0).

Our form doesn’t calculate the Nr of days correctly

The Nr of days calculation function
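
Just to make explicit what we expect “Nr of days” to return, here is the intended calculation sketched in plain JavaScript (something you could write in an environment with real scripting, not inside the web form):

// Days between two dates: difference in milliseconds divided by ms per day
function daysBetween(first, last) {
  var MS_PER_DAY = 24 * 60 * 60 * 1000;
  return Math.round((Date.parse(last) - Date.parse(first)) / MS_PER_DAY);
}

daysBetween('2017-05-18', '2017-05-31'); // 13, not the 0 our form shows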

We can also try the data association route but, as we can see from the images below, there are no functions to deal with dates there.

Data Associations Expression Builder – Data Objects

Data Associations Expression Builder – Operators

Finally, as there are no Script Tasks like those available in Oracle BPM, you can’t actually program this in your process.

So, how can we do this?

We cheat!

Let’s try to use a Business Rule for this. By placing a Business Rule after each task, we guarantee an immediate execution right after the decision is taken and we record the rule execution date/time as the decision date.

Business Rules Calendar objects

Business rules get an input, do some magic, and produce an output… pretty much like any other task in our process. What makes them special is the amount of magic one can do inside them.

When we create a decision, we can use a lot of Java functions, including a few for handling dates and times.

So we create a Business Rule, set up the input and output as objects of the web form’s type (this makes the data associations easier) and, once we enter the decision screen, we just create a new General Rule.

Creating your General Rule

Then we create a true condition in the “If” part of the rule, as we want it to always execute, and in the “Then” part we indicate that we want to modify the object selected in the drop-down list (in the example below, the object is called “Other”).

Our Rule Configuration

We click on the pencil icon and choose the field in which we want to do the date processing (by clicking on the magnifying glass). We are then taken to the Condition Browser, from where we can access the Expression Builder.

Condition Browser

Business Rules Expression Builder

As you can see, this expression builder is much more advanced than the one available in the Data Associations dialog, with a few more tabs, including a Functions tab.

Here we can do calculations with our dates and set them as we want.

So we’ll grab the CurrentDate.date.time calendar and apply the JavaDate.to datetime string function to it, like so:

Our Date expression
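
In text form, the expression in the screenshot amounts to something like this (reconstructed from the steps above; the names are the ones shown in the expression builder’s tabs):

JavaDate.to datetime string(CurrentDate.date.time)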

And that’s it. This will calculate the current date and time and put it in the output object, which maps to our web form.

Now we only need to repeat this for every task. A bit like this:

Our Use Case process with Business Rules

I hope that this article can help you do more complex calculations and extend the normal use of Oracle PCS, to cope with more business scenarios.

If you have any questions please place them in the comments box below.

José Rodrigues

P.S.: All of this could also be done with the help of ICS, JCS or ACCS. I’ll write about it in a couple of weeks.

Post header image by Sebastien Wiertz


Implementing Case Management Patterns using Oracle Process Cloud Service (PCS)

Hi everyone and welcome to the second part of our article on implementing Case Management (CM) patterns with Oracle Process Cloud Service (PCS).

In the first part, we learned a bit about the concepts around Case Management and we (barely) started a process in PCS. We’ll now use this process as a container to implement the ad-hoc behavior patterns.

The First Rule of Fight Club

The First Rule of Fight Club © 20th Century Fox

“We do NOT try to implement Case Management on PCS!” – That’s the first rule. What we will do is implement a small subset of behaviors, which will offer some of the advantages of Case Management.

The second rule is that we have two buddies that can help us in this quest: Database Cloud Service (DBCS) and Integration Cloud Service (ICS). Some of the behaviors will need a bit of persistence, which implies placing a lot of case metadata in some control tables, hence the DBCS, with ICS being used to handle all integrations. ICS may not be strictly necessary, but it makes integration easy as pie. Use them extensively!

As we’re trying to “hammer a screw”, things will not be pretty. This is a workaround. Please take it as a way to implement these behaviors.

Regarding the first rule, the set of behaviors which we’re going to implement is the following:

  • Ad-hoc Task and Process Invocation
  • Milestone and Stage Trigger/Set
  • Event Listeners

Let’s Start

So, last time we created a message-based process.

Again, the process must be created with message events, as these will allow it to be called by other cases in an asynchronous way.

The first thing we do is get a case ID. This ID should come from another system (for instance, a database) and will be used to guarantee correlation between all elements of the case. We’ll get into that further ahead.

Then, what we typically do is set up a business rule (decision table) or a database table in which we predefine some configurations, such as the overall process SLA, milestone SLAs, etc. This will allow you to change the way a process/case behaves without actually having to change the process model.

Centralized Control – The Case Main UI

Now, we’re going to need a UI from which we should be able to invoke whatever actions we want, in the order that we want them.

For that, we’ll build a Human Task and implement the respective form using the Web Form technology. In fact, you can build your UI in whatever technology you want and then use the REST APIs to perform the task actions, but in this case we did it with web forms.
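
For reference, performing a task action from a custom UI boils down to one authenticated REST call. A sketch in JavaScript with a hypothetical endpoint and payload (check the PCS REST API documentation for the exact resource paths and body format):

// All values below are illustrative; the real contract is in the PCS REST API docs
var taskId = '200001';
var credentials = btoa('myUser:myPassword');

fetch('https://mypcs.example.com/bpm/api/4.0/tasks/' + taskId, {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Basic ' + credentials
  },
  // The outcome the user chose in our custom UI
  body: JSON.stringify({ action: 'APPROVE' })
}).then(function (response) {
  console.log('Task action submitted: ' + response.status);
});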

Here is an example of a Custom UI, using Oracle’s Alta UI, and delivered by Link Consulting’s BPM Framework (Very cool product! Ask for a demo at linkconsulting.com/oracle :))

No matter what technology you use, you should guarantee that it contains the following information:

  • All business relevant data
  • Access to documents to view, upload or download.
  • A list of all available milestones, their status and the moment at which the last status was set
  • A list of all previous actions in the case, including the famous “Who did what and when?”
  • A selector of all possible actions available at that moment
    • Typically you’ll want to restrict the actions you can do at any moment, based on the actions performed before and the milestones reached, i.e. the case state
    • The list of possible actions is determined through one of two options:
      • either we implement this logic outside and then invoke it as a web service, for instance in the database
      • or we implement it by using Business Rules.
    • In both cases, the set of possible actions must be determined prior to entering the Human Task

Main Case Loop Handler

The idea of having a centralized control is complemented with a loop handler. This will ensure that the process keeps coming back to the Case Main UI with each action taken.

We do this by creating a container, in this case an embedded subprocess, and placing inside it both the activity that determines the allowed actions and the Case Main UI. Then we evaluate the action taken and the case state, determining whether the case is now resolved, canceled or still ongoing, looping back to the embedded subprocess in the latter case.

With each loop iteration, and because this is a case, we must check whether the changes in the case state or milestone state automatically trigger any actions. For instance, resolving a dispute in customer service by offering the customer some money should automatically trigger a bank transfer and a notification by e-mail, both to the customer and to the support supervisor.


So we need to implement this verification in the Main Case Loop Handler. First, we validate whether the last iteration generated any automatic action. We do this by invoking a service that implements this logic. In our example, we did the validation in the database, through a stored procedure that checks the case data and determines whether any automatic action is to be taken. Then we have a branch: if there are automatic actions to be taken, the process just takes them. If not, the process collects the allowed user actions based on the current case state and then spawns a new instance of the Case Main UI human task.

Do notice that the automatic actions may change the case state or milestone state, in which case there may be further automatic actions to be taken, and so on. This may lead to infinite loops, so be careful when setting the conditions that lead to automatic actions.

The Case Action Controller

After the process/case determines that an action is to be taken, either automatic or manual, we then need to take it in such a way that the process/case doesn’t stop there waiting for the action to be completed. In a normal case management scenario, a user can trigger multiple new tasks without needing to finish the previous ones. So we need to allow multiple actions to run in parallel.

Also, we may want to have some restrictions on how a given task is invoked. Let’s say that, for instance, you want to ask your supervisor for his approval on a compensation for a customer. So you trigger the action “Superior Approval”. Now you want to be able to do other actions, for instance, to contact the customer, but you don’t want the system to allow the user to trigger another “Superior Approval” action until the previous one has completed.

This is what your Case Action Controller does. It controls which actions can be taken and under what conditions they can be taken.

Right now you’re probably asking “Huuummmm, but isn’t that what the ‘Get Automatic Actions’ and the ‘Get Allowed Actions’ do?” Well, kind of… We use the Case Action Controller to do a few specific things:

  • Control the cardinality of the execution of actions – This is done with a Service Task
  • Allow parallel tasks to happen – We use an inclusive gateway, and we model all possible actions inside it.
  • Set Case Stage status – This is done with a Service Task

Our Case Action Controller looks like this, in a very simplified way:

Notice that we use Throw Message Events to trigger the actions. This allows the process/case to trigger an action and then immediately proceed to the end of the Case Action Controller, without waiting for that action to be concluded. This is how we implement the semantics of ad-hoc invocation of a process or task, as many times as we want, in the order that we want, and without using a predefined, well-structured process.

Implementing our tasks and processes

To implement the tasks and processes that are to be called inside our Case Action Controller, we use the exact same pattern. A task is implemented as a message-based process that contains only that task.

For processes it’s exactly the same, but instead of a single task inside the process, the whole process logic is in it.

Milestone and Stage Trigger/Set

As for updating the Milestone and Stage states, we do it through web service invocations that directly update the underlying database. We build a table for Milestones and one for Stages, which include who did what and when. Every time we need to update a Milestone or Stage state, we just send it to the DB through REST web services (DBCS features ORDS). This is done by placing service tasks where we want the updates to happen.
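
The call itself is nothing special: a plain REST POST against the control table exposed through ORDS. In the process this is done by a service task; the sketch below shows the equivalent call in JavaScript, with a hypothetical endpoint and hypothetical columns:

// Hypothetical ORDS resource over our MILESTONES control table
var milestoneUpdate = {
  case_id: 'CASE-0042',            // the correlation id shared by the whole case
  milestone: 'CUSTOMER_CONTACTED',
  status: 'REACHED',
  updated_by: 'jrodrigues'         // the famous "who did what and when"
};

fetch('https://mydbcs.example.com/ords/cases/milestones/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(milestoneUpdate)
}).then(function (response) {
  console.log('Milestone recorded: ' + response.status);
});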

Updating Milestones and Stages is the way we can understand how a case is progressing and what actions can be taken at any moment, manual or automatic.

Event Listeners

There may be times when you want to listen to events and trigger a case activity based on them. In CMMN this is implemented with Event Listeners, which are represented like this.

The first one is a human event listener. In this case, it triggers a task called “Request Help from Colleague”. In PCS we implement this behavior as mentioned before: by invoking a message-based process that only has this task inside. It’s like invoking an ad-hoc task.

The second one is a timer event listener, and it’s a bit trickier to implement because PCS doesn’t offer many tools to calculate dates and times. Sometimes you’ll want to do date/time calculations, and PCS isn’t very cooperative on this subject. Compared with the on-premise product (Oracle BPM), it’s very limited. There are workarounds (think business rules), but we should try to avoid them.

What we typically do is create a boundary timer event, attach it to either the Main Case embedded subprocess or a given task, and set the timer using the interval condition.

In the case shown here, the timer is attached to the embedded subprocess in a non-interrupting way. When it triggers, it just sets a given milestone. As the event is non-interrupting, the execution stays where it is.

Putting it all together

Adding all of this to a PCS application will give your processes case-like behavior. We built an example in a small project to handle baggage damage claims for airline and airport handlers. The corresponding CMMN model was something like this:

Complaint CMMN model

And the corresponding PCS main process was this one.

PCS implementation of the case

Conclusions

So we hope this article helps you build less-structured processes using Oracle Process Cloud Service, and that the limitation of not having case management (at least at the time of publication) can be overcome, to deliver more value to your organizations.

As always, feel free to reach out with your comments and questions. We’ll try to answer them as quickly as possible.

More articles are in the pipeline and you’ll definitely hear from us in the next few weeks.

Until then

Maverick (José Rodrigues)

This is a cross post with Link Consulting. For more Oracle Middleware related posts, please visit http://www.redmavericks.com

Post header image by Alden Jewell


ADF Namings Conventions – Part III

Hi all,

In my previous post, ADF Namings Conventions – Part II, I focused on:

  • Model & View Controller Project

Today I will focus on:

  • Task Flows
  • Templates
  • JSF, JSFF, JSPX
  • Java Events
  • JAR, WAR, EAR files


Task Flows

In this section we present the naming conventions related to task flows.

Task Flow Namings

The name for Task Flows should be defined as follows:

<TASK_FLOW_CAMEL_CASE> + TF

Example: myTaskFlowTF

We may have different task flow types, each one built to address a single purpose. For these cases we add a constant to the name in order to easily recognize their purpose/target. The next table presents the types and target names:

Task Flow Type                              Target Name
Task Flow to filter data                    <TASK_FLOW_CAMEL_CASE> + Filter + TF
Task Flow to perform actions on data        <TASK_FLOW_CAMEL_CASE> + Action + TF
Task Flow to list data                      <TASK_FLOW_CAMEL_CASE> + List + TF
Task Flow to detail data                    <TASK_FLOW_CAMEL_CASE> + Detail + TF
Task Flow to combine multiple task flows    <TASK_FLOW_CAMEL_CASE> + Container + TF

Task Flow Managed Beans

Task flows’ managed beans are responsible for managing data. Multiple managed beans can be created for a single task flow; nevertheless, you usually have one main managed bean. In these cases, the managed bean should have a name similar to the task flow’s.

As you may have already noticed, you are free to give any name you want to the managed bean you register in the task flow’s “Managed Beans” tab. In this scenario, we recommend giving the managed bean the same name as its Java class.

By following these two pieces of advice you will be able to find what you are looking for more easily, without losing time and effort understanding the mappings made by each developer. This is very important during development and maintenance phases. With that in mind, take a look at the following example:

 Task Flow Name:  myTaskFlowTF
 Java Class Name:  MyTaskFlow
 Managed Bean Name:  MyTaskFlow

Task Flow Input & Return Parameters

Task flow’s input parameters should be prefixed with “in”:

in + <CAMEL_CASE>

Example: inMyParameter

Task flow’s return parameters should be prefixed with “rtn”:

rtn + <CAMEL_CASE>

Example: rtnMyParameter


Templates

ADF lets us create different types of templates to abstract common functionality used in our projects, for example Page Templates and Task Flow Templates. These templates should be created using the following pattern:

<CAMEL_CASE> + Template

Example: myTaskFlowTemplate, myPageTemplate


JSF, JSFF, JSPX Namings

Pages should have self-explanatory names so they can be easily identified and matched to their purpose. The default pattern we followed was:

<CAMEL_CASE>


In Task Flows you may have multiple pages depending on the route taken, but the main page (if there is one) should have the same name as the task flow, without the ‘TF’ suffix.

Example:

 Task Flow Name: myFinancialTasksListTF
 Page Name: myFinancialTasksList


Java Events Namings

For our control events we defined suffixes. These help you understand the type of event you are handling without having to go to the page itself, unless you need to understand it in more detail. The suffixes for each type of event are listed in the following table:

Event Type                  Suffix
Action Event                Action
Value Change Listener       VCL
Selection Event Listener    SEL
Client Event Listener       CEL
Return Event Listener       REL
Action Event Listener       AEL
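
For instance, following these suffixes, a button’s action method would be called submitOrderAction, a value change listener salaryVCL, and a table selection listener employeesTableSEL (hypothetical names, just to illustrate the pattern).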


JAR, WAR, EAR Files

Building and deploying projects leads us to create deployment profiles. These deployment profiles should follow the same naming approach. The next table provides the naming for each type of deployment profile.

File Type          Naming
JAR                jar + <PROJECT_NAME> + <MODULE_NAME>
ADF JAR Library    adflib + <PROJECT_NAME> + <MODULE_NAME>
Shared Library     sharedlib + <PROJECT_NAME> + <MODULE_NAME>
WAR                war + <PROJECT_NAME> + <MODULE_NAME>
EAR                ear + <PROJECT_NAME> + <MODULE_NAME>
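
Example (illustrative names): adflibMyProjectMyModule for the ADF JAR Library of module MyModule in project MyProject.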


You can find the PDF for this series of posts right here.

I hope this series of posts helps you in your projects 🙂


Cheers,

Pedro Gabriel

@PedrohnGabriel

This is a cross post with LinkConsulting. For more Oracle Middleware related posts, please visit http://www.linkconsulting.com/oracle

Post Header photo by Geoffrey Fairchild


Case Management Patterns using Oracle Process Cloud Service

Hi and welcome to a new article on Oracle Process Cloud Service (PCS).

This time, we’re going to address some use patterns that may seem difficult to implement using PCS, tackling the need for unstructured parts of the process, which is to say, parts of the process that can’t be modelled in advance because, well… we don’t know how they’ll turn out.

Take, for instance, a complaint to your customer service department. You’ll never know, in advance, what kind of complaint it will be, whether you’ll need one, two, five or fifty interactions with the customer, whether you’ll need approval from department A or B to try to compensate the customer, or even whether any legal action will be needed against a supplier of yours, after they failed to compensate the complainer in due time.

So, you see, there are some elements that may render part of your process impossible to predict, at least in terms of activity sequence. You know that these may take place at some point in time in the process, but you can’t plan ahead and model the exact activity sequence (“A” will happen after “B”).

To handle this type of less structured process (I don’t like the term “unstructured”, because they do have a structure), there’s a discipline called “Case Management” (CM). CM handles the choreography of this type of process, called a Case, guaranteeing that the activities that are part of the process are executed at the right time and when conditions permit.

For the remainder of the article, please consider the terms “Case” and “Process” as interchangeable, the term “Less structured Process” as equivalent to Case, and the term “Structured Process” as equivalent to a predefined flow-controlled Process (BPMN process or equivalent).

The main idea behind CM is that, instead of the process model determining the next action to be taken, it’s the worker who actually decides the next best action to perform in each situation, using his experience.

This is not to say that the worker can just do any activity at any time. Typically, there are specific business rules that enable or disable a given activity based on the current data and events associated with a specific process. However, these rules can be as tight or as flexible as we may need.

Case Management Patterns

The idea of this article is to give you the tools you need to implement Case Management patterns using Oracle PCS. The point is not to implement full Case Management in PCS, but just some case management behavioral patterns. Parts of what constitutes “Case Management” will not be addressed in any way, but things like ad-hoc process/task calls will, and this is sufficient for most needs.

Short Summary of Case Management Modelling Notation (CMMN)

Just as we have BPMN as a standard way to model structured business processes, OMG (Object Management Group) also defined a standard to model Cases, called Case Management Modelling Notation, or CMMN.

In a short summary, the most meaningful objects are:

  • Case – It’s our main object; all other objects live inside this one.
  • Stages – You can see them as groups of activities that represent a business concept.
  • Tasks – Represent work to be done. They can be Human Tasks, Process Tasks (calling a structured process) or Case Tasks (calling other cases). These can be mandatory or optional.
  • Events – Represent something that happens that is significant for the case: e.g. a new file is added to the case, or someone who applied for a loan at the bank suddenly dies.
  • Sentries – Criteria that need to be verified in order to instantiate or complete a task.

As with BPMN, you combine these objects to model the case behavior graphically.

Complaint CMMN model

For an introduction to CMMN, I suggest the following references:

  • Case Management Modeling and Notation – Knut Hinkelmann

http://knut.hinkelmann.ch/lectures/bpm2013/12_CaseModeling.pdf

  • Introduction to the Case Management Model and Notation (CMMN) – Mike Marin

https://arxiv.org/pdf/1608.05011.pdf

  • There’s an excellent book on the subject. You can find it in the link below.
    • Oracle Case Management Solutions – Léon Smiers, Manas Deb, Joop Koster, Prasen Palvankar

https://www.crcpress.com/Oracle-Case-Management-Solutions/Smiers-Deb-Koster-Palvankar/p/book/9781482223828

First Steps

The first rule is that we need to be able to start/invoke cases whenever we want, from whichever channel we want.

As we’re going to use PCS for this, we first need to create a new Application.

Create New Application – Step II

Let’s call it “Complaint Case”.

Create New Application – Step II

Now, we create a new process in it. The process needs to start and end with a message event, as this is what will allow us to reuse it whenever we want. Please do not use None or Form events to start it.

Create New Application – Step III

Create New Application – Step IV

In the next article

In the next article, we’ll address specific implementation patterns in Process Cloud Service and the setup of the data structure that facilitates Case behavior.

Until then, have a great week!

Maverick (José Rodrigues)

This is a cross post with LinkConsulting. For more Oracle Middleware related posts, please visit http://www.linkconsulting.com/oracle

Post Header photo by doug rattray