How to find out the RTP values for a user-initiated calculation


Have you ever wondered how to create a document that details which Calculation Manager rules were executed by a user and which RTP values they used?

If you never did, a friend of mine did. He asked for help with this in a private group, and I believe this was before the 19.06 Cloud release. The idea was to write a DATAEXPORT script with the RTPs to extract a single line of data containing the RTP values.

I did think about writing this up. However, I was working with the competition at the time and couldn't write it then.

Fast forward to yesterday (10/09), when I was trying to write this blog post and looking for a better way to do it. I looked at the REST API functions to see whether I could use a search query on jobNames with the "Job Status" REST API call.

However, the only query option there is jobTypes. I was disappointed, but I was not going to give up. I decided to talk to Sree Menon, and after a brief conversation, he suggested looking at the Export Job Console REST API function.

Export Job Console was introduced in the 19.05 release. I had to look at the ever-helpful, all-answering John Goodwin's blog to find the endpoint for this API call.

Armed with all that information, this blog post was born.

How it works

Every job that gets executed in Oracle EPM Cloud receives a job id. You need the job id to get the status/information of each job, and that is why I was trying to query jobs by jobName: the id is not visible in the UI.
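For reference, looking up a job by its id is a simple GET against the Jobs endpoint (a sketch; the pod hostname, appName, and jobId are placeholders, and v3 is assumed as the API version). The response includes a descriptiveStatus field, which the script later in this post polls until the export completes:

```
GET https://<pod>.oraclecloud.com/HyperionPlanning/rest/v3/applications/{appName}/jobs/{jobId}
```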

When you export the job console entries, you get the following details:

  • Job Id
  • Parent Id
  • Job Name
  • Job Type
  • Run Status
  • Start Time
  • End Time
  • User Name
  • Parameters
  • Attribute1
  • Attribute2
  • Details
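To illustrate (the values here are made up, the columns are abbreviated, and the exact formatting of the Parameters column may differ), a rule launched with runtime prompts might show up in the exported CSV like this:

```
Job Id,Job Name,Job Type,Run Status,User Name,Parameters
1001,AllocateExpenses,Rules,Completed,jake,{Period:Q1,Entity:USA}
```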

We now have the job id, but what about the parameters (RTPs) and their values? They are not in the export. This is where the Export Job Console API has an issue. If you add a parameter to export the job console details with a jobName, as shown below.

The parameter details are not exported. The dev team acknowledged this as an issue, and it will be fixed in the 19.12 release.

Some people might prefer it the way it is; you are asking for a specific job name, so why would you need the child jobs? I guess that could still be satisfied by adding another filter on jobTypes to ignore the child jobs. It is a win-win 🙂

I went ahead and created a Groovy script to help the rest of the world until the fix arrives.

Groovy in action

I made use of PbcsRestClient.groovy in my Groovy script.

I made some changes (not a lot) to the RestClient Groovy script to fix an issue and to make it groovier. If you have used this version, you might have noticed it complaining about a variable called context. Here is how you fix it.

before change

Move line 66 to line 64 (declare context before it gets used).

after change

Another update I made was to the displayServerResponse function, to close the InputStream and FileOutputStream.

before change
after change

The next change was to the basicAuth variable in the openConnection function, to make it groovier. You don't have to do this at all.

before change
after change

Finally, I added some import statements and some @Grab statements to support the functions I added.

I only updated the functions that I used :). Another thing I noticed was that the downloadFile function in PbcsRestClient works only with the 11.1.2.3.600 interop version; it won't work with the v3 or v1 versions. It took a long time to figure that out 🙁
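In other words, the interop URL handed to the client has to carry the older LCM version for the download to work (a sketch; the pod hostname and snapshot name are placeholders):

```
GET https://<pod>.oraclecloud.com/interop/rest/11.1.2.3.600/applicationsnapshots/{snapshotName}/contents
```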

Once I added the exportJobConsole function to PbcsRestClient and ran it, I could download the export file to my local PC. After the file is downloaded, it gets unzipped and loaded into an in-memory Apache Derby database. Once it is in the database, I can run a query against it and create a report.

When you execute the export job console, you can specify appName, jobTypes, fileName, and nDays. I made the function future-ready by including jobNames; however, you cannot specify jobNames right now.

exportJobConsole
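With jobTypes, fileName, and nDays set, the JSON payload the exportJobConsole function builds looks like this (the values are illustrative):

```
{
  "jobType": "JOBCONSOLE_EXPORT",
  "parameters": {
    "jobTypes": "Rules",
    "fileName": "JobConsoleExport.csv",
    "ndays": "2"
  }
}
```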

Here is the full REST API Groovy script to create a report on who ran a rule, along with its parameters and values.

package com.oracle.ceal
@Grab('org.apache.derby:derby:10.9.1.0')
@Grab('commons-io:commons-io:2.6')
@GrabConfig(systemClassLoader=true)

import javax.net.ssl.HostnameVerifier
import javax.net.ssl.HttpsURLConnection
import javax.net.ssl.SSLContext
import javax.net.ssl.SSLSession
import javax.net.ssl.TrustManager
import javax.net.ssl.X509TrustManager

import java.net.HttpURLConnection

import java.util.regex.Pattern
import java.util.regex.Matcher
import java.util.zip.ZipEntry
import java.util.zip.ZipFile

import static java.sql.ResultSet.*

import java.nio.file.Files
import java.nio.file.Paths

import org.apache.commons.io.input.BOMInputStream

import groovy.json.JsonSlurper
import groovy.sql.Sql

class PbcsRestClient {
    private HttpURLConnection connection
    private def planningUrl
    private def interopUrl
    private def proxyHost
    private def proxyPort
    private def user
    private def pwd
    private def domain
    private def ignoreSSLCertsErrors

    public PbcsRestClient(planningServerUrl, interopServerUrl,httpProxyHost, httpProxyPort, identityDomain,username, password, ignoreSSLCertificationPathErrors) {
        planningUrl=planningServerUrl
        interopUrl=interopServerUrl
        proxyHost=httpProxyHost
        proxyPort=httpProxyPort
        domain=identityDomain
        user=username
        pwd=password
        ignoreSSLCertsErrors=ignoreSSLCertificationPathErrors

    }

    def setProxyParams() {
        Properties systemProperties = System.getProperties()
        systemProperties.setProperty("http.proxyHost",proxyHost)
        systemProperties.setProperty("http.proxyPort",proxyPort)
        systemProperties.setProperty("https.proxyHost",proxyHost)
        systemProperties.setProperty("https.proxyPort",proxyPort)

    }

    def setSSLParams() {
        if (ignoreSSLCertsErrors !=null && ignoreSSLCertsErrors.toUpperCase()=="TRUE") {
            println "Ignoring SSL certification path errors"
            // Disable SSL cert validation

            def hostnameVerifier = [
                verify: { hostname, session -> true }
            ]
            def trustManager = [
                    checkServerTrusted: { chain, authType -> },
                    checkClientTrusted: { chain, authType -> },
                    getAcceptedIssuers: { null }
            ]

            HttpsURLConnection.setDefaultHostnameVerifier(hostnameVerifier as HostnameVerifier)
            SSLContext context = SSLContext.getInstance("SSL")
            context.init(null, [trustManager as X509TrustManager] as TrustManager[], null)
            HttpsURLConnection.setDefaultSSLSocketFactory(context.getSocketFactory())


        }
    }

    def openConnection(restUrl,method,localFileNameWithPathForStorage) {
        println "Opening connection to $restUrl with method:$method"
        int statusCode

        setProxyParams()
        setSSLParams()

        URL newUrl
        newUrl=new URL(restUrl)

        connection = (HttpURLConnection) newUrl.openConnection()

        connection.setDoOutput(true)
        connection.setDoInput(true)
        connection.setUseCaches(false)
        if (method=="")
            connection.setRequestMethod("GET")
        else
            connection.setRequestMethod(method)

        connection.setRequestProperty("Content-Type","application/x-www-form-urlencoded")

        String basicAuth = "Basic ${javax.xml.bind.DatatypeConverter.printBase64Binary("$domain.$user:$pwd".bytes)}"
        connection.setRequestProperty("Authorization", basicAuth)

        String response=""
        try {
            statusCode = connection.responseCode
            println "Connection status code: $statusCode "
            if (statusCode==401) {
                println "Not authorized"
            }
            if (statusCode==200) {
                println "Authentication succeeded"
                println "Server response:"
                println "-----"
                response=displayServerResponse(connection,localFileNameWithPathForStorage)
                println "-----"
            }
            if (statusCode==400) {
                println "Bad request"
                println "Server response:"
                println "-----"
                response=displayServerResponse(connection,"")
                println "-----"
            }
        } catch (Exception e) {
            println "Error connecting to the URL"
            println e.getMessage()
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }

        return response
    }

    def displayServerResponse(connection,localFileNameWithPathForStorage) {
        InputStream is;
        if (connection.getResponseCode()==200) {
            is=connection.getInputStream();
        } else {
            is=connection.getErrorStream();
        }
        println "Response Content-Type:"+connection.getContentType()
        if (connection.getContentType().contains("application/json")) {
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = br.readLine()) != null) {
                sb.append(line+"\n");
            }
            br.close();
            println sb
            return sb.toString()
        } else {
        if (connection.getResponseCode()==200) {
        //storing content
            final int BUFFER_SIZE = 5 * 1024 * 1024;
            def fileExt = connection.getHeaderField("fileExtension");
            println "Downloading file with fileExtension header:"+fileExt
            if (fileExt!=null) {
                def saveFilePath = localFileNameWithPathForStorage;
                File f = new File(saveFilePath);
                is = connection.getInputStream();
                FileOutputStream outputStream = new FileOutputStream(f);
                int bytesRead = -1;
                byte[] buffer = new byte[BUFFER_SIZE];
                while ((bytesRead = is.read(buffer)) != -1) {
                    outputStream.write(buffer, 0, bytesRead);
                }
                println "Downloaded file to $localFileNameWithPathForStorage"
                outputStream.close()
                return localFileNameWithPathForStorage
            } else {
                println "Could not find fileExtension header"

            }
        }
        }
        is.close()
        return ""
    }

    def listFiles() {
        println "**Listing files**"
        def restUrl=interopUrl+"/applicationsnapshots"
        def response

        response=openConnection(restUrl,"GET","")
        println "****"
    }

    def deleteFile(serverFileName) {
        println "**deleting file**"
        def restUrl=interopUrl+"/applicationsnapshots/"+serverFileName
        def response

        response=openConnection(restUrl,"DELETE","")
        println "****"
    }

    def getJobStatus(appName,jobId) {
        println "**get Job status**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs/" + jobId

        def response
        response=openConnection(restUrl,"GET","")
        println "****"
        return response
    }

    def exportData(appName,jobName, exportServerFileName) {
        println "**Exporting data**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs?jobName=" + jobName + "&jobType=EXPORT_DATA"
        if (exportServerFileName!="") {
            def exportFileJSON="{exportFileName:$exportServerFileName}"
            restUrl=restUrl+"&parameters=" + exportFileJSON
        }

        def response
        response=openConnection(restUrl,"POST","")
        println "****"
        return response
    }

    def getJobIdFromJSONResponse(response) {
        def jobId=""
        try {
            Pattern regex = Pattern.compile("\"jobId\":\\d+");
            Matcher matcher = regex.matcher(response);
            while (matcher.find()) {
                jobId = matcher.group(0).replace("\"jobId\":","");
            }

        } catch (Exception e) {
            println "No jobId found in server response"
        }
        return jobId
    }

    def downloadFile(serverFileName,localFolderForStorage) {
        println "**Downloading file**"
        def restUrl=interopUrl+"/applicationsnapshots/"+serverFileName+ "/contents"
        def response
        response=openConnection(restUrl,"GET",localFolderForStorage+"/"+serverFileName)
        println "****"
    }

    def exportMetaData(appName,jobName, exportServerFileName) {
        println "**Exporting metadata**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs?jobName=" + jobName + "&jobType=EXPORT_METADATA"
        if (exportServerFileName!="") {
            def exportFileJSON="{exportZipFileName:$exportServerFileName}"
            restUrl=restUrl+"&parameters=" + exportFileJSON
        }

        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }

    def executeLCMExport(snapshotName) {
        println "**Exporting snapshot**"
        def typeExport="{type:export}"
        def restUrl=interopUrl+"/applicationsnapshots/"+snapshotName+ "/migration?q="+typeExport
        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }

    def executeLCMImport(snapshotName) {
        println "**Importing snapshot**"
        def typeImport="{type:import}"
        def restUrl=interopUrl+"/applicationsnapshots/"+snapshotName+ "/migration?q="+typeImport
        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }

    def runBusinessRule(appName,jobName, JSONRuntimePrompt) {
        println "**Running business rule**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs?jobName=" + jobName + "&jobType=RULES"
        if (JSONRuntimePrompt!="") {
            // Example for JSONRuntimePrompt {Period:Q1,Entity:USA}
            restUrl=restUrl+"&parameters=" + JSONRuntimePrompt
        }

        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }

    def runRuleSet(appName,jobName) {
        println "**Running rule set**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs?jobName=" + jobName + "&jobType=RULESET"
        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }


    def cubeRefresh(appName,jobName) {
        println "**Refreshing cube**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs?jobName=" + jobName + "&jobType=CUBE_REFRESH"

        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }

    def runPlanTypeMap(appName,jobName, clearData) {
        println "**Running map (job of type plan_type_map)**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs?jobName=" + jobName + "&jobType=PLAN_TYPE_MAP"
        if (clearData!=null && clearData.toUpperCase()=="FALSE") {
            restUrl=restUrl+"&parameters={clearData:false}"
        } else {
            println "Clear data is set to true (default)"
        }
        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }

    def importData(appName,jobName, importFileName) {
        println "**Importing data**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs?jobName=" + jobName + "&jobType=IMPORT_DATA"
        if (importFileName!="") {
            def exportFileJSON="{importFileName:$importFileName}"
            restUrl=restUrl+"&parameters=" + exportFileJSON
        }

        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }

    def importMetaData(appName,jobName, importZipFileName) {
        println "**Importing metadata**"
        def restUrl=planningUrl+"/applications/"+appName+"/jobs?jobName=" + jobName + "&jobType=IMPORT_METADATA"
        if (importZipFileName!="") {
            def exportFileJSON="{importZipFileName:$importZipFileName}"
            restUrl=restUrl+"&parameters=" + exportFileJSON
        }

        def response
        response=openConnection(restUrl,"POST","")
        println "****"
    }

    def uploadFile(localPath,fileName) {
        println "**Uploading file**"
        def restUrl=interopUrl+"/applicationsnapshots/"+fileName

        final int DEFAULT_CHUNK_SIZE = 50 * 1024 * 1024;
        int packetNo = 1;
        boolean status = true;
        byte[] lastChunk = null;
        File f = new File(localPath+"/"+fileName);
        InputStream fis = null;
        long totalFileSize = f.length();
        boolean isLast = false;
        Boolean isFirst = true;
        boolean firstRetry = true;
        int lastPacketNo = (int) (Math.ceil(totalFileSize/ (double) DEFAULT_CHUNK_SIZE));
        long totalbytesRead = 0;
        try {
            fis = new BufferedInputStream(new FileInputStream(localPath+"/"+fileName));
            while (totalbytesRead < totalFileSize && status) {
                int nextChunkSize = (int) Math.min(DEFAULT_CHUNK_SIZE, totalFileSize - totalbytesRead);
                if (lastChunk == null) {
                    lastChunk = new byte[nextChunkSize];
                    int bytesRead = fis.read(lastChunk);
                    totalbytesRead += bytesRead;
                    if (packetNo == lastPacketNo) {
                        isLast = true;
                    }
                    status = sendRequestToRestForUpload(restUrl,isFirst, isLast,lastChunk);
                    isFirst=false;
                    if (status) {
                        println "\r" + ((100 * totalbytesRead)/ totalFileSize) + "% completed";
                    } else {
                        break;
                    }
                    packetNo = packetNo + 1;
                    lastChunk = null;
            }
            }
        } catch (Exception e) {
            println "Exception occurred while uploading file";
            println e.getMessage()
        } finally {
            // close the input stream (this block was left empty in the original)
            if (null != fis) {
                fis.close()
            }
        }
        println "****"
    }

    def sendRequestToRestForUpload(restUrl,isFirst, isLast,lastChunk) {

        def url=restUrl+"/contents?q={isLast:$isLast,chunkSize:"+lastChunk.length+",isFirst:$isFirst}"
        println "Opening connection for upload to $url"
        int statusCode

        setProxyParams()
        setSSLParams()

        URL newUrl
        newUrl=new URL(url)

        connection = (HttpURLConnection) newUrl.openConnection()

        connection.setDoOutput(true)
        connection.setDoInput(true)
        connection.setUseCaches(false)
        connection.setRequestMethod("POST")

        connection.setRequestProperty("Content-Type","application/octet-stream")

        String userCredentials = domain +"."+user + ":" + pwd
        String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userCredentials.getBytes())
        connection.setRequestProperty("Authorization", basicAuth)
        DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
        wr.write(lastChunk);
        wr.flush();

        boolean status = false
        int execStatus
        try {
            execStatus = connection.getResponseCode();
            InputStream is = connection.getInputStream();
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = br.readLine()) != null) {
                sb.append(line+"\n");
            }
            br.close();
            String stat = sb.toString();
            if (null == stat || stat.isEmpty()) {
                return status;
            } else {
                if (200 == execStatus) {
                    println stat
                    // mark the chunk upload as successful so the caller keeps going
                    status = true
                }
            }

        } catch (Exception e) {
            println "Exception occurred while uploading file";
            println e.getMessage()
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }
        return status
    }

    // Helper functions

     def unZip(fileName, destinationFolder) {
        // code from http://www.oracle.com/technetwork/articles/java/compress-1565076.html
         println ("**Unzipping "+fileName+"**")
         def fileList=[]
        int BUFFER = 2048;
        try {
            BufferedOutputStream dest = null;
            BufferedInputStream is = null;
            ZipEntry entry;
            ZipFile zipfile = new ZipFile(fileName);
            Enumeration e = zipfile.entries();
            while(e.hasMoreElements()) {
               entry = (ZipEntry) e.nextElement();
               //println("Extracting: " +entry);
               is = new BufferedInputStream(zipfile.getInputStream(entry));
               int count;
               byte[] data;
               data = new byte[BUFFER];
               FileOutputStream fos = new FileOutputStream(destinationFolder+"/"+entry.getName());
               fileList.push(entry.getName())
               dest = new BufferedOutputStream(fos, BUFFER);
               while ((count = is.read(data, 0, BUFFER)) != -1) {
                  dest.write(data, 0, count);
               }
               dest.flush();
               dest.close();
               is.close();
            }
         } catch (FileNotFoundException fnfe) {
             println "Make sure there is no folder in the zip. Zip not processed"
             //fnfe.printStackTrace();
         } catch(Exception e) {
             println "An error occurred while unzipping."
             println e.getMessage()

         }
         println "****"
         return fileList
    }

     def findNbOfColsInCSV(filePath, fileName, delimiter) {
        File csvFile=new File (filePath+"/"+fileName);
        Scanner scanner = new Scanner(csvFile);
        scanner.useDelimiter(delimiter);


        def nbCols
        nbCols=0
        if (scanner.hasNextLine()) {
            String[] vals = scanner.nextLine().split(delimiter);
            nbCols=vals.size()
        }
        scanner.close();

        return nbCols
        }


     def getHeadersInCSVAsList(filePath, fileName, delimiter) {
         String[] headers =[]

         BufferedReader br = new BufferedReader(new FileReader(filePath+"/"+fileName));
         String firstLine = br.readLine();
         br.close();
         println "First line is : " + firstLine
         println "Removing all non ascii chars from line"
         firstLine = firstLine.replaceAll("[^ -~]", "");
         firstLine = firstLine.replaceAll(" ", "");
        // firstLine = firstLine.replaceAll("\"", "");
         headers = firstLine.split(delimiter);

         def headersList = headers as List
         headersList = headersList.collect { it.trim() }

         return headersList
         }

    def exportJobConsole(appName, jobName, jobType, fileName, nDays) {
        println "**Exporting Job Console**"
        def restUrl="$planningUrl/applications/$appName/jobs/"
        def exportConsolePayload = new StringBuilder()
        if (jobName || fileName || nDays || jobType){
            exportConsolePayload << """{"jobType":"JOBCONSOLE_EXPORT","""
            exportConsolePayload << """"parameters":{"""

            def paramList=[]
            if (jobName){
                def jobNames=[]
                jobName.split(",").each{
                    jobNames << it.trim()
                }
                paramList << """"jobNames":"${jobNames.join(",")}" """
            }
            if (jobType){
                def jobTypes=[]
                jobType.split(",").each{
                    jobTypes << it.trim()
                }
                paramList << """"jobTypes":"${jobTypes.join(",")}" """
            }
            if (fileName) paramList << """"fileName":"$fileName" """
            if (nDays) paramList << """"ndays":"$nDays" """

            exportConsolePayload <<  """${paramList.join(",")}}}"""

       } else{
           exportConsolePayload << """{"jobType":"JOBCONSOLE_EXPORT"}"""
       }
       def response
       println "Payload is $exportConsolePayload"
       response=openConnectionWithPayload(restUrl,"POST",exportConsolePayload as String,"application/json")
       println "****"
       return response
    }

    def openConnectionWithPayload(restUrl,method,payload, contentType) {
        println "Opening connection to $restUrl with method:$method"
        int statusCode

        setProxyParams()
        setSSLParams()

        URL newUrl
        newUrl=new URL(restUrl)

        connection = (HttpURLConnection) newUrl.openConnection()

        connection.setDoOutput(true)
        connection.setDoInput(true)
        connection.setUseCaches(false)
        if (method)
            connection.setRequestMethod(method)
        else
            connection.setRequestMethod("GET")

        connection.setRequestProperty("Content-Type",contentType)

        String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary("$domain.$user:$pwd".bytes)
        connection.setRequestProperty("Authorization", basicAuth)

        if (payload){
            connection.getOutputStream().write(payload.getBytes("UTF-8"))
           }
        String response=""
        try {
            statusCode = connection.responseCode
            println "Connection status code: $statusCode "
            if (statusCode==401) {
                println "Not authorized"
            }
            if (statusCode==200) {
                println "Authentication succeeded"
                println "Server response:"
                println "-----"
                response=displayServerResponse(connection,"")
                println "-----"
            }
            if (statusCode==400) {
                println "Bad request"
                println "Server response:"
                println "-----"
                response=displayServerResponse(connection,"")
                println "-----"
            }
            if (statusCode==201){
                println "Authentication succeeded"
                println "Server response:"
                println "-----"
                InputStream is=connection.getInputStream()
                BufferedReader br = new BufferedReader(new InputStreamReader(is))
                def object = new JsonSlurper().parseText(br.text)
                br.close()
                println "Processing job"
                println "Checking status"
                response=object.jobId
                println "-----"
            }
        } catch (Exception e) {
            println "Error connecting to the URL"
            println e.getMessage()
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }

        return response
    }
}

class WaitForCode {

static retry( sleepTime, nbOfRetries, Closure logicToRun){
  Throwable catched = null
  for(int i=0; i<nbOfRetries; i++){
      try {
          return logicToRun.call()
      } catch(Throwable t){
          catched = t
          println ("Retrying...")
          Thread.sleep(sleepTime)
      }
  }
  println ("Retry count limit exceeded. Stopping check.")
  throw catched
}
}

// Function for reading a file from Zip file
def readZipFile(zipFileName,fullPathtoFileInZip){
    entry = new ZipFile(zipFileName)
    fileContents = entry.getInputStream(entry.getEntry(fullPathtoFileInZip)).text
    entry.close()
    return fileContents
}
// Function for extracting a file from a Zip file
def extractZipFile(zipFileName,fullPathtoFileInZip,localFolderName){
    println ("**Unzipping $fullPathtoFileInZip from $zipFileName to $localFolderName**")
    entry = new ZipFile(zipFileName)
    def fileToExtract = entry.getEntry(fullPathtoFileInZip)
    def fOut = new File("$localFolderName\\${fileToExtract.name}")
    def fos = new FileOutputStream(fOut)

    int BUFFER = 2048
    is = new BufferedInputStream(entry.getInputStream(fileToExtract))
    int count
    byte[] data
    data = new byte[BUFFER]
    dest = new BufferedOutputStream(fos, BUFFER)
    while ((count = is.read(data, 0, BUFFER)) != -1) {
      dest.write(data, 0, count)
    }
    dest.flush()
    dest.close()
    is.close()
    fos.close()
    entry.close()

    println "File extracted"
    println "-----"
    println "****"
}

// derby bulk insert file
def derbyBulkInsert (Sql sql, String tableName, String fullFileName, String columnDelimiter) {
    println """**Loading Derby database table $tableName with data from $fullFileName and $columnDelimiter as delimiter **"""
    println "-----"
	// runs in replace mode since the last argument is a non-zero
    sql.execute("CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(null,'" + tableName.toUpperCase() + "','" + fullFileName + "','" + columnDelimiter + "',null,null,1)");
    println "Load completed"
    println "-----"
    println "****"
}

// create report from JobConsole table
def createJobConsoleReport(Sql sql, colNames, filter, reportFile, delimiter){
    println "**Create Job Console Report**"
    def reqCols=[]
    colNames.split(",").each{
     reqCols << """"$it" """ as String
    }
    writeReportFile = new File(reportFile)
    if (filter) {
    // Print column information into the file
    def rows = sql.rows("""SELECT '' as "Parent Job", ${reqCols.join(",")} From JOBCONSOLE WHERE $filter FETCH FIRST ROW ONLY""" as String)
        if (rows){
            writeReportFile.write(rows[0].keySet().join(delimiter)) // keySet gives us the column names

             // get job id first
             sql.eachRow("""SELECT "Job Id","Job Name" From JOBCONSOLE WHERE $filter""" as String){ row ->
                 sql.eachRow("""SELECT '' as "Parent Job", ${reqCols.join(",")} From JOBCONSOLE WHERE "Job Id"='${row["Job Id"]}'
                 UNION ALL
                 SELECT '${row["Job Name"]}' as "Parent Job", ${reqCols.join(",")} From JOBCONSOLE WHERE "Parent Id"='${row["Job Id"]}'
                 """ as String){result->
                      meta = result.getMetaData()
                      cols = meta.columnCount
                      vals = (0..<cols).collect{result[it]}
                      writeReportFile.append(System.getProperty("line.separator") + vals.join(delimiter))
                    }
                 }
        }
    } else {
    // Print column information into the file
    def rows = sql.rows("""SELECT ${reqCols.join(",")} From JOBCONSOLE FETCH FIRST ROW ONLY""" as String)
        if (rows){
            writeReportFile.write(rows[0].keySet().join(delimiter))
            sql.eachRow("""SELECT ${reqCols.join(",")} From JOBCONSOLE""" as String){result->
                  meta = result.getMetaData()
                  cols = meta.columnCount
                  vals = (0..<cols).collect{result[it]}
                  writeReportFile.append(System.getProperty("line.separator") + vals.join(delimiter))
                }
        }
    }
    println "Report created"
    println "-----"
    println "****"
}

// Check whether JobConsole Table exists
def createDerbyTable(Sql sql,sqlCols, tableName){
    def metadata = sql.connection.getMetaData()
    def jobConsoletable = metadata.getTables(null, null, tableName, null)
      if (!jobConsoletable.next()) {
          sql.execute """CREATE TABLE APP.$tableName ( ${sqlCols.join(" ,")}) """ as String
      }
}


// Read configuration script for the environment info.
def keyConfig = new ConfigSlurper().parse(new File('D:\\groovy_scripts\\Env_Info.config').toURL())

epbcsURL=keyConfig.serverEnvs.test.epbcsURL
epbcsUserName=keyConfig.serverEnvs.test.epbcsUserName
epbcsPassword=new String(keyConfig.serverEnvs.test.epbcsPassword.decodeBase64())
epbcsDomain=keyConfig.serverEnvs.test.epbcsDomain
apiVersion=keyConfig.serverEnvs.test.apiVersion
lcmVersion=keyConfig.serverEnvs.test.lcmVersion

PbcsRestClient pbcsClient=new PbcsRestClient("$epbcsURL/HyperionPlanning/rest/$apiVersion","$epbcsURL/interop/rest/$lcmVersion","","",epbcsDomain,epbcsUserName, epbcsPassword, "TRUE")
//def exportJobId=pbcsClient.exportJobConsole("applicationName", "", "RuleName", "JobConsoleExport.csv", "7")
def exportJobId=pbcsClient.exportJobConsole("applicationName", "", "", "JobConsoleExport.csv", "2")
// keep checking the status till it is complete
def jobStatus="Processing"
while (jobStatus=="Processing"){
    def response = pbcsClient.getJobStatus("applicationName",exportJobId)
    def object = new JsonSlurper().parseText(response)
    jobStatus=object.descriptiveStatus
    sleep 10*1000 //sleep for 10 seconds
}
println "Export Console Job $jobStatus"
if (jobStatus == "Completed"){
    pbcsClient.downloadFile("JobConsoleExport.zip", "D:\\groovy_scripts")
    //println readZipFile("D:\\groovy_scripts\\JobConsoleExport.zip","JobConsoleExport.csv")

    extractZipFile("D:\\groovy_scripts\\JobConsoleExport.zip","JobConsoleExport.csv","D:\\groovy_scripts")

    // create an in memory instance of derby database
    def sql = Sql.newInstance(
        url:'jdbc:derby:memory:epbcsDB;create=true',
        driver:'org.apache.derby.jdbc.EmbeddedDriver'
    )

    // read the first line of the csv file to create columns in DERBY Table
    def inputStream = new BOMInputStream(new FileInputStream("D:\\groovy_scripts\\JobConsoleExport.csv"))
    BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream))
    sqlCols=[]
    reader.readLine().split(",").each{
        sqlCols << "$it VARCHAR(2500)"
    }
    reader.close()
    inputStream.close()

    try{
        createDerbyTable(sql, sqlCols, "JOBCONSOLE")
        // insert file contents to the table
        derbyBulkInsert(sql, "JOBCONSOLE","D:\\groovy_scripts\\JobConsoleExport.csv",",")
        sql.execute('CREATE INDEX IDX_Console ON JOBCONSOLE("Job Id","Job Name","User Name")')
        // create report
        createJobConsoleReport(sql, "Job Name,Details,User Name,Run Status,Start Time,End Time", """"Job Name"='RuleName'""", "D:\\groovy_scripts\\ConsoleReport.txt", "|")

    } finally {
        sql.close()
    }
} else {
    println "Export process failed"
}

Yup, it is a considerably large script 😮

On line 776, I'm using ConfigSlurper to read my config file (you might have an idea of what this is by now; if not, please read my other Groovy posts).
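The config file itself isn't shown in the post; based on the keys the script reads, it would look something like this (server names and credentials below are placeholders):

```groovy
// D:\groovy_scripts\Env_Info.config – hypothetical shape, matching the keys the script reads
serverEnvs {
    test {
        epbcsURL      = "https://planning-test-yourdomain.pbcs.us2.oraclecloud.com"
        epbcsUserName = "serviceadmin"
        epbcsPassword = "cGFzc3dvcmQ="   // base64-encoded; the script decodes it
        epbcsDomain   = "yourdomain"
        apiVersion    = "v3"
        lcmVersion    = "v1"
    }
}
```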

On lines 778 to 783, I'm populating some variables from the config file.

On line 785, I'm creating the PbcsRestClient, and on line 787, I'm running the export process. You'll be able to use line 786 (filtering the export by rule name) after 19.12 (I hope so).

Lines 789 to 795 check whether the process has finished. I've seen that some exports can take longer to run, so the while loop makes sure the job completes before we proceed to the next step.
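The polling pattern can be sketched in plain Java (used here only for illustration); the status supplier below is a stand-in for the real getJobStatus REST call:

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

public class PollDemo {
    // Polls until the job leaves the "Processing" state, sleeping between checks.
    static String waitForJob(Supplier<String> statusCheck, long sleepMillis) throws InterruptedException {
        String jobStatus = "Processing";
        while (jobStatus.equals("Processing")) {
            jobStatus = statusCheck.get();          // in the real script: getJobStatus REST call + JSON parse
            if (jobStatus.equals("Processing")) {
                Thread.sleep(sleepMillis);          // back off before asking again
            }
        }
        return jobStatus;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated responses standing in for the descriptiveStatus field of the REST reply
        Iterator<String> fake = List.of("Processing", "Processing", "Completed").iterator();
        System.out.println(waitForJob(fake::next, 1));
    }
}
```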

If the job completes, the file gets downloaded – line 798.

Line 801 extracts a specific file from the zip file; in this case, the CSV that we exported.

Once the file gets downloaded, lines 804 to 807 create an in-memory Apache Derby database.

Now we need to read the file and create the required columns – lines 810 to 817. I'm reading the first line of the CSV; since (E)PBCS files carry a Byte Order Mark (BOM), I'm using Apache Commons IO's BOMInputStream to strip it so that the columns can be created cleanly.
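If you don't have Commons IO on the classpath, the same BOM handling can be done with the JDK alone; here is a minimal stand-in sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.nio.charset.StandardCharsets;

public class BomDemo {
    // Skips a UTF-8 BOM (EF BB BF) if the stream starts with one; a plain-JDK
    // stand-in for Apache Commons IO's BOMInputStream used in the post.
    static InputStream skipUtf8Bom(InputStream in) throws IOException {
        PushbackInputStream pb = new PushbackInputStream(in, 3);
        byte[] head = new byte[3];
        int n = pb.read(head, 0, 3);
        boolean bom = n == 3 && (head[0] & 0xFF) == 0xEF
                             && (head[1] & 0xFF) == 0xBB
                             && (head[2] & 0xFF) == 0xBF;
        if (!bom && n > 0) pb.unread(head, 0, n);   // not a BOM: push the bytes back
        return pb;
    }

    public static void main(String[] args) throws IOException {
        byte[] withBom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'J', 'o', 'b'};
        InputStream clean = skipUtf8Bom(new ByteArrayInputStream(withBom));
        System.out.println(new String(clean.readAllBytes(), StandardCharsets.UTF_8));
    }
}
```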

All the columns get created as VARCHAR. If you want different data types, you'll have to import the file yourself with a prepared statement (converting each column to the right type on insert), because SYSCS_IMPORT_TABLE does not offer an option to specify column formats. I hope they add support for this.
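Going the prepared-statement route means generating the INSERT yourself, double-quoting the identifiers because the export's headers contain spaces. A minimal sketch in plain Java (table and column names are just examples):

```java
import java.util.List;
import java.util.stream.Collectors;

public class InsertSqlDemo {
    // Builds a parameterized INSERT suitable for a PreparedStatement,
    // double-quoting column names since the export's headers contain spaces.
    static String buildInsert(String table, List<String> cols) {
        String colList = cols.stream().map(c -> '"' + c + '"').collect(Collectors.joining(","));
        String params  = cols.stream().map(c -> "?").collect(Collectors.joining(","));
        return "INSERT INTO " + table + " (" + colList + ") VALUES (" + params + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildInsert("JOBCONSOLE", List.of("Job Id", "Job Name", "User Name")));
    }
}
```

You would then set each parameter with the appropriate setter (setInt, setTimestamp, and so on) while looping over the CSV rows.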

Line 818 creates the JOBCONSOLE table if it doesn't already exist. Line 820 imports the CSV file extracted from the Cloud into the JOBCONSOLE table.

Once the data is loaded, we create an index – line 821. It is important for creating our report.

Since the column names contain spaces, we need to wrap them in double quotes whenever we reference them; that is why I'm using a GString to supply the filter parameter – line 823. You can read that as "Job Name"='RuleName'. If you need to filter on two columns, the filter variable becomes """"Job Name"='RuleName' AND "User Name"='username'""".

The function that creates the report – createJobConsoleReport (lines 719 to 761) – uses some interesting Groovy features.

Lines 722-724 split the comma-separated column names you asked for into a list and add the double quotes needed.

Line 726 uses the "Groovy Truth" to determine whether you supplied a filter.

Line 728 extracts one row of information to find out whether there is any data matching the supplied filter.

If there is data – line 729 – it then queries the Job Id and Job Name with the given filter. Then, for each row with a job id, we run one select to get the current job and another (combined via UNION ALL) to get its child jobs – lines 736-737.

Lines 738-741 are where we collect the information for our report. First, we count the columns present in ResultSetMetaData, and then we run a collect to gather all the column values – line 740.
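The same idiom can be expressed with Java streams; the list below stands in for one JDBC row (in the script, result[it] fetches the column at that 0-based index):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CollectDemo {
    public static void main(String[] args) {
        // Stand-in for one JDBC row's column values
        List<Object> row = List.of("42", "RuleName", "Completed");
        int cols = row.size();                       // in the script: meta.getColumnCount()
        String delimiter = "|";
        // Java-stream equivalent of Groovy's (0..<cols).collect { result[it] }.join(delimiter)
        String line = IntStream.range(0, cols)
                               .mapToObj(i -> String.valueOf(row.get(i)))
                               .collect(Collectors.joining(delimiter));
        System.out.println(line);
    }
}
```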

That is a powerful way to get information out (considering it takes a single line of code to get all that!).

That’s all. Easy Peasy 🙂

Here is the expected output with the parameters and their supplied values.
