Extract Planning dimensions in a level format


I sometimes try to answer questions on Cloud Customer Connect, and I found a question posted there really interesting.

Joe wanted to know whether PBCS dimensions can be exported in a level format. I thought about that for a while and came up with two options.

  1. Export the outline, create an On-Premises cube, and use the Outline Extractor.
  2. Use Groovy to generate the file for you.

The first option is only feasible if you have an On-Premises Essbase server, so that idea got rejected pretty fast.

I took a quick look at the Vision application and picked Product to demonstrate what the extract will look like.

P_000 is a level 0 member, while P_TP is both a level 1 and a level 2 member: level 1 relative to P_000 and level 2 relative to P_100.
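To picture that, here is an illustrative fragment of the hierarchy (the member under P_100 is hypothetical). Levels are counted up from the leaves, so a ragged hierarchy gives a member more than one level:

Product
  P_TP            <- level 1 through P_000, level 2 through P_100
    P_000         <- level 0
    P_100         <- level 1
      P_100_child <- level 0 (hypothetical member)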

Let’s use the OlapUnderground Outline Extractor and see what the file looks like.

There you go: a level-based dimension extract from Planning. Now, how can we do this in the Cloud?

Groovy to the rescue

The idea is to extract all the dimensions using LCM, download them, and generate the file.

I’m making use of the Apache Derby database to load the dimensions and perform the level-based dimension extract.

This Groovy script uses PbcsRestClient, which is published by the CEAL team. I’ve added some functions to it.

This code uses a ConfigSlurper, which reads all the information required to connect to PBCS.

// Read configuration script for the environment info.
def keyConfig = new ConfigSlurper().parse(new File('D:/groovy_scripts/Env_Info.config').toURL())
epbcsURL=keyConfig.serverEnvs.pbcs.epbcsURL
epbcsUserName=keyConfig.serverEnvs.pbcs.epbcsUserName
epbcsPassword=new String(keyConfig.serverEnvs.pbcs.epbcsPassword.decodeBase64())
epbcsDomain=keyConfig.serverEnvs.pbcs.epbcsDomain
appName=keyConfig.serverEnvs.pbcs.appName
apiVersion=keyConfig.serverEnvs.pbcs.apiVersion
lcmVersion=keyConfig.serverEnvs.pbcs.lcmVersion
PbcsRestClient pbcsClient=new PbcsRestClient("$epbcsURL/HyperionPlanning/rest/$apiVersion","$epbcsURL/interop/rest/$lcmVersion","","",epbcsDomain,epbcsUserName, epbcsPassword,appName,"TRUE")
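For reference, here is a minimal sketch of what Env_Info.config could contain, assuming the serverEnvs.pbcs structure the script reads. All values below are placeholders; the password is stored Base64 encoded, which is why the script calls decodeBase64() on it.

// Env_Info.config - illustrative placeholders only
serverEnvs {
    pbcs {
        epbcsURL = "https://planning-test-mydomain.pbcs.us2.oraclecloud.com" // hypothetical pod URL
        epbcsUserName = "serviceadmin"
        epbcsPassword = "cGFzc3dvcmQ=" // Base64 encoded placeholder
        epbcsDomain = "mydomain"
        appName = "Vision"
        apiVersion = "v3"
        lcmVersion = "11.1.2.3.600"
    }
}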

Once connected, I execute an existing LCM snapshot that exports all the dimensions.

// run LCM export
def lcmJobResponse=pbcsClient.executeLCMExport("Export_Dimensions")
def lcmObject = new JsonSlurper().parseText(lcmJobResponse)

Once the export is triggered, we check the status of the LCM export by picking the job status link out of the response.

// get the status of the lcm
// we cannot use the job id to check the status as this is a migration job
// only one link will be the job status link, hence the .find
def lcmJobLink = lcmObject.links.find{it["rel"] == "Job Status"}["href"]
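For context, the JSON that comes back from the migration call looks roughly like this (abridged and illustrative; the href values and jobId are hypothetical, and only the rel values matter for the .find):

{
  "status": -1,
  "links": [
    { "rel": "self", "href": "https://<pod>/interop/rest/11.1.2.3.600/applicationsnapshots/Export_Dimensions/migration?q={type:export}" },
    { "rel": "Job Status", "href": "https://<pod>/interop/rest/11.1.2.3.600/applicationsnapshots/Export_Dimensions/migration?jobId=42" }
  ]
}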

I’m then checking the status of the job repeatedly (every 10 seconds) until it completes. A jobStatus of -1 denotes that the job is processing; a jobStatus of 0 is a successful completion. The while loop will exit once the jobStatus is no longer -1 (“Processing”).

// keep checking the status of the dimension LCM export till it is complete
def jobStatus=-1
while (jobStatus == -1){
   def object = new JsonSlurper().parseText(pbcsClient.getLCMStatus(lcmJobLink))
   jobStatus=object.status
   sleep 10*1000 //sleep for 10 seconds
}

Once the LCM export is done, I download the LCM file and extract it to a folder. A Derby database is created in memory, and then the dimensions are loaded into their own tables.

if (jobStatus == 0){
	pbcsClient.downloadFile("Export_Dimensions", "D:/groovy_scripts")
	pbcsClient.unZipWithFolders("D:/groovy_scripts/Export_Dimensions","D:/groovy_scripts/Dimension Extract","true")
	
    // url:'jdbc:derby:memory:LevelDimExtract;create=true',
    // url:'jdbc:derby:D:/groovy_scripts/LevelDimExtract;create=true',
    // create an in memory instance of derby database
    def sql = Sql.newInstance(
        url:'jdbc:derby:memory:LevelDimExtract;create=true',
        driver:'org.apache.derby.jdbc.EmbeddedDriver'
    )
    generateLevelReport(sql,"D:/groovy_scripts/Dimension Extract/HP-$appName", "D:/groovy_scripts/DimExtract/$appName", "," ,"LevelDimExtract")
} else {
    println "Export process failed"
}
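After the unzip, the folder layout that generateLevelReport works against looks something like this (the paths are inferred from the code; the folder under resource is illustrative):

D:/groovy_scripts/Dimension Extract/
    HP-Vision/
        info/
            listing.xml
        resource/
            Global Artifacts/Common Dimensions/Dimensions/
                Product.csv
                Version.csv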

First, we find out which dimensions were exported by reading the LCM listing XML file, which also gives us the path to each dimension’s CSV file.

// read the listing file for all the dimensions from the listing xml
plnDIMXML = new XmlParser().parse(new File("$lcmFolder/info/listing.xml").toURI().toString())
// find the names of the dimensions and their paths and add to a map
allDims=plnDIMXML.resource.findAll{it.@path.matches("(.*)Dimensions(.*)")}.collectEntries{[it.@name,it.@path]}
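The listing.xml itself is roughly shaped like this (abridged and illustrative; element names and paths are assumptions based on how the code parses it). The matches("(.*)Dimensions(.*)") filter keeps only the resource nodes whose path runs through a Dimensions folder:

<?xml version="1.0" encoding="UTF-8"?>
<listing>
    <resource name="Product" path="Global Artifacts/Common Dimensions/Dimensions"/>
    <resource name="Version" path="Global Artifacts/Common Dimensions/Dimensions"/>
    <resource name="Data Load Settings" path="Global Artifacts/Common Dimensions"/>
</listing>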

Each dimension file is then read to work out where the dimension extract information starts. The CSV content begins after a line containing a specific pattern (#--!).

def dimLine=0
def inputStream = new FileInputStream(dimPath)
def reader = new BufferedReader(new InputStreamReader(inputStream))
// All dimension files have XML content in them.
// The csv information starts after a specific pattern
// We are checking what line the pattern is on
new File(dimPath).eachLine(){line, lineNumber ->
    if(line.contains("#--!")) {
        dimLine=lineNumber
    }
}
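To see why that scan is needed, here is an abridged, illustrative view of the top of a dimension CSV (the exact XML content and columns are assumptions). Everything above the marker is XML, the line right after the marker is the column header, and the member rows follow:

<?xml version="1.0" encoding="UTF-8"?>
... LCM artifact XML ...
#--!
Product,Parent,Alias: Default,Data Storage,...
P_000,P_TP,Product 000,store,...
P_100,P_TP,Product 100,store,...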

I’m then creating a table with three fields: Child, Parent, and DataStorage.

// Only going to use three columns for this (Child, Parent, and Storage)
sqlCols=[]
['CHILD','PARENT','DATASTORAGE'].each{
    sqlCols << """"${it}" VARCHAR(2500)"""
}
// Create the table if it doesn't exist
// If it does then truncate the table
createDerbyTable(sql, sqlCols, dimName)
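For the Product dimension, the call above boils down to DDL along these lines (createDerbyTable, shown in the full code, creates the table in Derby’s default APP schema, or truncates it if it already exists):

CREATE TABLE APP."PRODUCT" (
    "CHILD" VARCHAR(2500),
    "PARENT" VARCHAR(2500),
    "DATASTORAGE" VARCHAR(2500)
)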

I’m then reading everything from the header to the end of the file. Using the header line (the line right after the pattern), I find the position of “Data Storage” and store it in the variable storIndex.

From each line, the Child, Parent, and Data Storage fields are added to a prepared statement (think of it as a bulk insert) to improve processing speed.

// Using a SQL Prepared Statement to load data in bulk
def insertDIMStmt = """ INSERT INTO "${dimName}" ("CHILD","PARENT","DATASTORAGE") values (?,?,?)""" as String
// get the index of the Data Storage field
def storIndex = 0
// read each line from the CSV start line
// Planning splits formulas across multiple lines; the trick here is
// to check for " in the lines, and if a line has a ", skip it
reader.eachLine(){line,lineNumber ->
    if (lineNumber > dimLine+1) {
        def mbrLines=line.split(",")
        if (mbrLines.size()>=3) {
            if (!(mbrLines[0].matches("(.*)\"(.*)") || mbrLines[1].matches("(.*)\"(.*)") || mbrLines[storIndex].matches("(.*)\"(.*)"))){
                sql.withBatch(500, insertDIMStmt) { stmt ->
                    stmt.addBatch(mbrLines[0].trim(), mbrLines[1].trim(), mbrLines[storIndex].trim())
                }
            }
        }
    } else if (lineNumber == dimLine + 1){
        storIndex = line.split(",").findIndexOf { it.trim() == "Data Storage" }
    }
}

Now that the dimension information is loaded into the table, it is time to work out the level information.

This is done by selecting all level 0 members along with their parents and storing them in a separate table (the i == 0 branch below). If there are no level 0 members, which will only happen if the dimension has no members at all, the while loop exits.

def x = true
def i = 0
while(x) {
    if (i==0){
        def mbrRows = sql.rows("""select distinct x.CHILD,x.PARENT from "$dimName" x left join "$dimName" y on x.CHILD = y.PARENT where y.PARENT is null AND x.DATASTORAGE <> 'shared'
                    union all
                    select distinct CHILD,PARENT from "$dimName" WHERE DATASTORAGE ='shared'""" as String)
        if (mbrRows.size() == 0) {
            x=false
        } else {
            def cols = []
            // Only going to use two columns for this (Child and Parent)
            ['CHILD','PARENT'].each{
                cols << """"${it}" VARCHAR(2500)"""
            }
            // Create the table if it doesn't exist
            // If it does then truncate the table
            createDerbyTable(sql, cols, "LEV0${dimName}")
            def insertStmt = """INSERT INTO "LEV0${dimName}" ("CHILD","PARENT") values (?,?)""" as String
            mbrRows.each{
                sql.withBatch(500, insertStmt) { stmt ->
                    stmt.addBatch(it['CHILD'], it['PARENT'])
                }
            }
        }
        i+=1
    } else {
        def levMbrs = sql.rows("""select distinct CHILD,PARENT FROM "$dimName" WHERE CHILD IN (SELECT DISTINCT PARENT FROM "LEV${i-1}$dimName") AND DATASTORAGE <> 'shared'""" as String)
        if (levMbrs.size() == 0) {
            x=false
        } else {
            def cols = []
            // Only going to use two columns for this (Child and Parent)
            ['CHILD','PARENT'].each{
                cols << """"${it}" VARCHAR(2500)"""
            }
            // Create the table if it doesn't exist
            // If it does then truncate the table
            createDerbyTable(sql, cols, "LEV${i}${dimName}")
            def insertStmt = """INSERT INTO "LEV${i}${dimName}" ("CHILD","PARENT") values (?,?)""" as String
            levMbrs.each{
                sql.withBatch(500, insertStmt) { stmt ->
                    stmt.addBatch(it['CHILD'], it['PARENT'])
                }
            }
        }
        i+=1
    }
}

All the level 0 members are stored in a table called LEV0<dimName>, and the variable i is incremented so that the rest of the levels can be processed.

Each subsequent level gets its child and parent information by querying the children that appear as parents at the previous level. Every level is stored in its own table (the else branch above).
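Using the illustrative Product fragment from earlier, the level tables would fill up roughly like this (shared members and the dimension root left aside):

LEV0PRODUCT (CHILD, PARENT): (P_000, P_TP), (P_100_child, P_100)
LEV1PRODUCT (CHILD, PARENT): (P_TP, Product), (P_100, P_TP)
LEV2PRODUCT (CHILD, PARENT): (P_TP, Product)

P_TP lands in both LEV1 and LEV2 because the hierarchy is ragged; the loop stops once a level query returns no rows.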

Now that all the level information is stored, it’s time to get the extract.

I’m preparing the SQL statement needed for each dimension and its levels with the trick below.

def levExtractSelectStmt="LEV0${dimName}.CHILD AS Level0, LEV0${dimName}.PARENT AS Level1"
def levExtractJoinStmt=""
if (i > 2) {
    (1..i-2).each{
        levExtractSelectStmt=levExtractSelectStmt + ", LEV${it}${dimName}.PARENT AS Level${it+1}"
        levExtractJoinStmt=levExtractJoinStmt + " LEFT JOIN LEV${it}${dimName} ON LEV${it-1}${dimName}.PARENT=LEV${it}${dimName}.CHILD"
    }
}

The Product dimension, which builds three level tables, produces the following SQL statement.

SELECT LEV0PRODUCT.CHILD AS Level0, LEV0PRODUCT.PARENT AS Level1, LEV1PRODUCT.PARENT AS Level2, LEV2PRODUCT.PARENT AS Level3
FROM LEV0PRODUCT
LEFT JOIN LEV1PRODUCT ON LEV0PRODUCT.PARENT=LEV1PRODUCT.CHILD
LEFT JOIN LEV2PRODUCT ON LEV1PRODUCT.PARENT=LEV2PRODUCT.CHILD

A two-level dimension like Version produces the following SQL (it never enters the if condition).

SELECT LEV0VERSION.CHILD AS Level0, LEV0VERSION.PARENT AS Level1 FROM LEV0VERSION

Now that the SQL is prepared, it’s time to execute it and write the extract.

def writeExtractFile = new File("$reportPath/${reportName}_${dimName}.txt")
def levelRows = sql.rows("SELECT $levExtractSelectStmt FROM LEV0${dimName} $levExtractJoinStmt" as String)
writeExtractFile.write(levelRows[0].keySet().join(delimiter))
levelRows.each{result->
    cols = result.size()
    vals = (0..<cols).collect{(result[it] ? result[it].trim() : "")}
    writeExtractFile.append(System.getProperty("line.separator") + vals.join(delimiter))
}

Each dimension is written to its own file (specified by reportPath and reportName). levelRows is a list of GroovyRowResult objects (think of each as a map), and its keySet provides all the columns present, which becomes the header of each file.

Each row is then written to the file using a collect statement. A ternary operator checks whether each value is null; if it is, a blank string is written instead.
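For a dimension that ends up with four level columns, the finished file would look something like this (hypothetical rows based on the illustrative fragment; Derby upper-cases the unquoted column aliases, hence the header):

LEVEL0,LEVEL1,LEVEL2,LEVEL3
P_000,P_TP,Product,
P_100_child,P_100,P_TP,Product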

That’s it! Here is the level-based extract of Product, generated using the full code below.

Full Code

package com.oracle.ceal
@Grab('org.apache.derby:derby:10.9.1.0')
@Grab('commons-io:commons-io:2.6')
@GrabConfig(systemClassLoader=true)
import javax.net.ssl.HostnameVerifier
import javax.net.ssl.HttpsURLConnection
import javax.net.ssl.SSLContext
import javax.net.ssl.TrustManager
import javax.net.ssl.X509TrustManager
import java.net.HttpURLConnection
import java.util.zip.ZipFile
import static java.sql.ResultSet.*
import java.nio.file.Files
import java.nio.file.Paths
import groovy.json.*
import groovy.sql.Sql
import groovy.io.FileType
class PbcsRestClient {
    private HttpURLConnection connection
    private def planningUrl
    private def interopUrl
    private def proxyHost
    private def proxyPort
    private def user
    private def pwd
    private def appName
    private def domain
    private def ignoreSSLCertsErrors
    public PbcsRestClient(planningServerUrl, interopServerUrl,httpProxyHost, httpProxyPort, identityDomain,username, password, applicationName, ignoreSSLCertificationPathErrors) {
        planningUrl=planningServerUrl
        interopUrl=interopServerUrl
        proxyHost=httpProxyHost
        proxyPort=httpProxyPort
        domain=identityDomain
        user=username
        pwd=password
        appName=applicationName
        ignoreSSLCertsErrors=ignoreSSLCertificationPathErrors
    }
    def setProxyParams() {
        Properties systemProperties = System.getProperties()
        systemProperties.setProperty("http.proxyHost",proxyHost)
        systemProperties.setProperty("http.proxyPort",proxyPort)
        systemProperties.setProperty("https.proxyHost",proxyHost)
        systemProperties.setProperty("https.proxyPort",proxyPort)
    }
    def setSSLParams() {
        if (ignoreSSLCertsErrors !=null && ignoreSSLCertsErrors.toUpperCase()=="TRUE") {
            println "Ignoring SSL certification path errors"
            // Disable SSL cert validation
            def hostnameVerifier = [
                verify: { hostname, session -> true }
            ]
            def trustManager = [
                    checkServerTrusted: { chain, authType -> },
                    checkClientTrusted: { chain, authType -> },
                    getAcceptedIssuers: { null }
            ]
            HttpsURLConnection.setDefaultHostnameVerifier(hostnameVerifier as HostnameVerifier)
            SSLContext context = SSLContext.getInstance("SSL")
            context.init(null, [trustManager as X509TrustManager] as TrustManager[], null)
            HttpsURLConnection.setDefaultSSLSocketFactory(context.getSocketFactory())

        }
    }
    def openConnection(restUrl,method,localFileNameWithPathForStorage) {
        println "Opening connection to $restUrl with method:$method"
        int statusCode
        setProxyParams()
        setSSLParams()
        URL newUrl
        newUrl=new URL(restUrl)
        connection = (HttpURLConnection) newUrl.openConnection()
        connection.setDoOutput(true)
        connection.setDoInput(true)
        connection.setUseCaches(false)
        if (method=="")
            connection.setRequestMethod("GET")
        else
            connection.setRequestMethod(method)
        connection.setRequestProperty("Content-Type","application/x-www-form-urlencoded")
        String basicAuth = "Basic ${javax.xml.bind.DatatypeConverter.printBase64Binary("$domain.$user:$pwd".bytes)}"
        connection.setRequestProperty("Authorization", basicAuth)
        String response=""
        try {
            statusCode = connection.responseCode
            println "Connection status code: $statusCode "
            if (statusCode==401) {
                println "Not authorized"
            }
            if (statusCode==200) {
                println "Authentication succeeded"
                println "Server response:"
                println "-----"
                response=displayServerResponse(connection,localFileNameWithPathForStorage)
                println "-----"
            }
            if (statusCode==400) {
                println "Bad request"
                println "Server response:"
                println "-----"
                response=displayServerResponse(connection,"")
                println "-----"
            }
        } catch (Exception e) {
            println "Error connecting to the URL"
            println e.getMessage()
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }
        return response
    }
    def displayServerResponse(connection,localFileNameWithPathForStorage) {
        InputStream is;
        if (connection.getResponseCode()==200) {
            is=connection.getInputStream();
        } else {
            is=connection.getErrorStream();
        }
        println "Response Content-Type:${connection.getContentType()}"
        if (connection.getContentType().contains("application/json")) {
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = br.readLine()) != null) {
                sb.append(line+"\n");
            }
            br.close();
            println sb
            return sb.toString()
        } else {
        if (connection.getResponseCode()==200) {
        //storing content
            final int BUFFER_SIZE = 5 * 1024 * 1024;
            def fileExt = connection.getHeaderField("fileExtension");
            println "Downloading file with fileExtension header:$fileExt"
            if (fileExt!=null) {
                def saveFilePath = localFileNameWithPathForStorage;
                File f = new File(saveFilePath);
                is = connection.getInputStream();
                FileOutputStream outputStream = new FileOutputStream(f);
                int bytesRead = -1;
                byte[] buffer = new byte[BUFFER_SIZE];
                while ((bytesRead = is.read(buffer)) != -1) {
                    outputStream.write(buffer, 0, bytesRead);
                }
                println "Downloaded file to $localFileNameWithPathForStorage"
                outputStream.close()
                return localFileNameWithPathForStorage
            } else {
                println "Could not find fileExtension header"
            }
        }
        }
        is.close()
        return ""
    }
    def downloadFile(serverFileName,localFolderForStorage) {
        println "**Downloading file**"
        def restUrl="$interopUrl/applicationsnapshots/$serverFileName/contents"
        def response
        response=openConnection(restUrl,"GET","$localFolderForStorage/$serverFileName")
        println "****"
    }
	
    def executeLCMExport(snapshotName) {
        println "**Exporting snapshot**"
        def typeExport="{type:export}"
        def restUrl="$interopUrl/applicationsnapshots/$snapshotName/migration?q=$typeExport"
        def response
        response=openConnection(restUrl,"POST","")
        println "****"
        return response
    }
    def getLCMStatus(restUrl) {
        println "**Getting snapshot status**"
        def response
        response=openConnection(restUrl,"GET","")
        println "****"
        return response
    }
    // Helper functions
    def unZipWithFolders(fileName, destinationFolder,overWrite){
        def ant = new AntBuilder()
        def ovrWrite
        if (overWrite) {
            ovrWrite=overWrite
        } else {
            ovrWrite = "false"
        }
        ant.unzip( src:fileName,
                    dest:destinationFolder,
                    overwrite:ovrWrite )
    }
}
    // Check whether table exists, if not create it
    def createDerbyTable(Sql sql,sqlCols, tableName){
        def metadata = sql.connection.getMetaData()
        def table = metadata.getTables(null, null, tableName.toUpperCase(), null)
        if (!table.next()) {
             sql.execute """CREATE TABLE APP."${tableName.toUpperCase()}" ( ${sqlCols.join(" ,")}) """ as String
        } else {
             sql.execute """TRUNCATE TABLE APP."${tableName.toUpperCase()}" """ as String
        }
    }
    // derby bulk insert file
    def derbyBulkInsert (Sql sql, String tableName, String fullFileName, String columnDelimiter) {
        println """**Loading Derby database table $tableName with data from $fullFileName and $columnDelimiter as delimiter **"""
        println "-----"
        // runs in replace mode since the last argument is a non-zero
        sql.execute("CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(null,'" + tableName.toUpperCase() + "','" + fullFileName + "','" + columnDelimiter + "',null,null,1)");
        println "Load completed"
        println "-----"
        println "****"
    }
    def generateLevelReport(sql,lcmFolder, reportPath, delimiter, reportName) {
        try {
            // read the listing file for all the dimensions from the listing xml
            plnDIMXML = new XmlParser().parse(new File("$lcmFolder/info/listing.xml").toURI().toString())
            // find the names of the dimensions and their paths and add to a map
            allDims=plnDIMXML.resource.findAll{it.@path.matches("(.*)Dimensions(.*)")}.collectEntries{[it.@name,it.@path]}
            if (allDims){
                allDims.each { dims ->
                    dimName = dims.key.toUpperCase()
                    dimPath = "$lcmFolder/resource/${dims.value}/${dims.key}.csv"
                    sqlCols=[]
                    def dimLine=0
                    def inputStream = new FileInputStream(dimPath)
                    def reader = new BufferedReader(new InputStreamReader(inputStream))
                    // All dimension files have XML content in them.
                    // The csv information starts after a specific pattern
                    // We are checking what line the pattern is on
                    new File(dimPath).eachLine(){line, lineNumber ->
                        if(line.contains("#--!")) {
                            dimLine=lineNumber
                        }
                    }
                    // Only going to use three columns for this (Child, Parent, and Storage)
                    ['CHILD','PARENT','DATASTORAGE'].each{
                       sqlCols << """"${it}" VARCHAR(2500)"""
                    }
                    // Create the table if it doesn't exist
                    // If it does then truncate the table
                    createDerbyTable(sql, sqlCols, dimName)
                    // Using a SQL Prepared Statement to load data in bulk
                    def insertDIMStmt = """ INSERT INTO "${dimName}" ("CHILD","PARENT","DATASTORAGE") values (?,?,?)""" as String
                    // get the index of the Data Storage field
                    def storIndex = 0
                    // read each line from the CSV start line
                    // Planning splits formulas across multiple lines; the trick here is
                    // to check for " in the lines, and if a line has a ", skip it
                    reader.eachLine(){line,lineNumber ->
                      if (lineNumber > dimLine+1) {
                          def mbrLines=line.split(",")
                          if (mbrLines.size()>=3) {
                            if (!(mbrLines[0].matches("(.*)\"(.*)") || mbrLines[1].matches("(.*)\"(.*)") || mbrLines[storIndex].matches("(.*)\"(.*)"))){
                                sql.withBatch(500, insertDIMStmt) { stmt ->
                                    stmt.addBatch(mbrLines[0].trim(), mbrLines[1].trim(), mbrLines[storIndex].trim())
                                }
                            }
                          }
                      } else if (lineNumber == dimLine + 1){
                        storIndex = line.split(",").findIndexOf { it.trim() == "Data Storage" }
                      }
                    }
                    reader.close()
                    inputStream.close()
                    // Create an INDEX on Child column
                    sql.execute("""CREATE INDEX "IDX_$dimName" ON "${dimName}" ("CHILD") """ as String)
                    def x = true
                    def i = 0
                    while(x) {
                        if (i==0){
                            def mbrRows = sql.rows("""select distinct x.CHILD,x.PARENT from "$dimName" x left join "$dimName" y on x.CHILD = y.PARENT where y.PARENT is null AND x.DATASTORAGE <> 'shared'
                                        union all
                                        select distinct CHILD,PARENT from "$dimName" WHERE DATASTORAGE ='shared'""" as String)
                            if (mbrRows.size() == 0) {
                                x=false
                            } else {
                                def cols = []
                                // Only going to use two columns for this (Child and Parent)
                                ['CHILD','PARENT'].each{
                                    cols << """"${it}" VARCHAR(2500)"""
                                }
                                // Create the table if it doesn't exist
                                // If it does then truncate the table
                                createDerbyTable(sql, cols, "LEV0${dimName}")
                                def insertStmt = """INSERT INTO "LEV0${dimName}" ("CHILD","PARENT") values (?,?)""" as String
                                mbrRows.each{
                                    sql.withBatch(500, insertStmt) { stmt ->
                                        stmt.addBatch(it['CHILD'], it['PARENT'])
                                    }
                                }
                            }
                            i+=1
                        } else {
                            def levMbrs = sql.rows("""select distinct CHILD,PARENT FROM "$dimName" WHERE CHILD IN (SELECT DISTINCT PARENT FROM "LEV${i-1}$dimName") AND DATASTORAGE <> 'shared'""" as String)
                            if (levMbrs.size() == 0) {
                                x=false
                            } else {
                                def cols = []
                                // Only going to use two columns for this (Child and Parent)
                                ['CHILD','PARENT'].each{
                                    cols << """"${it}" VARCHAR(2500)"""
                                }
                                // Create the table if it doesn't exist
                                // If it does then truncate the table
                                createDerbyTable(sql, cols, "LEV${i}${dimName}")
                                def insertStmt = """INSERT INTO "LEV${i}${dimName}" ("CHILD","PARENT") values (?,?)""" as String
                                levMbrs.each{
                                    sql.withBatch(500, insertStmt) { stmt ->
                                        stmt.addBatch(it['CHILD'], it['PARENT'])
                                    }
                                }
                            }
                            i+=1
                        }
                    }
                    def levExtractSelectStmt="LEV0${dimName}.CHILD AS Level0, LEV0${dimName}.PARENT AS Level1"
                    def levExtractJoinStmt=""
                    if (i > 2) {
                        (1..i-2).each{
                            levExtractSelectStmt=levExtractSelectStmt + ", LEV${it}${dimName}.PARENT AS Level${it+1}"
                            levExtractJoinStmt=levExtractJoinStmt + " LEFT JOIN LEV${it}${dimName} ON LEV${it-1}${dimName}.PARENT=LEV${it}${dimName}.CHILD"
                        }
                    }
                    def writeExtractFile = new File("$reportPath/${reportName}_${dimName}.txt")
                    def levelRows = sql.rows("SELECT $levExtractSelectStmt FROM LEV0${dimName} $levExtractJoinStmt" as String)
                    writeExtractFile.write(levelRows[0].keySet().join(delimiter))
                    levelRows.each{result->
                      cols = result.size()
                      vals = (0..<cols).collect{(result[it] ? result[it].trim() : "")}
                          writeExtractFile.append(System.getProperty("line.separator") + vals.join(delimiter))
                      }
                    println "Level based export of $dimName completed."
                }
            } else {
                println "No dimensions in the LCM"
            }
        } finally {
            sql.close()
        }
    }
// Read configuration script for the environment info.
def keyConfig = new ConfigSlurper().parse(new File('D:/groovy_scripts/Env_Info.config').toURL())
epbcsURL=keyConfig.serverEnvs.pbcs.epbcsURL
epbcsUserName=keyConfig.serverEnvs.pbcs.epbcsUserName
epbcsPassword=new String(keyConfig.serverEnvs.pbcs.epbcsPassword.decodeBase64())
epbcsDomain=keyConfig.serverEnvs.pbcs.epbcsDomain
appName=keyConfig.serverEnvs.pbcs.appName
apiVersion=keyConfig.serverEnvs.pbcs.apiVersion
lcmVersion=keyConfig.serverEnvs.pbcs.lcmVersion
PbcsRestClient pbcsClient=new PbcsRestClient("$epbcsURL/HyperionPlanning/rest/$apiVersion","$epbcsURL/interop/rest/$lcmVersion","","",epbcsDomain,epbcsUserName, epbcsPassword,appName,"TRUE")
// run LCM export
def lcmJobResponse=pbcsClient.executeLCMExport("Export_Dimensions")
def lcmObject = new JsonSlurper().parseText(lcmJobResponse)
// get the status of the lcm
// we cannot use the job id to check the status as this is a migration job
// only one link will be the job status link, hence the .find
def lcmJobLink = lcmObject.links.find{it["rel"] == "Job Status"}["href"]
// keep checking the status of the dimension LCM export till it is complete
def jobStatus=-1
while (jobStatus == -1){
   def object = new JsonSlurper().parseText(pbcsClient.getLCMStatus(lcmJobLink))
   jobStatus=object.status
   sleep 10*1000 //sleep for 10 seconds
}
println "LCM Export status $jobStatus"
if (jobStatus == 0){
	pbcsClient.downloadFile("Export_Dimensions", "D:/groovy_scripts")
	pbcsClient.unZipWithFolders("D:/groovy_scripts/Export_Dimensions","D:/groovy_scripts/Dimension Extract","true")
	
    // url:'jdbc:derby:memory:LevelDimExtract;create=true',
    // url:'jdbc:derby:D:/groovy_scripts/LevelDimExtract;create=true',
    // create an in memory instance of derby database
    def sql = Sql.newInstance(
        url:'jdbc:derby:memory:LevelDimExtract;create=true',
        driver:'org.apache.derby.jdbc.EmbeddedDriver'
    )
    generateLevelReport(sql,"D:/groovy_scripts/Dimension Extract/HP-$appName", "D:/groovy_scripts/DimExtract/$appName", "," ,"LevelDimExtract")
} else {
    println "Export process failed"
}
