When I first started with OneStream data loads, it felt different; in a way, I liked that I didn’t have to deal with load rules :). However, it does have similarities to a load rule too.
If you want to load data into OneStream, you need to have a data source. You can load data from the following sources.

You can even create an export sequence and use that as a source!!!
Connectors are what you use to integrate directly with multiple external systems. We are going to use delimited files for this blog post.
Let’s create a data source, shall we?
Keep in mind that, to load data to a OneStream cube, you need to have a cube! Surprise, surprise (similar to an Essbase database or a Planning plan type).
Cube creation
In a cube, you can define which dimensions are part of the cube, and in the Integration tab, which ones are not going to take part in data loads. (That was easy, wasn’t it?)

In the “Sample” cube I’m only using one user-defined dimension: UD1 – Products.
Once that is set, you can enable/disable dimensions in the Integration tab. This is also the place where you’ll enable attributes (text and value). Don’t confuse them with Essbase/Planning attributes (they are similar, yet a lot different).


I’ve disabled the following:
- IC
- UD2–UD8
- Text Value
I’ve enabled the following:
- Attribute Text 1–3
- Attribute Value 1–2
Now that we’ve created the cube, it’s time to create that data source.
Data source creation

You can create a data source and associate it with a cube (similar to Essbase database load rules). OneStream is always about adding flexibility :), so if you want that data source to show only for certain scenario types, you can assign a type here. In the case above, this data source can only be used to load Actual data.
By default, a data source is created similar to the Essbase rule with Data enabled (as you may have noticed, you can only have one Data column in such load rules).
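For illustration, a delimited file matching this default single-Data-column layout might look like the sketch below. The column headers and member names are made-up examples, not taken from the actual source file:

```
Entity,Account,UD1,Period,Amount
Houston,Sales,Product A,Jan,100
Houston,Sales,Product B,Jan,250
```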

What if you’ve got multiple columns of data? I might have a file with all periods, or all accounts, coming in as columns.

For such cases, you can change the data structure of the data source to Matrix Data (it’s like a matrix: multiple columns of data).
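As a made-up example, a matrix-style file might carry one data column per period. Again, the headers and members below are illustrative, not from my actual file:

```
Entity,Account,UD1,Jan,Feb,Mar
Houston,Sales,Product A,100,120,130
Houston,Sales,Product B,250,240,260
```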
Similar to assigning the fields, you can assign a dimension to a column by selecting the column and using “Apply Selection,” or by typing in the column number.

What if I don’t have a column for a dimension in my file? In my case, there is no Flow dimension column, so I’m going to assign a static value to the Flow dimension.

My source file has data for multiple measures; how do we assign those fields now?

You can add and remove source dimensions in the data source. The trick here is to add a source dimension (in this case, Measures) and change the data type to “Matrix Text.”
What about the header, Celvin? How do we ignore that one? This is where a source dimension called “Bypass” comes in handy.

You can then bypass columns, lines, or positions.

You must remember not to leave the “Amount” column unassigned.

In this case, I assigned it to the Sales column again.
Load data
That completes the data source; let’s load the data. To load the data (similar to EAS -> Load data -> Select rule and Select file), you need to have an Import workflow profile.

Select the data source and transformation profile for the scenario type (or you can set it on “Default” and use the default settings).
Navigate to OnePlace -> Workflow

Use Import profile -> Load and Transform (if this is a file-based source, then you must upload the file first).
We got data from the file.

Run the validations.

Time to load. As you can see, OneStream did an unpivot operation (Measures moved from columns to rows) on all the matrix columns and made them a single data column.
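To make that unpivot concrete, here is a minimal Python sketch (not OneStream code) of what the Matrix load does conceptually: each measure column becomes its own row with a single Amount column. The column and member names (Entity, Period, Sales, COGS) are made-up examples.

```python
# Wide (matrix) records: one data column per measure.
rows = [
    {"Entity": "Houston", "Period": "Jan", "Sales": 100.0, "COGS": 60.0},
    {"Entity": "Houston", "Period": "Feb", "Sales": 120.0, "COGS": 70.0},
]

measure_columns = ["Sales", "COGS"]

# Unpivot: one output row per (input row, measure column) pair,
# with the measure name carried in its own field.
unpivoted = [
    {"Entity": r["Entity"], "Period": r["Period"], "Measure": m, "Amount": r[m]}
    for r in rows
    for m in measure_columns
]

for rec in unpivoted:
    print(rec)
```

Two wide rows with two measure columns each become four long rows, which is exactly the shape a single-Amount cube load expects.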

Matrix load completed, and I can see all the data in my very basic sample application 😉