The more data you import into Cube from source systems, the more useful Cube becomes. Cube is flexible enough to import data of many kinds. All source data is tied to what is known as a data table within Cube. Data tables have two main data constructs associated with them:
- Dimensions: These define the structure of the data from the source system. As part of the data import process, dimensions from the source system are mapped to the corresponding dimensions within Cube to determine where that data lives.
- Transactions: These represent the raw data from the source system. For General Ledgers/ERPs, these are typically journal entries/line items, but they could also be, for example, payroll records from an HRIS system (an illustrative example follows).
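For a General Ledger source, a single transaction line item might look like the following. This is purely illustrative; the field names are assumptions, not Cube's actual transaction schema (see the API docs for the real shape):

{
  "date": "2024-01-31",
  "amount": 1250.00,
  "description": "January consulting revenue",
  "dimensions": {
    "Account": 2
  }
}

Here the "Account" value references the ID of a source dimension (see Importing Source Dimensions below).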
Creating a Connection / Data Table
Before data can be imported from a source system, that system must be connected to Cube. This happens through two primary constructs within Cube:
- Data Tables: Within Cube, these are the primary objects for tracking data from a source system, handling everything from flat file uploads to sophisticated integrations. Learn more about data tables.
- Connections: A connection is essentially a data table tied to a third-party system (e.g., QuickBooks Online, NetSuite).
Importing Source Dimensions
The first step in importing data for a given data table is importing the structure of that data from the source system into Cube. Source dimensions can be imported in one of two JSON structures: a "flat" structure, in which parent/child relationships are specified via paired IDs, or a "nested" structure, in which the children of a given dimension are a sub-element of the parent JSON object. An example of the flat structure is shown below, followed by the equivalent nested structure.
For more specific examples and options, see the "Examples" section of the relevant endpoint in our API docs.
[
  {
    "name": "Account",
    "id": 1,
    "parent_id": null
  },
  {
    "name": "Income",
    "id": 2,
    "parent_id": 1
  },
  {
    "name": "Expense",
    "id": 3,
    "parent_id": 1
  },
  {
    "name": "Assets",
    "rollup_type": "BALANCE",
    "id": 6,
    "parent_id": 1
  }
]
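For comparison, the same hierarchy in the nested structure might look like this. The "children" key is an assumption used for illustration; see the API docs for the exact field name:

[
  {
    "name": "Account",
    "id": 1,
    "children": [
      { "name": "Income", "id": 2 },
      { "name": "Expense", "id": 3 },
      { "name": "Assets", "rollup_type": "BALANCE", "id": 6 }
    ]
  }
]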
Note that updated dimensions should be sent to Cube regularly so that Cube's understanding of your data's structure stays as current as possible. Any new dimensions added to a data table will need to be mapped to the corresponding Cube dimensions to support data imports in the next step.
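As a sketch of what a dimension refresh could look like in Python: the base URL, endpoint path, and authentication header below are placeholders, not Cube's documented API; consult the "Examples" section of the relevant endpoint in the API docs for the real request shape.

import requests

# Placeholder values -- substitute the real host, path, and credentials
# from the Cube API docs. None of these are Cube's actual API surface.
BASE_URL = "https://api.example.com"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}
DATA_TABLE_ID = 42  # hypothetical data table identifier

# Flat-structure payload: parent/child relationships via paired IDs.
dimensions = [
    {"name": "Account", "id": 1, "parent_id": None},
    {"name": "Income", "id": 2, "parent_id": 1},
    {"name": "Expense", "id": 3, "parent_id": 1},
    {"name": "Assets", "rollup_type": "BALANCE", "id": 6, "parent_id": 1},
]

response = requests.post(
    f"{BASE_URL}/data_tables/{DATA_TABLE_ID}/source_dimensions",
    headers=HEADERS,
    json=dimensions,
    timeout=30,
)
response.raise_for_status()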
Importing Source Data
After the dimensions are imported to the data table, the source data itself must be ingested and tied to those dimensions. This is a multi-step process handled via a few API endpoints (a sketch of the full flow follows these steps).
- Create an upload - This will create a container object that the actual uploaded data is associated with.
- Add parts to the upload - The parts of an upload are the chunks of data associated with it. The API has timeout limits (covered in our API docs), so depending on the amount of data to be ingested, it should be split into multiple independent parts.
- Initiate the processing of the upload - This will trigger a background process in Cube to take the raw data, validate each line item, and ingest it into our system.
- Run a data import - Now that the raw transactional data has been ingested into Cube, an import must be run to convert it into core Cube data using the relevant dimension mappings.
  - Generally, imports are initiated by a user through the Cube Workspace.
  - For connections specifically, an import can also be triggered via the API using this endpoint.
Steps 1 through 3 can happen as frequently as necessary to ensure Cube has the most up-to-date data from the source system. Note that even when Cube holds the latest transactional data, the data fetched into spreadsheets and other destinations is not necessarily updated until an import runs.
Step 4 then takes the ingested data and imports it into the relevant places in Cube, where it can be analyzed alongside all other relevant FP&A data.
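To make the sequence concrete, here is a minimal end-to-end sketch in Python. Every URL path, payload field, and response shape below is an assumption for illustration, not Cube's documented API; the real endpoints, schemas, and timeout limits are in the API docs.

import requests

# Placeholder values -- none of these are Cube's real API surface.
BASE_URL = "https://api.example.com"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}
DATA_TABLE_ID = 42   # hypothetical data table identifier
CONNECTION_ID = 7    # hypothetical connection identifier

def chunked(rows, size=1000):
    """Split transactions into independent parts to stay under the API's timeout limits."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def ingest_and_import(transactions):
    # Step 1: create an upload -- the container object the data is associated with.
    resp = requests.post(
        f"{BASE_URL}/data_tables/{DATA_TABLE_ID}/uploads",
        headers=HEADERS, timeout=30,
    )
    resp.raise_for_status()
    upload_id = resp.json()["id"]  # assumed response shape

    # Step 2: add parts -- one request per chunk of transactions.
    for part in chunked(transactions):
        requests.post(
            f"{BASE_URL}/uploads/{upload_id}/parts",
            headers=HEADERS, json={"transactions": part}, timeout=30,
        ).raise_for_status()

    # Step 3: initiate processing -- triggers background validation and ingestion.
    requests.post(
        f"{BASE_URL}/uploads/{upload_id}/process",
        headers=HEADERS, timeout=30,
    ).raise_for_status()

    # Step 4: run a data import. For connections this can be done via the API;
    # otherwise a user initiates it through the Cube Workspace.
    requests.post(
        f"{BASE_URL}/connections/{CONNECTION_ID}/imports",
        headers=HEADERS, timeout=30,
    ).raise_for_status()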