The more data you import from your source systems, the more useful Cube becomes. Cube's flexibility allows data of various sorts to be imported. All source data is tied to what is known as a data table within Cube. Data tables have two main data constructs associated with them:
- Dimensions: These define the structure of the data from the source system. During the data import process, source dimensions are mapped to your dimensions within Cube itself to determine where that data lives.
- Transactions: These represent the raw data from the source system. For General Ledgers / ERPs this is typically journal entries / line items, but it could also be, for example, payroll records if the source data comes from an HRIS system.
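As a rough illustration of the two constructs, the records might be shaped like the sketch below. The field names here are hypothetical stand-ins, not Cube's actual schema:

```python
# Hypothetical shapes for the two constructs a data table tracks.
# Field names are illustrative only, not Cube's actual schema.

# A dimension describes the structure of the source data, e.g. a GL account.
dimension = {
    "id": "acct-4000",
    "name": "Revenue",
    "type": "account",
}

# A transaction is a raw line item from the source system, tagged with
# the dimensions it belongs to so it can be mapped during import.
transaction = {
    "date": "2024-01-31",
    "amount": 1250.00,
    "dimensions": {"account": "acct-4000", "department": "dept-sales"},
}
```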
Creating a Connection / Data Table
Before data can be imported from a source system, that source system needs to be connected to Cube. This happens through two primary constructs within Cube:
- Data Tables: Within Cube these are the primary objects for tracking data from a source system, handling everything from flat file uploads to sophisticated integrations.
- Connections: A connection is essentially a data table tied to some third-party system (e.g., QuickBooks Online, NetSuite).
Importing Source Dimensions
The first step to importing data for a given data table is importing the structure of that data from the source system into Cube. Source dimensions can be imported in one of two primary JSON structures: a "flat" structure, in which parent / child relationships are specified via paired IDs, or a "nested" structure, in which the children of a given dimension are sub-elements of the parent JSON object.
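The two shapes can be sketched as follows. The field names ("id", "parent_id", "children") are assumptions for illustration; consult the API docs for the exact payloads:

```python
# Sketch of the two JSON shapes for importing source dimensions.
# Field names ("id", "parent_id", "children") are illustrative assumptions.

# Flat: parent / child relationships expressed via paired IDs.
flat = [
    {"id": "1000", "name": "Assets", "parent_id": None},
    {"id": "1100", "name": "Cash", "parent_id": "1000"},
]

# Nested: children are embedded under their parent object.
nested = [
    {
        "id": "1000",
        "name": "Assets",
        "children": [{"id": "1100", "name": "Cash", "children": []}],
    }
]

def flatten(nodes, parent_id=None):
    """Convert the nested shape into the flat shape."""
    out = []
    for node in nodes:
        out.append({"id": node["id"], "name": node["name"], "parent_id": parent_id})
        out.extend(flatten(node.get("children", []), node["id"]))
    return out

assert flatten(nested) == flat  # both shapes encode the same hierarchy
```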
See the "Examples" section of the relevant endpoint in our API docs for more specific examples and options.
Note that updated dimensions should be sent to Cube on a relatively frequent basis to ensure Cube's understanding of your data's structure is as up to date as possible. Any new dimensions added to a data table will need to be mapped to a corresponding Cube dimension to support data imports in the next step.
Importing Source Data
Once dimensions are imported to the data table, the source data tied to those dimensions must be ingested. This is a multi-step process handled via a few API endpoints.
- Create an upload - This will create a container object that the actual uploaded data itself is associated with.
- Add parts to the upload - Each part is a chunk of data associated with the upload. The API has timeout limits (covered in our API docs), so depending on the amount of data to be ingested, it should be split into several independent parts.
- Initiate the processing of the upload - This will trigger a background process in Cube to take the raw data, validate each line item, and ingest it into our system.
- Run a data import - Now that the raw transactional data has been ingested into Cube, an import must be run to convert it into core Cube data using the relevant dimension mappings.
- Generally these are initiated through the Cube web portal by a user.
- For connections specifically, this can also be done via the API via this endpoint.
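Steps 1 through 3 above can be sketched roughly as follows. The endpoint paths and the `client` object here are hypothetical stand-ins, not Cube's actual API surface; see the API docs for the real endpoints and payloads:

```python
# A rough sketch of steps 1-3 of the upload flow. The endpoint paths and
# the `client` object are hypothetical stand-ins, not Cube's actual API.

def split_into_parts(records, max_records_per_part=500):
    """Split raw transactions into independent chunks so each request
    stays within the API's timeout limits."""
    return [
        records[i:i + max_records_per_part]
        for i in range(0, len(records), max_records_per_part)
    ]

def run_upload(client, data_table_id, records):
    # 1. Create the upload container object.
    upload = client.post(f"/data_tables/{data_table_id}/uploads")  # hypothetical path
    # 2. Add each chunk of data as an independent part of the upload.
    for part in split_into_parts(records):
        client.post(f"/uploads/{upload['id']}/parts", json=part)  # hypothetical path
    # 3. Kick off background validation and ingestion.
    client.post(f"/uploads/{upload['id']}/process")  # hypothetical path
```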
Steps 1 through 3 can happen as frequently as necessary to ensure Cube has the most up to date data from the source system. Note that while Cube may hold the latest transactional data, that does not necessarily mean the data fetched into spreadsheets, etc., is updated. Step 4 then takes this ingested data and imports it to the relevant spots in Cube, where it can be analyzed alongside all other relevant FP&A data.