Importing CSV and XLSX files to your Betty Blocks application
What is the Import CSV/XLS(X) step?
The Import CSV/XLS(X) step does what the name says: it imports .csv or Excel file data into your application. The action step can be found in the Block Store.
Uploading a file
Normally, files are uploaded in the front end of the application. If you want to learn how to upload documents to your app using a create form, please visit our article here.
For the sake of simplicity, we will place the file in the public files of the application.
In this use case, we'll be using a .csv file with mock data from the https://www.mockaroo.com/ website; feel free to use it as well. The columns we created were:
- First_name
- Last_name
- Ip_address
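A file with those columns might look like the following. The rows below are hypothetical mock data, shown here being parsed with Python's standard csv module purely to illustrate the structure; Betty Blocks does this parsing for you.

```python
import csv
import io

# Hypothetical mock data with the three columns described above.
mock_csv = """First_name,Last_name,Ip_address
Alice,Jansen,192.168.0.12
Bob,de Vries,10.0.0.7
"""

# Parse the file content; each row becomes a dict keyed by column name.
rows = list(csv.DictReader(io.StringIO(mock_csv)))
print(rows[0]["First_name"])  # → Alice
```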
After creating the mock data, you need to upload your file to the application, which is done as follows:
You will first need to copy the link, which can be found on the public files page. The link will be used in the 'Import CSV/XLS(X) file' action:
Model setup
To save the data to your application, you need to create a model where it will be stored. The model will contain the same properties as the columns in your .csv file; ours will match our mock data file:
- First_name
- Last_name
- IP_address
Action setup
Now that your model has been created, you can start setting up your Import CSV/XLS(X) action. We will do so inside an Action button to make testing easier.
1. Create your Action button and put the action inside it (we're using an Action button to make testing the action step easier).
2. First, insert the file URL. Here we will be using the link of the file we saved in the public files.
4. Now you choose which model you want to save your CSV data in; here you will select the model we just created.
5. Now you map the field names in your CSV file to the properties of the model you created. There are 3 different ways of mapping:
- First, we will be using import mapping where: KEY = import column name, VALUE = database name (in snake_case) of the property. For example:
- Second, default mapping where: KEY = database name of the property or relation, VALUE = the property value or ID of the relational record.
- Third, a key/value combination that allows you to specify whether a column is a checkbox, decimal, number, or a specific date format.
Text fields will all work out of the box and should not be included here.
The key column specifies the CSV column name. The value column specifies either "checkbox", "decimal", "number", "Date", "Time", or "Datetime". For the Date, Time, and Datetime options, you will need to specify how the item is formatted in your file. The notation is as follows: "Date, date-fns format", "Time, date-fns format", or "Datetime, date-fns format". For example: "Date, dd-MM-yyyy" or "Date, MM/dd/yyyy". We use the formats defined by the date-fns package; the specifications can be found at https://date-fns.org/v2.16.1/docs/format. Make sure a mapping exists for each of your date columns in the import file, otherwise those columns will not be imported correctly.
See the examples below for the supported patterns.
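Conceptually, the mappings above are plain key/value pairs. The sketch below is not Betty Blocks code: it restates the mappings as Python dicts (using the column names from this article, plus a hypothetical Signup_date column) and uses Python's strptime as a stand-in for date-fns parsing, since the date-fns pattern "dd-MM-yyyy" corresponds to strptime's "%d-%m-%Y".

```python
from datetime import datetime

# KEY = import column name, VALUE = database name of the property.
import_mapping = {
    "First_name": "first_name",
    "Last_name": "last_name",
    "Ip_address": "ip_address",
}

# KEY = CSV column name, VALUE = type, optionally with a date-fns format.
# Text columns work out of the box and are not listed here.
type_mapping = {
    "Signup_date": "Date, dd-MM-yyyy",  # hypothetical date column
}

# The date-fns pattern "dd-MM-yyyy" maps to "%d-%m-%Y" in Python's strptime,
# used here only to show what the mapping means for a value like "24-12-2023".
parsed = datetime.strptime("24-12-2023", "%d-%m-%Y")
print(parsed.date())  # → 2023-12-24
```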
6. Now you choose by which property your data is indexed into the model (here you select the column name in the CSV file and its data type). We will use first_name in this example.
Your action step is now done. Next, we will explain the extra options:
Update import mapping for existing records
It follows the same principle as the IMPORT MAPPING, but these mappings will only be used for existing records. For new records, the IMPORT MAPPING options will be used.
Leave this empty if you do not want to distinguish between the columns imported on creates and those imported on updates.
Batch options
- Turn on batched processing for the import.
When on, the system will use multiple calls (batches) to process the import and store the progress in the model/properties selected below. This can be useful if you have huge amounts of data and the import can't finish in a single run.
- The model to store the batch size and the current offset while running the import.
- The (number) property to store the size of each batch.
- Batch size defines the number of records to process in a batch. We'll set it to a maximum of 5000.
- The offset property is a number property that keeps track of the current position in the dataset during a batched import.
- The file name property stores the imported file's name to uniquely identify and track the import process.
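The batching options above can be sketched conceptually: the import is split into fixed-size chunks, and the stored offset records how far the previous run got so the next run can resume there. This is a hypothetical Python sketch, not the platform's implementation.

```python
# Conceptual sketch of batched processing (not Betty Blocks code):
# process `batch_size` records per run and persist the offset so the
# next run can resume where the previous one stopped.
records = [{"first_name": f"user{i}"} for i in range(12)]  # pretend CSV rows
batch_size = 5   # the platform caps this at 5000
offset = 0       # stored in the (number) offset property between runs

processed = 0
while offset < len(records):
    batch = records[offset:offset + batch_size]
    # ... create/update a model record for each row in `batch` ...
    processed += len(batch)
    offset += len(batch)  # persisted so a later call resumes here

print(processed)  # → 12
```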
- Turn on logging for this action:
When selected, debug information will be logged to the logs page.
- Validate required columns
When set to true, add a * after the name in the KEY fields of the Import Mapping for the columns that you would like to validate as required (for example, first_name*).
- Result
This is the (Text) output of the function and will contain the number of records that have been created and the number of records updated in the following format:
records created: 1, records updated: 1
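Because the result text follows that fixed shape, a later step can split it back into numbers if needed. A small hypothetical sketch of parsing that exact format:

```python
import re

# The step's Result output follows the fixed shape
# "records created: X, records updated: Y".
result = "records created: 1, records updated: 1"
match = re.match(r"records created: (\d+), records updated: (\d+)", result)
created, updated = int(match.group(1)), int(match.group(2))
print(created, updated)  # → 1 1
```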
Run action
Now we run the action: here we will save the CSV file by simply clicking the Action button we created earlier. To better see the result, we will also create a DataTable showing the information in the model you created (the Client model in this case).
To see the result instantly we will add the following interaction to the Action button:
After adding the interaction to your Action button, compile the page. Once it has compiled, you will see the result when you click the button.
So now you have saved your CSV information into your application and know how to import CSV files into your models. For a more detailed and technical explanation of this action step, please visit the GitHub repository here.
To see our article on exporting CSV files from data models, please visit this article.