Flow Building Best Practices

Learn how to build robust and scalable flows.


Getting started with the basics

Flow status

Once you start building a new flow, its status is always draft. In this mode, you can build, debug, and test. Once the flow is ready, set its status to active: only flows with status active are scheduled and executed by the scheduler; flows in draft or paused are not run automatically. If a flow throws repeated errors, or you want to prevent it from executing, set its status to paused. You can change the status in the flow's settings.

Flow triggers

There are several ways to trigger (run) a Flow:

  • Manual trigger - clicking the "Run Now" button in the Flow Builder

  • Schedule trigger - via the Schedule Helper

  • Webhook trigger - via the Webhook Helper or a Connector Webhook Trigger

  • Flow triggering another flow - via the Flow Trigger

Naming flows

Always include your initials (e.g. MM for Max Miller) as well as a date or version in the flow name, e.g. "Salesforce contacts sync with Zendesk MM 04.03.2020". This makes it easy for everyone on your team to quickly recognize who built the flow and whether it is up to date.

Default values

In order to easily check for variable existence and default to an empty string, use this pattern:

{{ property.country | default() }}

This defaults to an empty string if the field/value for country doesn’t exist.

{{ property.country | default("Germany") }}

This defaults to "Germany", which is particularly useful when, for example, you are dealing with an ERP system in which your Sales team only fills the country field when it is not Germany.
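Note that Jinja's default filter only applies when the variable is undefined. If the field can be present but empty, you can pass true as a second argument (a standard Jinja option) so that falsy values such as an empty string are replaced as well:

{{ property.country | default("Germany", true) }}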

Further best practices

  • Include some comments in the description of the Flow (under the edit button) so everyone knows what the flow is about

  • Edit each individual Connector description text on the workspace to tell others (or yourself when you return a month later) what each step does

  • Improve the short references (e.g. zen1 for Zendesk); ideally, rename them to something descriptive like zendeskContacts or zendesk_contacts, depending on your preference of camel case vs. snake case (see the sketch after this list)

  • Once a flow is ready to be deployed, set the status to "active" to let everyone know it is ready to go

  • Break the flow down into its components and try running (debugging) one step at a time rather than building everything at once. Once the first steps work, add more functionality.

  • Initially, use rather small test data sets to properly test the functionality of each step. For CSV, XML, or JSON files, you can e.g. upload test data to AWS S3, Google Drive, or Dropbox and use it from there. If you query a REST API, try filtering, e.g. by date range, to keep the data set small.

  • If you send emails, only send them to yourself for testing purposes.

  • If you need more space, you can zoom in and out by holding the CMD key on a Mac or the CTRL key on a PC and scrolling with your mouse.

  • You can copy an individual connector, including all its settings, by clicking on it and pressing CMD/CTRL + C followed by CMD/CTRL + V.

  • CMD/CTRL + Z and CMD/CTRL + Shift + Z undo and redo graphical adjustments in Flows (moving connectors and helpers on the flow builder canvas)

  • Look at other flows you already built, or find a template on the Flow Template Marketplace

  • Once a Flow is ready to run, in the Flow Builder edit dialog, include a list of comma-separated email addresses you want to be notified in case an error occurs during execution
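As an illustration of the renaming tip above (zen1 and zendesk_contacts are hypothetical step references; the exact fields depend on your connector):

{# unclear: what does step zen1 return? #}
{{ zen1.email }}

{# self-explanatory after renaming the short reference #}
{{ zendesk_contacts.email }}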

Advanced best practices

These best practices are most likely not relevant while you are first getting to know the app; however, they will come in handy once you build larger Flows, such as multi-entity import Flows.

The maximum output size of each step is 50 MB; if this limit is reached, the step fails with a message pointing out the size. Talk to us if this limit is not sufficient for your use case.

Splitting large Flows into Sub-Flows

Once your Flow grows to cover more edge cases, do more complicated imports, or handle a variety of data sources, you should think about splitting it into smaller Sub-Flows using the Flow Trigger Helper. This has multiple advantages:

  • Different parts of the Flow are clearly separated, which makes it easier to see at a glance what each Sub-Flow does and to troubleshoot it

  • You are able to reuse Sub-Flows across multiple Flows, which reduces maintenance and prevents differences across Flows (e.g. if you have to perform the same 10 steps in several Flows, instead of copying the steps into each Flow, you create one separate Flow for them, which is then triggered by the others)

  • The performance of your Flows can be increased by triggering Sub-Flows "async" (e.g. within a loop), as multiple Sub-Flows can then run in parallel

Preparing large datasets before looping over them

Often it makes sense to do some data preparation with Helpers that handle large datasets with ease, such as the Spreadsheet Helper or a Jinja Loop with the Dict Helper's Define Variables action, before looping over the data, instead of doing the same operations in each loop iteration.

Example: Mapping of multiple flat files

A typical importing scenario is that data from flat files (e.g. CSV files) should be imported to a system via a regular Connector.

However, the data that should be sent in a single API call might be spread across multiple flat files, thus some mapping needs to happen in order to import the data to the system.

Let's suppose the API expects data in the following format, where the main dictionary holds the company data and contains a list of the company's addresses:

{
  "id": "...",
  "name": "...",
  ...,
  "addresses": [
    {
      "street": "..."
      ...
    },
    {
      ...
    }
  ]
}

One could achieve this by building a Flow like this:

(Screenshot: "Unoptimized Flow")

The following is happening there:

  1. Data is read from the CSV files

  2. A loop iterates over the companies

  3. The address data is filtered down to the addresses of the current company

  4. Another loop iterates over these addresses and prepares them in the expected format

  5. In the last step, the company data is prepared with a reference to the address loop, and a request is sent to the API to create the company

This means that for each company, the entire company address data is loaded again, then filtered, and finally looped over. One iteration might take only a few seconds (depending on the size of the company address data file; for large files, some performance improvements can already be gained by using the Spreadsheet Helper's Query Spreadsheet action), but this quickly adds up over all iterations.

Instead, this preparation should happen directly in a Dict Helper with the Define Variables action, before the loop.

Inside the Prepare Company Data step, the following is happening:

[
  {% for company in companies %}
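    {# build one dictionary per company from the companies flat-file data #}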
    {
      "id": "{{ company.id }}",
      "name": {% if probe(company, 'company_name') %}"{{ company.company_name }}"{% else %}None{% endif %},
      ...,
      "addresses": [
        {% for address in companies_addresses | selectattr('company_id', '==', company.id) %}
            {
              "street": {% if probe(address, 'street_1') %}"{{ address.street_1 }}"{% else %}None{% endif %},
              ...
            }
          {% if not loop.last %},{% endif %}
        {% endfor %}
      ]
    }
    {% if not loop.last %},{% endif %}
  {% endfor %}
]

Which results in a list like this:

[
  {
    "id": "...",
    "name": "...",
    ...,
    "addresses": [
      {
        "street": "..."
        ...
      },
      {
        ...
      }
    ]
  },
  {
    ...
  }
]

Now the loop simply iterates over this list, and the Create Company step only has to reference {{ company }}, as the data has already been prepared before the loop.
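For illustration (assuming the Create Company step's request body is a Jinja-rendered template and company is the loop variable, as above), the body can now be as minimal as the following, since each list entry already contains the complete payload including the nested addresses:

{{ company }}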

The Dict Helper step will, depending on the data size, take a bit longer; however, this additional time is quickly won back by shaving multiple seconds off each loop iteration.

In one similar production Flow, this approach made the Flow 14 times faster, cutting the run time of an initial import from more than 25 hours to less than 2 hours.

Changing large amounts of dictionaries from a list

In some cases, you might want to do some kind of data cleaning, e.g. changing the values of a few columns, hashing entire rows, etc.

Instead of using the Looper Helper and a Dict Helper, we recommend either building a Jinja loop in a Dict Helper or using the Spreadsheet Helper's Query Action, as both can handle large amounts of data in a more performant way than the Looper.

For the Dict Helper option, you can create a Jinja loop similar to this one with the "Define variables" action (some_data_cleaning stands in for whatever filters your cleaning requires):

[
  {% for row in data %}
    {
      "field_1": "{{ row.field_1 | some_data_cleaning }}",
      "field_2": "{{ row.field_2 | some_data_cleaning }}"
    }
    {% if not loop.last %},{% endif %}
  {% endfor %}
]

For the Spreadsheet Helper option, you can make use of regular SQL.
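
As a minimal sketch (assuming the Spreadsheet Helper exposes your input as a table named data; the column names are illustrative), the same per-row cleaning could be expressed as a single query:

SELECT
  UPPER(TRIM(field_1)) AS field_1,
  LOWER(field_2) AS field_2
FROM data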

