What is Logic & Flow Control?
Logic & Flow Control is the brain of your automation pipeline. It's what transforms a simple linear sequence of steps into a sophisticated, production-ready automation that can handle real-world complexity. With conditional branching, loops, delays, error handling, and parallel execution, you build workflows that adapt to different situations and recover gracefully from failures.
All logic is configured visually in the Visual Workflow Builder — no code required. You drag logic nodes onto the canvas, set conditions through configuration panels, and wire branches by connecting edges. The result is a workflow that's both powerful and easy to understand.
Why Flow Control Matters
Without logic, an automation is just a fixed script. It does the same thing every time, and if anything goes wrong, it stops. With flow control, your automations can:
Make decisions based on extracted data, API responses, or external conditions
Process collections by iterating over lists of items, pages, or records
Recover from errors with try/catch blocks and fallback paths
Respect rate limits with configurable delays between actions
Run tasks in parallel to complete faster
Conditional Branching
Conditional branching lets you route your workflow down different paths based on data values. The simplest form is an if/else node: if a condition is true, execution follows one path; if false, it follows another.
You can chain multiple conditions to create multi-branch routing — for example, routing scraped products to different Google Sheets based on their category. Conditions support standard comparison operators (equals, contains, greater than, less than, regex match) and can reference any variable from earlier in the workflow.
For complex decision trees, nest conditions inside each other. The Visual Workflow Builder renders each branch clearly so you can trace the logic visually.
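Although branching is configured visually, a condition node conceptually evaluates one operator against a variable from earlier in the workflow. This is a minimal Python sketch of that idea; the operator names, dict shapes, and `evaluate` function are illustrative, not the product's internals:

```python
import re

# Illustrative operator table mirroring the documented comparisons.
OPERATORS = {
    "equals": lambda a, b: a == b,
    "contains": lambda a, b: b in a,
    "greater_than": lambda a, b: a > b,
    "less_than": lambda a, b: a < b,
    "regex_match": lambda a, b: re.search(b, str(a)) is not None,
}

def evaluate(condition, variables):
    """Return True or False for one condition against the variable store."""
    left = variables[condition["variable"]]
    return OPERATORS[condition["operator"]](left, condition["value"])

variables = {"category": "electronics", "price": 42.5}
evaluate({"variable": "category", "operator": "equals", "value": "electronics"}, variables)  # True
evaluate({"variable": "price", "operator": "less_than", "value": 50}, variables)             # True
```

Chained multi-branch routing is then just a sequence of these checks, taking the first branch whose condition evaluates to true.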
Example: Price Monitoring
Imagine a workflow that scrapes product prices daily using Data Extraction. A conditional branch checks: is the price below your target threshold? If yes, send a Slack notification and log the deal. If no, just update the tracking spreadsheet. This kind of conditional logic turns a simple scraper into an actionable monitoring system.
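The price-monitoring branch above can be sketched as a single decision function. The threshold value, field names, and action labels here are hypothetical stand-ins for steps you would configure visually:

```python
TARGET_PRICE = 25.00  # hypothetical threshold configured in the condition node

def handle_price(product):
    """Route a scraped product record down the yes/no branch."""
    if product["price"] < TARGET_PRICE:
        # "yes" branch: notify Slack and log the deal (stubbed here)
        return {"action": "notify_slack", "deal": product}
    # "no" branch: just update the tracking spreadsheet
    return {"action": "update_sheet", "record": product}

handle_price({"name": "Widget", "price": 19.99})  # routes to the notification branch
```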
Loops & Iteration
Loops let you process collections of items through the same pipeline. The for-each loop takes a list — say, a collection of URLs or a set of extracted records — and runs the downstream nodes once for each item.
While loops repeat until a condition is met, which is useful for pagination: keep clicking "Next Page" and extracting data until there are no more pages. Combine loops with Data Extraction for bulk scraping workflows that handle hundreds or thousands of pages automatically.
You can also nest loops. For example, loop through a list of search queries, and for each query, loop through the paginated results. The variable system ensures each iteration has access to the current item's data.
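The nested-loop pattern (a for-each over queries with a while-style pagination loop inside) can be sketched as follows. `fetch_page` and the `PAGES` data are stand-ins for a real paginated scraping step:

```python
# Fake paginated source: each query maps to a list of pages of records.
PAGES = {
    "shoes": [["s1", "s2"], ["s3"]],
    "hats": [["h1"]],
}

def fetch_page(query, page):
    """Stand-in for a scraping step; returns (records, has_next_page)."""
    records = PAGES[query][page]
    has_next = page + 1 < len(PAGES[query])
    return records, has_next

def scrape_all(queries):
    results = []
    for query in queries:          # outer for-each loop over search queries
        page, has_next = 0, True
        while has_next:            # inner while loop: keep paging until done
            records, has_next = fetch_page(query, page)
            results.extend(records)
            page += 1
    return results

scrape_all(["shoes", "hats"])  # → ['s1', 's2', 's3', 'h1']
```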
Error Handling
Real-world automations encounter errors: pages fail to load, selectors change, APIs return unexpected responses. Error handling nodes let you define what happens when something goes wrong.
Try/catch blocks wrap a section of your workflow. If any node inside the "try" path fails, execution jumps to the "catch" path instead of stopping the entire workflow. In the catch path, you might log the error, send an alert, skip the current item and continue, or try an alternative approach.
Retry logic automatically re-attempts failed steps with configurable delays between retries. This handles transient issues like network timeouts or rate-limit responses. Combine retries with delays to respect target site limits.
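Retry-with-delay behavior can be sketched in a few lines. The `retry` helper and the flaky step below are illustrative; in the product you would set the attempt count and delay in the node's configuration panel:

```python
import time

def retry(step, attempts=3, delay_seconds=1.0):
    """Re-run a failing step with a fixed delay between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise              # out of retries: surface to the catch path
            time.sleep(delay_seconds)

# A step that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient network issue")
    return "ok"

retry(flaky, attempts=3, delay_seconds=0.01)  # → "ok" on the third attempt
```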
Delays & Rate Limiting
Delays add pauses between workflow steps. Fixed delays wait a specific number of seconds — useful for letting pages load or respecting API rate limits. Random delays add variability to make automation patterns less predictable and more human-like, which helps avoid bot detection when working with Browser Automation.
For API-heavy workflows using HTTP requests, rate limiting nodes ensure you don't exceed the target service's request limits. Configure maximum requests per minute or second, and the workflow automatically throttles itself.
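One common way a requests-per-second throttle can work is to space calls evenly; this is a minimal sketch of that idea, not the product's actual implementation:

```python
import time

class RateLimiter:
    """Cap calls per second by enforcing a minimum interval between them."""
    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last_call = 0.0

    def wait(self):
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

limiter = RateLimiter(max_per_second=10)
start = time.monotonic()
for _ in range(3):
    limiter.wait()      # a real workflow would issue the HTTP request here
elapsed = time.monotonic() - start  # roughly 0.2s: two enforced gaps of 0.1s
```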
Variable System
Variables are how data flows between nodes in a workflow. When a node produces output — extracted data, an API response, a transformed dataset — it stores the result in a named variable. Downstream nodes reference that variable to use the data.
Variable references use the ${variableName} syntax. You can reference variables in any text field across the workflow. The system preserves types: arrays stay arrays, objects stay objects, numbers stay numbers. This means you can pass a collection from an extraction node directly into a loop without any conversion.
Variables persist across the entire workflow execution, so you can reference data from early steps in later steps regardless of how many nodes are in between. For complex transformations, use Data Processing nodes to reshape, filter, or combine variables.
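Type-preserving `${variableName}` substitution can be sketched like this: if a field is exactly one reference, return the variable's value unchanged (so arrays and objects keep their type); otherwise interpolate into the string. The `resolve` helper is an illustrative model, not the product's resolver:

```python
import re

def resolve(template, variables):
    """Substitute ${name} references, preserving types for whole-field refs."""
    whole = re.fullmatch(r"\$\{(\w+)\}", template)
    if whole:
        # Whole field is a single reference: return the raw value.
        return variables[whole.group(1)]
    # Otherwise interpolate each reference into the surrounding text.
    return re.sub(r"\$\{(\w+)\}", lambda m: str(variables[m.group(1)]), template)

variables = {"products": [{"name": "Widget"}], "count": 1}
resolve("${products}", variables)             # → the list itself, not a string
resolve("Found ${count} item(s)", variables)  # → "Found 1 item(s)"
```

This is why an extraction node's output can feed a loop directly: the loop receives the actual array rather than a stringified copy.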
Parallel Execution
Parallel branches let you run multiple paths simultaneously. This is useful when you need to perform independent tasks — for example, scraping data from three different websites at the same time, or sending notifications to multiple channels while also saving to a database.
When all parallel branches complete, a merge node combines the results. You control the concurrency level to manage resource usage, and timeout settings ensure no single branch blocks the others indefinitely.
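The fan-out/merge pattern maps onto familiar concurrency primitives. This sketch uses Python's thread pool to model three independent branches with a concurrency cap and a per-branch timeout; `scrape_site` and the branch names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def scrape_site(name):
    """Stand-in for one parallel branch's work."""
    return {"source": name, "items": [f"{name}-item"]}

branches = ["site-a", "site-b", "site-c"]

# max_workers caps concurrency; result(timeout=...) bounds each branch's wait.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(scrape_site, b) for b in branches]
    results = [f.result(timeout=5) for f in futures]

# Merge step: combine all branch outputs into one collection.
merged = [item for r in results for item in r["items"]]
```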
Parallel execution can significantly speed up workflows that involve multiple independent data sources or output destinations. See the templates library for examples of parallel scraping and multi-channel notification workflows.