by Lucas Peyrin
## How it works

Ever wonder how to make your workflows smarter? How to handle different types of data in different ways? This template is a hands-on tutorial that teaches you the three most fundamental nodes for controlling the flow of your automations: **Merge**, **IF**, and **Switch**.

To make it easy to understand, we use a simple package sorting center analogy:

- **Data Items** are packages on a conveyor belt.
- **The Merge Node** is where multiple conveyor belts combine into one.
- **The IF Node** is a simple sorting gate with two paths (e.g., "Fragile" or "Not Fragile").
- **The Switch Node** is an advanced sorting machine that routes packages to many different destinations.

This workflow takes you on a step-by-step journey through the sorting center:

1. **Creating Packages:** Three different "packages" (two letters and one parcel) are created using Set nodes.
2. **Merging:** The first Merge node combines all three packages onto a single conveyor belt so they can be processed together.
3. **Simple Sorting:** An IF node checks if a package is fragile. If true, it's sent down one path; if false, it's sent down another.
4. **Re-Grouping:** After being processed separately, another Merge node brings the packages back together. This "Split > Process > Merge" pattern is a critical concept in n8n!
5. **Advanced Sorting:** A Switch node inspects each package's destination and routes it to the correct output (London, New York, Tokyo, or a Default bin).

By the end, you'll see how all packages have been correctly sorted, and you'll have a solid understanding of how to build intelligent, branching logic in your own workflows.

## Set up steps

**Setup time: 0 minutes!** This template is a self-contained tutorial and requires zero setup. There are no credentials or external services to configure.

1. Simply click the "Execute Workflow" button.
2. Follow the flow from left to right, clicking on each node to see its output and reading the detailed sticky notes to understand what's happening at each stage.
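Outside n8n, the sorting-center logic above can be sketched in plain JavaScript. The package objects here are invented for illustration; the point is how IF (two outputs), Merge (re-join), and Switch (multi-way routing with a fallback) relate to ordinary filtering and grouping:

```javascript
// Packages on the conveyor belt (invented sample data).
const packages = [
  { id: 1, type: "letter", fragile: false, destination: "London" },
  { id: 2, type: "letter", fragile: true,  destination: "Tokyo" },
  { id: 3, type: "parcel", fragile: false, destination: "Oslo" },
];

// IF node: one condition, exactly two outputs (true / false).
const fragile    = packages.filter(p => p.fragile);
const notFragile = packages.filter(p => !p.fragile);

// Merge node: the two branches re-join onto one conveyor belt.
const merged = [...fragile, ...notFragile];

// Switch node: route on a field value, with a Default fallback output.
const bins = { London: [], "New York": [], Tokyo: [], Default: [] };
for (const p of merged) {
  (bins[p.destination] ?? bins.Default).push(p);
}
```

Note how the "Split > Process > Merge" pattern is just filter, transform, and concatenate: every item ends up in exactly one branch, and the merge restores a single stream.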
by David Olusola
# 📊 Google Sheets MCP Workflow – AI Meets Spreadsheets! 😄

## ✨ What It Does
This n8n workflow lets you chat with your spreadsheets using AI + MCP! From reading and updating data to creating sheets, it's your smart assistant for Google Sheets 📈🤖

## 🚀 Cool Features
- 💬 Natural language commands (e.g. "Add a new lead: John Doe")
- ✏️ Full CRUD (Create, Read, Update, Delete)
- 🧠 AI-powered analysis & smart workflows
- 🗂️ Multi-sheet support
- 🔗 Works with ChatGPT, Claude, and more (via MCP)

## 💡 Use Cases
- **Data Tasks:** "Update status to 'Done' in row 3"
- **Sheet Ops:** "Create a 'Marketing 2024' sheet"
- **Business Flows:** "Summarize top sales from Q2"

## 🛠️ Quick Setup
1. **Import Workflow into n8n**
   - Copy the JSON
   - In n8n → Import JSON → Paste & Save ✅
2. **Connect Google Sheets**
   - Create a project in Google Cloud
   - Enable Sheets & Drive APIs
   - Create OAuth2 credentials
   - In n8n → Add Google Sheets OAuth2 credential → Connect 🔐
3. **Add Your Credentials**
   - Get your credential ID
   - Open each Google Sheets node → Update with your new credential ID
4. **Link to AI (Optional 😊)**
   - MCP webhook is pre-set
   - Plug it into your AI tool (like ChatGPT)
   - Send a test command → Watch the magic happen ✨

## ✅ Test It Out
Try these fun commands:
- 🆕 "Add entry: Jane Doe, janed@example.com"
- 🔍 "Read data from Sales 2024"
- 🧹 "Clear data from A1:C5"
- ➕ "Create sheet 'Budget 2025'"
- ❌ "Delete sheet 'Test'"

## 🧠 MCP Command List (AI-Callable Functions)
These are the tasks the AI can perform via MCP:
- Add a new entry to a sheet
- Read data from a sheet
- Update a row in a sheet
- Delete a row from a sheet
- Create a new sheet
- Delete an existing sheet
- Clear data from a specific range
- Summarize data from a sheet using AI

## ⚙️ Tips & Fixes
- **OAuth2 errors?** Re-authenticate, check scopes, and confirm the redirect URI is exact.
- **Permissions?** The spreadsheet must be shared with edit access; use service accounts for production.
- **Webhook not firing?** Double-check the URL and trigger it manually to test.
by n8n Team
This workflow digests mentions of n8n on Reddit, sent as a single email or Slack summary each week. We use OpenAI to classify whether a specific Reddit post is really about n8n, and then summarise it into a bullet-point sentence.

## How it works
1. Get posts from Reddit that might be about n8n;
2. Filter for the most relevant posts (posted in the last 7 days, more than 5 upvotes, and original content);
3. Check if the post is actually about n8n;
4. If it is, categorise it with OpenAI.

**Bear in mind:** the workflow only considers the first 500 characters of each Reddit post. If n8n is mentioned after this point, the post won't register as being about n8n.io.

## Next steps
- Improve the OpenAI Summary node prompt to return cleaner summaries;
- Extend to more platforms/sources; for example, it would be really cool to monitor larger Slack communities in this way;
- Do some classification on the type of user to highlight users likely to be in our ICP;
- Separate out a list of data sources (Reddit, Twitter, Slack, Discord, etc.), extract messages from them, and have them go to a sub-workflow for classification and summarisation.
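The relevance filter above can be sketched in plain JavaScript. Field names follow Reddit's public API (`created_utc` in seconds, `ups`, `is_self`, `selftext`); the thresholds are the ones stated in the workflow, and the 500-character truncation is the same limitation noted above:

```javascript
const SEVEN_DAYS = 7 * 24 * 60 * 60; // seconds

// Keep only posts from the last 7 days, with more than 5 upvotes,
// that are original (self) content.
function isRelevant(post, nowSeconds) {
  return (
    nowSeconds - post.created_utc <= SEVEN_DAYS &&
    post.ups > 5 &&
    post.is_self === true
  );
}

// Only this excerpt is sent to OpenAI, so mentions of n8n
// after character 500 are never seen by the classifier.
function textForClassification(post) {
  return (post.selftext || "").slice(0, 500);
}
```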
by Custom Workflows AI
## Introduction
The Content SEO Audit Workflow is a powerful automated solution that generates comprehensive SEO audit reports for websites. By combining the crawling capabilities of DataForSEO with the search performance metrics from Google Search Console, this workflow delivers actionable insights into content quality, technical SEO issues, and performance optimization opportunities.

The workflow crawls up to 1,000 pages of a website, analyzes various SEO factors including metadata, content quality, internal linking, and search performance, and then generates a professional, branded HTML report that can be shared directly with clients. The entire process is automated, transforming what would typically be hours of manual analysis into a streamlined workflow that produces consistent, thorough results.

This workflow bridges the gap between technical SEO auditing and practical, client-ready deliverables, making it an invaluable tool for SEO professionals and digital marketing agencies.

## Who is this for?
This workflow is designed for SEO consultants, digital marketing agencies, and content strategists who need to perform comprehensive content audits for clients or their own websites. It's particularly valuable for professionals who:

- Regularly conduct SEO audits as part of their service offerings
- Need to provide branded, professional reports to clients
- Want to automate the time-consuming process of content analysis
- Require data-driven insights to inform content strategy decisions

Users should have basic familiarity with SEO concepts and metrics, as well as a basic understanding of how to set up API credentials in n8n. While no coding knowledge is required to run the workflow, users should be comfortable with configuring workflow parameters and following setup instructions.

## What problem is this workflow solving?
Content audits are essential for SEO strategy but are traditionally labor-intensive and time-consuming. This workflow addresses several key challenges:

- **Manual Data Collection:** Gathering data from multiple sources (crawlers, Google Search Console, etc.) typically requires hours of work. This workflow automates the entire data collection process.
- **Inconsistent Analysis:** Manual audits can suffer from inconsistency in methodology. This workflow applies the same comprehensive analysis criteria to every page, ensuring thorough and consistent results.
- **Report Generation:** Creating professional, client-ready reports often requires additional design work after the analysis is complete. This workflow generates a fully branded HTML report automatically.
- **Data Integration:** Correlating technical SEO issues with actual search performance metrics is difficult when working with separate tools. This workflow seamlessly integrates crawl data with Google Search Console metrics.
- **Scale Limitations:** Manual audits become increasingly difficult with larger websites. This workflow can efficiently process up to 1,000 pages without additional effort.

## What this workflow does

### Overview
The Content SEO Audit Workflow crawls a specified website, analyzes its content for various SEO issues, retrieves performance data from Google Search Console, and generates a comprehensive HTML report. The workflow identifies issues in five key categories: status issues (404 errors, redirects), content quality (thin content, readability), metadata SEO (title/description issues), internal linking (orphan pages, excessive click depth), and performance (underperforming content). The final report includes executive summaries, detailed issue breakdowns, and actionable recommendations, all branded with your company's colors and logo.

### Process
- **Initial Configuration:** The workflow begins by setting parameters including the target domain, crawl limits, company information, and branding colors.
- **Website Crawling:** The workflow creates a crawl task in DataForSEO and periodically checks its status until completion.
- **Data Collection:** Once crawling is complete, the workflow:
  - Retrieves the raw audit data from DataForSEO
  - Extracts all URLs with status code 200 (successful pages)
  - Queries the Google Search Console API for each URL to get clicks and impressions data
  - Identifies 404 and 301 pages and retrieves their source links
- **Data Analysis:** The workflow analyzes the collected data to identify issues including:
  - Technical issues: 404 errors, redirects, canonicalization problems
  - Content issues: thin content, outdated content, readability problems
  - SEO metadata issues: missing/duplicate titles and descriptions, H1 problems
  - Internal linking issues: orphan pages, excessive click depth, low internal links
  - Performance issues: underperforming pages based on GSC data
- **Report Generation:** Finally, the workflow:
  - Calculates a health score based on the severity and quantity of issues
  - Generates prioritized recommendations
  - Creates a comprehensive HTML report with interactive tables and visualizations
  - Customizes the report with your company's branding
  - Provides the report as a downloadable HTML file

## Setup
To set up this workflow, follow these steps:

1. **Import the workflow:** Download the JSON file and import it into your n8n instance.
2. **Configure DataForSEO credentials:**
   - Create a DataForSEO account at https://app.dataforseo.com/api-access (they offer a free $1 credit for testing)
   - Add a new "Basic Auth" credential in n8n following the HTTP Request Authentication guide
   - Assign this credential to the "Create Task", "Check Task Status", "Get Raw Audit Data", and "Get Source URLs Data" nodes
3. **Configure Google Search Console credentials:**
   - Add a new "Google OAuth2 API" credential following the Google OAuth guide
   - Ensure your Google account has access to the Google Search Console property you want to analyze
   - Assign this credential to the "Query GSC API" node
4. **Update the "Set Fields" node with:**
   - dfs_domain: The website domain you want to audit
   - dfs_max_crawl_pages: Maximum number of pages to crawl (default: 1000)
   - dfs_enable_javascript: Whether to enable JavaScript rendering (default: false)
   - company_name: Your company name for the report branding
   - company_website: Your company website URL
   - company_logo_url: URL to your company logo
   - brand_primary_color: Your primary brand color (hex code)
   - brand_secondary_color: Your secondary brand color (hex code)
   - gsc_property_type: Set to "domain" or "url" depending on your Google Search Console property type
5. **Run the workflow:** Click "Start" and wait for it to complete (approximately 20 minutes for 500 pages).
6. **Download the report:** Once complete, download the HTML file from the "Download Report" node.

## How to customize this workflow to your needs
This workflow can be adapted in several ways to better suit your specific requirements:

- **Adjust crawl parameters:** Modify the "Set Fields" node to change:
  - The maximum number of pages to crawl (dfs_max_crawl_pages). This workflow supports up to 1000 pages.
  - Whether to enable JavaScript rendering for JavaScript-heavy sites (dfs_enable_javascript)
- **Customize issue detection thresholds:** In the "Build Report Structure" code node, you can modify:
  - Word count thresholds for thin content detection (currently 1500 words)
  - Click depth thresholds (currently flags pages deeper than 4 clicks)
  - Title and description length parameters (currently 40-60 chars for titles, 70-155 for descriptions)
  - Readability score thresholds (currently flags Flesch-Kincaid scores below 55)
- **Modify the report design:** In the "Generate HTML Report" code node, you can:
  - Adjust the HTML/CSS to change the report layout and styling
  - Add or remove sections from the report
  - Change the recommendations logic
  - Modify the health score calculation algorithm
- **Add additional data sources:** You could extend the workflow by:
  - Adding PageSpeed Insights data for performance metrics
  - Incorporating backlink data from other APIs
  - Adding keyword ranking data from rank tracking APIs
- **Implement automated delivery:** Add nodes after the "Download Report" node to:
  - Send the report directly to clients via email
  - Upload it to cloud storage
  - Create a PDF version of the report
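As an illustration of the threshold customization described above, the detection logic inside a code node like "Build Report Structure" might resemble the following sketch. The page field names (`wordCount`, `clickDepth`, `readability`) are assumptions about the crawl-data shape, not the workflow's exact internals; the threshold values are the ones stated:

```javascript
// Default thresholds, matching the values documented above.
const THRESHOLDS = {
  thinContentWords: 1500,          // flag pages under 1500 words
  maxClickDepth: 4,                // flag pages deeper than 4 clicks
  titleLength: { min: 40, max: 60 },
  descriptionLength: { min: 70, max: 155 },
  minReadability: 55,              // Flesch-Kincaid reading ease floor
};

function detectIssues(page, t = THRESHOLDS) {
  const issues = [];
  if (page.wordCount < t.thinContentWords) issues.push("thin_content");
  if (page.clickDepth > t.maxClickDepth) issues.push("excessive_click_depth");
  const titleLen = (page.title || "").length;
  if (titleLen < t.titleLength.min || titleLen > t.titleLength.max)
    issues.push("title_length");
  const descLen = (page.description || "").length;
  if (descLen < t.descriptionLength.min || descLen > t.descriptionLength.max)
    issues.push("description_length");
  if (page.readability < t.minReadability) issues.push("low_readability");
  return issues;
}
```

Changing a threshold is then a one-line edit to the `THRESHOLDS` object rather than a rewrite of the detection logic.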
by Giannis Kotsakiachidis
# 🏦 GoCardless ⇄ Maybe Finance: Automatic Multi-Bank Sync & Weekly Overview 💸

## Who's it for 🤔
Freelancers, founders, households, and side-hustlers who work with several bank accounts but want one, always-up-to-date budget inside Maybe Finance. No more CSV exports or copy-paste.

## How it works / What it does ⚙️
1. **Schedule Trigger** (cron) fires every Monday 📅 (switch to Manual Trigger while testing)
2. **Get access token:** fetches a fresh 24-hour GoCardless token 🔑
3. **Fetch transactions** for each account: Revolut Pro, Revolut Personal, and ABN AMRO (add extra HTTP Request nodes for any other GoCardless-supported banks)
4. **Extract booked:** keeps only settled items 🗂️
5. **Set transactions …:** maps every record to Maybe Finance's schema 📝
6. **Merge:** combines all arrays into one payload 🔄
7. **Create transactions to Maybe:** POSTs each item via the API 🚀
8. **Resend Email:** sends you a "Weekly transactions overview" 📧

All done in a single run: your Maybe dashboard is refreshed and you get an inbox alert.

## How to set up 🛠️
1. Import the template into n8n (cloud or self-hosted).
2. Create credentials:
   - GoCardless secret_id & secret_key
   - Maybe Finance API key
   - (Optional) Resend API key for email notifications
3. One-time GoCardless config (run the blocks on the left):
   - /token/new/ → obtain token
   - /institutions → find institution IDs
   - /agreements/enduser/ → create agreements
   - /requisitions/ → get the consent URL & finish bank login
   - /requisitions/{id} → copy the GoCardless account_ids
4. Create the same accounts in Maybe Finance, run the HTTP GET request in the purple frame, and copy their account_ids.
5. Open each Set transactions … node and paste the correct Maybe account_id.
6. Adjust the Schedule Trigger (e.g. daily, monthly).
7. Save & activate 🎉

## Requirements 📋
- n8n 1.33+
- GoCardless app (secret ID & key, live or sandbox)
- Maybe Finance account & API key
- (Optional) Resend account for email

## How to customize ✨
- **Include pending transactions:** change the Item Lists filter.
- **Add more banks:** duplicate the "Get … transactions" → "Extract booked" → "Set transactions" path and plug its output into the Merge node.
- **Different interval:** edit the cron rule in the Schedule Trigger.
- **Disable emails:** just remove or deactivate the Resend node.
- **Send alerts to Slack / Teams:** branch after the Merge node and add a chat node.

Happy budgeting! 💰
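The "Set transactions" mapping step described above can be sketched as a plain function. Both field layouts here are illustrative assumptions (the input loosely follows GoCardless's Berlin Group-style transaction shape, and the output schema is a hypothetical stand-in for Maybe Finance's API), so check the real node configuration for the exact keys:

```javascript
// Reshape one booked GoCardless transaction for Maybe Finance.
// maybeAccountId is the account_id you pasted into the Set node.
function toMaybeTransaction(tx, maybeAccountId) {
  const amount = Number(tx.transactionAmount.amount);
  return {
    account_id: maybeAccountId,
    date: tx.bookingDate,
    amount: Math.abs(amount),                       // Maybe-side amounts assumed positive
    nature: amount < 0 ? "outflow" : "inflow",      // sign encoded separately (assumption)
    name: tx.remittanceInformationUnstructured || "Bank transaction",
    currency: tx.transactionAmount.currency,
  };
}
```

Duplicating a bank branch then amounts to calling the same mapping with a different `maybeAccountId`, which is why each "Set transactions …" node holds its own account ID.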
by Leandro Melo
Keep your Hostinger VPS servers secure with automated backups! This n8n (self-hosted) workflow is designed to create daily snapshots and send server metrics effortlessly, ensuring you always have an up-to-date recovery copy.

## Key Features
- ✅ **Automated Snapshots:** Daily execution with zero manual intervention.
- ✅ **Smart Replacement:** Hostinger allows only 1 snapshot per VPS, so the workflow automatically replaces the previous one.
- ✅ **Notifications:** Alerts via WhatsApp (Evolution API) or other configurable channels for execution confirmation.

## Quick Setup
**Prerequisites:**
1. Install the community nodes n8n-nodes-hostinger-api and n8n-nodes-evolution-api in your n8n instance.
2. Generate a Hostinger API key in their dashboard: hpanel.hostinger.com/profile/api.

**Workflow Configuration:**
1. Add the Hostinger API credential in the first node and reuse it across the workflow.
2. Customize the schedule (e.g., daily at 2 AM) and notification method (Evolution API for WhatsApp, email, etc.).

**Important Note:** Hostinger overwrites the previous snapshot with each new execution, keeping only the latest version.

## VPS Metrics Available (sent in messages)
- 🔹 Status: snapshot status
- 🔹 Date: snapshot date and time
- 🔹 Server: server name
- 🔹 IP: external server IP

⚙️ Metrics:
- 🔹 Number of vCPUs
- 🔹 RAM usage / available
- 🔹 Hard disk usage / available
- 🔹 Operating system and version
- 🔹 Uptime (days, hours)
by Nadia Privalikhina
This n8n template offers a free and automated way to convert images from a Google Drive folder into a single PDF document. It uses Google Slides as an intermediary, allowing you to control the final PDF's page size and orientation.

If you're looking for a no-cost solution to batch convert images to PDF and need flexibility over the output dimensions (like A4, landscape, or portrait), this template is for you! It's especially handy for creating photo albums, visual reports, or simple portfolios directly from your Google Drive.

## How it works
1. The workflow first copies a Google Slides template you specify. The page setup of this template (e.g., A4 Portrait) dictates your final PDF's dimensions.
2. It then retrieves all images from a designated Google Drive folder and sorts them by creation date.
3. Each image is added to a new slide in the copied presentation.
4. Finally, the entire Google Slides presentation is converted into a PDF and saved back to your Google Drive.

## How to use
1. Connect your Google Drive and Google Slides accounts in the relevant nodes.
2. In the "Set Pdf File Name" node, define the name for your output PDF.
3. In the "CopyPdfTemplate" node:
   - Select your Google Slides template file (this sets the PDF page size/orientation).
   - Choose the Google Drive folder containing your source images.
4. Ensure your images are in the specified folder. For best results, images should have an aspect ratio similar to your chosen Slides template.
5. Run the workflow to generate your PDF by clicking 'Test Workflow'.

## Requirements
- Google Drive account.
- Google Slides account.
- A Google Slides template stored on your Google Drive.

## Customising this workflow
- Adjust the "Filter: Only Images" node if you use image formats other than PNG (e.g., image/jpeg for JPGs).
- Modify the image sorting logic in the "Sort by Created Date" node if needed.
by Aleksandr
This template processes webhooks received from amoCRM in a URL-encoded format and transforms the data into a structured array that n8n can easily interpret. By default, n8n does not automatically parse URL-encoded webhook payloads into usable JSON. This template bridges that gap, enabling seamless data manipulation and integration with subsequent processing nodes.

## Key Features
- **Input Handling:** Processes URL-encoded data received from amoCRM webhooks.
- **Data Transformation:** Converts complex, nested keys into a structured JSON array.
- **Ease of Use:** Simplifies access to specific fields for further workflow automation.

## Setup Guide
1. **Webhook Trigger Node:** Configure the Webhook Trigger node to receive data from amoCRM.
2. **URL-Encoding Parsing:** Use the provided nodes to transform the input URL-encoded data into a structured array.
3. **Access Transformed Data:** Use the resulting JSON structure for subsequent nodes in your workflow, such as filtering, updating records, or triggering external systems.

## Example Data Transformation
**Sample Input (URL-Encoded):** amoCRM sends flat, bracket-notation keys, accessible in n8n only as literal strings such as:

$json.body['leads[update][0][id]']

**Output (Structured JSON):** After processing, the data is transformed into an easily accessible JSON structure:

{{ $json.leads.update['0'].id }}

This output allows you to work with clean, structured JSON, simplifying field extraction and workflow continuation.

## Code Explanation
This workflow parses URL-encoded key-value pairs using n8n nodes to restructure the data into a nested JSON object. By doing so, the template improves transparency, ensures data integrity, and makes further automation tasks straightforward.
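The core transformation this template performs, rebuilding flat bracket-notation keys into nested JSON, can be sketched as a small function. The sample key follows the amoCRM example above; the function itself is illustrative rather than the template's exact node code:

```javascript
// Turn flat keys like "leads[update][0][id]" into nested objects,
// so $json.leads.update['0'].id becomes reachable downstream.
function unflatten(body) {
  const result = {};
  for (const [flatKey, value] of Object.entries(body)) {
    // "leads[update][0][id]" -> ["leads", "update", "0", "id"]
    const path = flatKey.replace(/\]/g, "").split("[");
    let node = result;
    path.forEach((key, i) => {
      if (i === path.length - 1) {
        node[key] = value;           // leaf: assign the raw value
      } else {
        node[key] = node[key] || {}; // intermediate: create on demand
        node = node[key];
      }
    });
  }
  return result;
}
```

Note that numeric segments like `0` stay as object keys here, which matches the `update['0']` access pattern in the output example above.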
by Adrian Bent
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow scrapes job listings from Indeed via Apify, automatically retrieves the resulting dataset, extracts information about each listing, filters jobs by relevance, finds a decision maker at the company, and updates a database (Google Sheets) with that info for outreach. All you need to do is run the Apify actor; the database will then update with the processed data.

## Benefits
- **Complete Job Search Automation:** A webhook monitors the Apify actor, which sends an integration request and starts the process
- **AI-Powered Filter:** Uses ChatGPT to analyze content/context, identify company goals, and filter based on the job description
- **Smart Duplicate Prevention:** Automatically tracks processed job listings in a database to avoid redundancy
- **Multi-Platform Intelligence:** Combines Indeed scraping, web research via Tavily, and enrichment of each listing
- **Niche Focus:** Processes content from multiple niches (currently 6, hardcoded), but can be adapted to other niches (just prompt the "job filter" node)

## How It Works

**Indeed Job Discovery:**
- Search and apply filters for relevant job listings, then copy and use the URL in Apify
- Uses Apify's Indeed job scraper to scrape job listings from the URL of interest
- Automatically scrapes the information, stores it in a dataset, and initiates an integration request

**Incoming Data Processing:**
- Loops over 500 items (can be changed) with a batch size of 55 items (can be changed) to avoid running into API timeouts
- Multiple filters ensure all fields are scraped with our required metrics (website must exist and number of employees < 250)
- Duplicate job listings are removed from the incoming batch before processing

**Job Analysis & Filter:**
- An additional filter removes any job listing from the incoming batch if it already exists in the Google Sheets database
- All new job listings are then passed to ChatGPT, which uses information about the job post/description to determine if it is relevant to us
- All relevant jobs get a new field, "verdict", which is either true or false; we keep the ones where verdict is true

**Enrich & Update Database:**
- Uses Tavily to search for a decision maker (it doesn't always find one) and populates a row in Google Sheets with information about the job listing, the company, and a decision maker at that company
- Waits 1 minute and 30 seconds to avoid Google Sheets and ChatGPT API timeouts, then loops back to the next batch to continue filtering until all job listings are processed

## Required Google Sheets Database Setup
Before running this workflow, create a Google Sheets database with these exact column headers.

**Essential Columns:**
- jobUrl - Unique identifier for job listings
- title - Position title
- descriptionText - Description of the job listing
- hiringDemand/isHighVolumeHiring - Are they hiring at high volume?
- hiringDemand/isUrgentHire - Are they hiring with high urgency?
- isRemote - Is this job remote?
- jobType/0 - Job type: In person, Remote, Part-time, etc.
- companyCeo/name - CEO name collected from Tavily's search
- icebreaker - Column for holding custom icebreakers for each job listing (not completed in this workflow; I will upload another that does this, called "Personalized IJSFE")
- scrapedCeo - CEO name collected from the Apify scraper
- email - Email listed on the job listing
- companyName - Name of the company that posted the job
- companyDescription - Description of the company that posted the job
- companyLinks/corporateWebsite - Website of the company that posted the job
- companyNumEmployees - Number of employees the company lists
- location/country - Location where the job takes place
- salary/salaryText - Salary on the job listing

**Setup Instructions:**
1. Create a new Google Sheet with these column headers in the first row
2. Name the sheet whatever you please
3. Connect your Google Sheets OAuth credentials in n8n
4. Update the document ID in the workflow nodes

The merge logic relies on the id column to prevent duplicate processing, so this structure is essential for the workflow to function correctly. Feel free to reach out for additional help or clarification at my gmail: terflix45@gmail.com and I'll get back to you as soon as I can.

## Set Up Steps
1. **Configure Apify Integration:**
   - Sign up for an Apify account and obtain an API key
   - Get the Indeed job scraper actor and use Apify's integration to send an HTTP request to your n8n webhook (if the test URL doesn't work, use the production URL)
   - Use the Apify node with Resource: Dataset, Operation: Get items, and your API key as credentials
2. **Set Up AI Services:**
   - Add OpenAI API credentials for job filtering
   - Add Tavily API credentials for company research
   - Set up appropriate rate limiting for cost control
3. **Database Configuration:**
   - Create the Google Sheets database with the provided column structure
   - Connect Google Sheets OAuth credentials
   - Configure the merge logic for duplicate detection
4. **Content Filtering Setup:**
   - Customize the AI prompts for your specific niche, requirements, or interests
   - Adjust the filtering criteria to fit your needs
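The duplicate-prevention step described above, dropping any incoming listing whose jobUrl is already in the sheet, can be sketched as a plain function. The `jobUrl` field follows the column layout documented for this workflow; the function itself is illustrative, not the exact merge-node logic:

```javascript
// Drop incoming listings whose jobUrl is already in the existing
// sheet rows, and also dedupe within the incoming batch itself.
function removeDuplicates(incoming, existingRows) {
  const seen = new Set(existingRows.map(r => r.jobUrl));
  return incoming.filter(job => {
    if (seen.has(job.jobUrl)) return false;
    seen.add(job.jobUrl);
    return true;
  });
}
```

Using a `Set` keeps each membership check O(1), which matters when a 500-item batch is compared against a sheet that grows with every run.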
by Lucas Walter
# Reverse engineer short-form videos from Instagram and TikTok using Gemini AI

## Who's it for
Content creators, AI video enthusiasts, and digital marketers who want to analyze successful short-form videos and understand their production techniques. Perfect for anyone looking to reverse-engineer viral content or create detailed prompts for AI video generation tools like Google Veo or Sora.

## How it works
This automation takes any Instagram Reel or TikTok URL and performs a forensic analysis of the video content. The workflow downloads the video, converts it to base64, and uses Google's Gemini 2.5 Pro vision API to generate an extremely detailed "Generative Manifest": a comprehensive prompt that could be used to recreate the video with AI tools.

The analysis includes:
- Visual medium identification (film stock, camera sensor, lens characteristics)
- Color grading and lighting breakdown
- Shot-by-shot deconstruction with precise timing
- Camera movement and framing details
- Subject description and action choreography
- Environmental and atmospheric details

## How to set up
1. **Configure API credentials:**
   - Add your Apify API key for video scraping
   - Set up Google Gemini API authentication
2. **Set up Slack integration (optional):**
   - Configure Slack OAuth for result sharing
   - Update the channel ID where results should be posted
3. **Access the form:**
   - The workflow creates a web form where you can input video URLs
   - The form accepts both Instagram Reel and TikTok URLs

## Requirements
- **Apify account** with API access for video scraping
- **Google Cloud account** with the Gemini API enabled
- **Slack workspace** (optional, for sharing results)
- Videos must be publicly accessible (no private accounts)

## How to customize the workflow
- **Modify the analysis prompt:** Edit the "set_base_prompt" node to adjust the depth and focus of the video analysis
- **Add different platforms:** Extend the switch node to handle other video platforms
- **Integrate with other tools:** Replace Slack with email, Discord, or other notification systems
by Yaron Been
Automatically monitor and track funding rounds in the US Fintech and Healthtech sectors using the Crunchbase API, with daily updates pushed to Google Sheets for easy analysis and monitoring.

## 🚀 What It Does
- **Daily Monitoring:** Automatically checks for new funding rounds every day at 8 AM
- **Smart Filtering:** Focuses on US-based Fintech and Healthtech companies
- **Data Enrichment:** Extracts and formats key funding information
- **Automated Storage:** Pushes data to Google Sheets for easy access and analysis

## 🎯 Perfect For
- VC firms tracking investment opportunities
- Startup founders monitoring market activity
- Market researchers analyzing funding trends
- Business analysts tracking competitor funding

## ⚙️ Key Benefits
- ✅ Real-time funding round monitoring
- ✅ Focused industry tracking (Fintech & Healthtech)
- ✅ Automated data collection and organization
- ✅ Structured data output in Google Sheets
- ✅ Complete funding details including investors and amounts

## 🔧 What You Need
- Crunchbase API key
- Google Sheets account
- n8n instance
- Basic spreadsheet setup

## 📊 Data Collected
- Company Name
- Industry
- Funding Round Type
- Announced Date
- Money Raised (USD)
- Investors
- Crunchbase URL

## 🛠️ Setup & Support
**Quick Setup:** Deploy in 30 minutes with our step-by-step configuration guide.
📺 Watch Tutorial | 💼 Get Expert Support | 📧 Direct Help

Stay ahead of market movements with automated funding round tracking. Transform manual research into an efficient, automated process.
by Lucas Peyrin
## How it works

This template is a powerful, reusable utility for managing stateful, long-running processes. It allows a main workflow to be paused indefinitely at "checkpoints" and then be resumed by external, asynchronous events. This pattern is essential for complex automations, and I often call it the "Async Portal" or "Teleport" pattern.

The template consists of two distinct parts:

1. **The Main Process (Top Flow):** This represents your primary business logic. It starts, performs some actions, and then calls the Portal to register itself before pausing at a Wait node (a "Checkpoint").
2. **The Async Portal (Bottom Flow):** This is the state-management engine. It uses Workflow Static Data as a persistent memory to keep track of all paused processes. When an external event (like a new chat message or an approval webhook) comes in with a specific session_id, the Portal looks up the corresponding paused workflow and "teleports" the new data to it by calling its unique resume_url.

This architecture allows you to build sophisticated systems where the state is managed centrally, and your main business logic remains clean and easy to follow.

## When to use this pattern

This is an advanced utility ideal for:

- **Chatbots:** Maintaining conversation history and context across multiple user messages.
- **Human-in-the-Loop Processes:** Pausing a workflow to wait for a manager's approval from an email link or a form submission.
- **Multi-Day Sequences:** Building user onboarding flows or drip campaigns that need to pause for hours or days between steps.
- **Any process that needs to wait for an unpredictable external event** without timing out.

## Set up steps

This template is a utility designed to be copied into your own projects. The workflow itself is a live demonstration of how to use it.

1. **Copy the Async Portal:** In your own project, copy the entire Async Portal (the bottom flow, starting with the A. Entry: Receive Session Info trigger) into your workflow. This will be your state management engine.
2. **Register Your Main Process:** At the beginning of your main workflow, use an Execute Workflow node to call the Portal's trigger. You must pass it a unique session_id for the process and the resume_url from a Wait node.
3. **Add Checkpoints:** Place Wait nodes in your main workflow wherever you need the process to pause and wait for an external event.
4. **Trigger the Portal:** Configure your external triggers (e.g., your chatbot's webhook) to call the Portal's entry trigger, not your main workflow's trigger. You must pass the same session_id so the Portal knows which paused process to resume.

To see it in action, follow the detailed instructions in the "How to Test This Workflow" sticky note on the canvas.
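Conceptually, the Portal's session registry is just a map from session_id to resume_url. Inside an n8n Code node the persistent store would be Workflow Static Data (via `$getWorkflowStaticData('global')`); in this standalone sketch a plain object stands in so the logic can run anywhere, and the function names are invented for illustration:

```javascript
// Stand-in for n8n's Workflow Static Data (persists across executions
// in a real workflow; here it is just an in-memory object).
const staticData = {};

// Called when the main process registers itself before hitting a
// Wait-node checkpoint: remember where to "teleport" later events.
function registerSession(store, sessionId, resumeUrl) {
  store[sessionId] = { resumeUrl, registeredAt: Date.now() };
}

// Called when an external event arrives with a session_id: look up
// the paused workflow's resume_url (the Portal would POST the event
// payload to this URL to wake the workflow up).
function resolveSession(store, sessionId) {
  const entry = store[sessionId];
  return entry ? entry.resumeUrl : null;
}
```

The important property is that the external trigger never needs to know which workflow is paused: it only carries the session_id, and the registry supplies the matching resume_url.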