by Sven Rösser
## Overview

This workflow provides a universal webhook endpoint that dynamically routes incoming requests to different subflows. It allows you to manage multiple API-like endpoints from a single entry point, while ensuring proper error handling and consistent responses.

## How it works

1. **Webhook Receiver** – A single URL accepts requests.
2. **Method Detection** – Branches capture the request method.
3. **Route Resolver** – Matches the `action` parameter and method against your route configuration.
4. **Execute Subflow** – If valid, the matching workflow is executed.
5. **Error Handling** – If invalid, the workflow responds with a clear status code and JSON error.

## About the `action` parameter

The `action` query parameter is the key that controls routing:

- In your Routes Config, every route is defined with an action name, a list of allowed HTTP methods, and the target subflow ID.
- When a request comes in, the workflow looks up the provided action in this config.
- If the action is valid and the method is allowed, the corresponding subflow is executed. If not, the workflow returns a structured error.

In other words:

- **Config side** → map action → subflow ID
- **Request side** → send `?action=...` → determines which subflow runs

This makes `action` both the mapping key in the configuration and the control key for triggering the correct logic.

## Setup steps

1. Import the workflow into n8n.
2. Define your routes in the Routes Config node. Each route contains:
   - action name
   - allowed HTTP methods
   - target subflow ID

This workflow is useful if you want to:

- Expose multiple clean API endpoints without creating many Webhook nodes
- Ensure consistent error handling across all endpoints
- Keep your n8n setup more structured and maintainable

👉 A practical solution to turn n8n into a flexible and maintainable API gateway.
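The Route Resolver logic described above can be sketched as a simple lookup. The route names and subflow IDs below are hypothetical examples, not part of the template:

```python
# Hypothetical sketch of the Route Resolver: map (action, method) to a
# subflow ID, or return a structured error with an HTTP status code.
ROUTES = {
    "create-user": {"methods": ["POST"], "subflow_id": "wf_101"},
    "get-report":  {"methods": ["GET"],  "subflow_id": "wf_102"},
}

def resolve(action, method):
    """Return the subflow ID for a valid (action, method) pair,
    or a structured error payload."""
    route = ROUTES.get(action)
    if route is None:
        return {"error": f"Unknown action '{action}'", "status": 404}
    if method not in route["methods"]:
        return {"error": f"Method {method} not allowed for '{action}'", "status": 405}
    return {"subflow_id": route["subflow_id"], "status": 200}
```

A request to `?action=create-user` with POST would resolve to `wf_101`, while a GET to the same action would produce a 405-style error object.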
by Yaron Been
This workflow provides automated access to the Black Forest Labs Flux Kontext Pro AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides a seamless integration for image generation tasks within your n8n automation workflows.

## Overview

This workflow automatically handles the complete image generation process using the Black Forest Labs Flux Kontext Pro model. It manages API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation.

**Model Description:** A state-of-the-art text-based image editing model that delivers high-quality outputs with excellent prompt following and consistent results for transforming images through natural language.

## Key Capabilities

- High-quality image generation from text prompts
- Advanced AI-powered visual content creation
- Customizable image parameters and styles

## Tools Used

- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the black-forest-labs/flux-kontext-pro AI model
- **Black Forest Labs Flux Kontext Pro**: The core AI model for image generation
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

## How to Install

1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Replicate API**: Add your Replicate API token to the 'Set API Token' node
3. **Customize Parameters**: Adjust the model parameters in the 'Set Image Parameters' node
4. **Test the Workflow**: Run the workflow with your desired inputs
5. **Integrate**: Connect this workflow to your existing automation pipelines

## Use Cases

- **Content Creation**: Generate unique images for blogs, social media, and marketing materials
- **Design Prototyping**: Create visual concepts and mockups for design projects
- **Art & Creativity**: Produce artistic images for personal or commercial use
- **Marketing Materials**: Generate eye-catching visuals for campaigns and advertisements

## Connect with Me

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Replicate API**: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #imagegeneration #aiart #texttoimage #visualcontent #aiimages #generativeart #flux #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
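For reference, a request to this model over Replicate's HTTP API can be sketched as below. The endpoint path and input field names are assumptions based on Replicate's public predictions API, not details taken from the workflow itself; the workflow's own nodes handle this for you.

```python
import json

# Sketch of assembling a Replicate prediction request for this model.
# Endpoint path and input fields are assumptions based on Replicate's
# public predictions API, not taken from the workflow.
API_BASE = "https://api.replicate.com/v1"

def build_prediction_request(api_token, prompt, **extra_inputs):
    """Assemble the URL, headers, and JSON body for a prediction call."""
    url = f"{API_BASE}/models/black-forest-labs/flux-kontext-pro/predictions"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"input": {"prompt": prompt, **extra_inputs}})
    return url, headers, body
```

After POSTing this request, Replicate typically returns a prediction object whose status you poll until generation finishes, which mirrors the retry logic built into this workflow.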
by Sabrina Ramonov 🍄
## Description

This automation publishes to 9 social platforms daily! Manage your content in a simple Google Sheet. When you set a post's status to "Ready to Post" in your Google Sheet, this workflow grabs your image/video from Google Drive, posts your content to 9 social platforms, then updates the post's status in the Google Sheet to "Posted".

## Overview

**1. Trigger: Check Every 3 Hours**
- Check the Google Sheet for posts with status "Ready to Post"
- Return 1 post that is ready to go

**2. Upload Media to Blotato**
- Fetch the image/video from Google Drive
- Upload the image/video to Blotato

**3. Publish to Social Media via Blotato**
- Connect your Blotato account
- Choose your social accounts
- Either post immediately or schedule for later
- Includes support for images, videos, slideshows, carousels, and threads

## Setup

1. Sign up for Blotato.
2. Generate a Blotato API key by going to Settings > API > Generate API Key (paid feature only).
3. Ensure you have "Verified Community Nodes" enabled in your n8n Admin Panel.
4. Install the "Blotato" community node.
5. Create a credential for Blotato.
6. Connect your Google Drive to n8n: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-single-service
7. Copy this sample Google Sheet. Do NOT change the column names unless you know what you're doing: https://docs.google.com/spreadsheets/d/1v5S7F9p2apfWRSEHvx8Q6ZX8e-d1lZ4FLlDFyc0-ZA4/edit
8. Make your Google Drive folder containing images/videos PUBLIC (i.e. Anyone with the link).
9. Complete the 3 setup steps shown in BROWN sticky notes in this template.

## Troubleshooting Checklist

- Your Google Drive folder is public
- Column names in your Google Sheet match the original example
- File size < 60 MB; for larger files, Google Drive does not work, so use Amazon S3 instead

## 📄 Documentation

Full Tutorial

## Troubleshooting

Check your Blotato API Dashboard to see every request, response, and error. Click on a request to see the details.

## Need Help?

In the Blotato web app, click the orange button in the bottom right corner. This opens the Support messenger, where I help answer technical questions.
by Davide
This workflow creates a user-friendly web form that allows users to upload a single large file (up to 5 GB) and automatically sends it via TransferNow, handling the complex multi-part upload process required for large files.

## Advantages

- ✅ **No manual steps**: The entire process from file upload to email delivery is fully automated.
- ✅ **User-friendly**: Anyone can upload files via a simple web form, without needing to access TransferNow directly.
- ✅ **Supports large files**: TransferNow's API handles large files that are not suitable for email attachments.
- ✅ **Secure file delivery**: The workflow uses TransferNow's secure, expiring download links.
- ✅ **Customizable**: You can easily adjust the workflow to support multiple file types, multiple recipients, or different validity rules.
- ✅ **Scalable**: Works for individuals, teams, or businesses that frequently need to share large documents.

## How It Works

The workflow is triggered when a user submits the embedded web form. Here is the process:

1. **Form Trigger**: A user accesses the form, fills in the required details (Title, Message, Recipient Email), and uploads a single PDF file. Submitting the form starts the workflow.
2. **File Processing**: The workflow calculates the size of the uploaded file, which is a required parameter for the TransferNow API.
3. **Transfer Creation**: It sends a request to the TransferNow API to create a new file transfer. The API responds with the details needed for the upload, including a unique transferId and uploadId.
4. **Upload URL Retrieval**: The workflow requests a pre-signed upload URL from TransferNow for the specific part of the file.
5. **File Upload**: The binary file data from the form and the upload URL from the previous step are merged. The workflow then performs a direct PUT request to the secured TransferNow URL to upload the file's binary content.
6. **Upload Confirmation**: After the upload, the workflow informs the TransferNow API that the file part upload is complete.
7. **Finalization**: Once the entire upload is confirmed, the workflow finalizes the transfer on TransferNow's side.
8. **Data Retrieval & Response**: The workflow fetches the final transfer data, constructs a public download URL, and sends a success message back to the user's browser, displaying the recipient's email and the download link.

## Set Up Steps

To use this workflow, you need to configure the connection to the TransferNow API.

1. **Get TransferNow API Credentials**: Create a free account on the TransferNow developer portal to get your API key (a 14-day free trial is available).
2. **Configure Credentials in n8n**: In the n8n editor, locate the HTTP Request nodes named "Set Transfer", "Get Upload Url", etc. These nodes use a credential called "Header Auth TransferNow". You need to create this credential:
   - Go to Credentials > Add Credential and select "HTTP Header Auth".
   - Give it a name (e.g., "TransferNow API Key").
   - In the Name field, enter x-api-key.
   - In the Value field, paste your personal TransferNow API key.
   - Save the credential. The existing nodes will automatically use it, or you can select it from the dropdown in each node's credentials section.
3. **Activate the Workflow**: Save the workflow and click the Activate toggle to make it live. Once activated, the On form submission node will provide a unique public URL for your form. Share this URL with users to start uploading and sending files.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
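The multi-part upload described above hinges on splitting the file into parts and uploading each one against its own pre-signed URL. A minimal sketch of the part-planning step, under the assumption of a fixed part size (the actual part size is dictated by TransferNow's API, not by this sketch):

```python
# Sketch of planning a multi-part upload. The 100 MB part size is an
# assumption for illustration; TransferNow's API defines the real limits.
PART_SIZE = 100 * 1024 * 1024

def plan_parts(file_size, part_size=PART_SIZE):
    """Split a file size in bytes into (part_number, offset, length) tuples,
    one per PUT request against a pre-signed upload URL."""
    parts = []
    offset, number = 0, 1
    while offset < file_size:
        length = min(part_size, file_size - offset)
        parts.append((number, offset, length))
        offset += length
        number += 1
    return parts
```

Each tuple then corresponds to one cycle of steps 4-6 above: request the upload URL for that part number, PUT those bytes, and confirm the part.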
by moosa
This guide explains how to send form data from n8n to a JotForm form submission endpoint using the HTTP Request node. It avoids the need for API keys and works with standard multipart/form-data.

## 📌 Overview

With this workflow, you can automatically submit data from any source (Google Sheets, databases, webhooks, etc.) directly into JotForm.

✅ Useful for:

- Pushing information into a form without manual entry.
- Avoiding API authentication.
- Syncing external data into JotForm.

## 🛠 Requirements

- A JotForm account.
- An existing JotForm form.
- Access to the form's direct link.
- Basic understanding of JotForm's field naming convention.

## ⚙️ Setup Instructions

### 1. Get the JotForm Submission URL

1. Open your form in JotForm.
2. Go to Publish → Quick Share → Copy Link. Example form URL: sample form
3. Convert it into a submission endpoint by replacing form with submit. Example: submit url

### 2. Identify Field Names

Each JotForm field has a unique identifier like q3_name[first] or q4_email.

Steps to find them:

1. Right-click a field in your published form → choose Inspect.
2. Locate the name attribute in the `<input>` tag.
3. Copy those values into the HTTP Request node in n8n.

Example mappings:

- First Name → q3_name[first]
- Last Name → q3_name[last]
- Email → q4_email

### 3. Configure HTTP Request Node in n8n

- **Method:** POST
- **URL:** Your JotForm submission URL (from Step 1).
- **Content Type:** multipart/form-data
- **Body Parameters:** Add field names and values.

Example Body Parameters:

```json
{
  "q3_name[first]": "John",
  "q3_name[last]": "Doe",
  "q4_email": "john.doe@example.com"
}
```

### 4. Test the Workflow

1. Trigger the workflow (manually or with a trigger node).
2. Submit test data.
3. Check JotForm → Submissions to confirm the entry appears.

## 🚀 Use Cases

- Automating lead capture from CRMs or websites into JotForm.
- Syncing data from Google Sheets, Airtable, or databases.
- Eliminating manual data entry when collecting responses.

## 🎛 Customization Tips

- Replace placeholder values (John, Doe, john.doe@example.com) with dynamic values.
- Add more fields by following the same naming convention.
- Use n8n expressions ({{$json.fieldName}}) to pass values dynamically.
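Outside n8n, the mapping in Step 3 amounts to translating your own field names into JotForm's `qN_` identifiers before posting. A sketch, using the example identifiers from this guide (inspect your own form for the real ones):

```python
# Sketch of building the submission payload. The qN_ field names are the
# example identifiers from this guide; your form's names will differ.
FIELD_MAP = {
    "first_name": "q3_name[first]",
    "last_name":  "q3_name[last]",
    "email":      "q4_email",
}

def build_payload(record):
    """Translate a friendly record into JotForm submission field names,
    skipping keys the form doesn't define."""
    return {jot_key: record[key]
            for key, jot_key in FIELD_MAP.items()
            if key in record}
```

With the `requests` library, this payload can then be sent as form data: `requests.post(submit_url, data=build_payload(record))`.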
by Viktor Klepikovskyi
## Nested Loops with Sub-workflows Template

### Description

This template provides a practical solution for a common n8n challenge: creating nested loops. Although loops are a powerful feature, n8n's standard Loop nodes don't work as expected in a nested structure. This template demonstrates the reliable workaround: a main workflow that calls a separate sub-workflow for each iteration.

### Purpose

The template is designed to help you handle scenarios where you need to iterate through one list of data for every item in another list. This is a crucial pattern for combining or processing related data, and it keeps your workflows clean and modular.

### Instructions for Setup

This template contains both the main workflow and the sub-workflow on a single canvas.

1. Copy the sub-workflow part of this template (starting with the Execute Sub-workflow Trigger node) and paste it into a new, empty canvas.
2. In the Execute Sub-workflow node in the main workflow on this canvas, update the Sub-workflow field to link to the new workflow you just created.
3. Run the main workflow to see the solution in action.

For a detailed walkthrough of this solution, check out the full blog post.
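Conceptually, the workaround is the same as delegating an inner loop to a function: the main workflow iterates the outer list and hands each item to the sub-workflow, which runs the inner loop. A minimal illustration (function names are illustrative, not node names):

```python
# Illustrative sketch of the pattern: the "main workflow" iterates the
# outer list and delegates the inner loop to a separate callable, just as
# the template delegates it to a sub-workflow.
def sub_workflow(outer_item, inner_list):
    """Plays the role of the sub-workflow: processes one outer item
    against every inner item."""
    return [f"{outer_item}-{inner}" for inner in inner_list]

def main_workflow(outer_list, inner_list):
    results = []
    for item in outer_list:                              # outer loop
        results.extend(sub_workflow(item, inner_list))   # Execute Sub-workflow
    return results
```

Because each sub-workflow execution is isolated, the inner loop's state can never collide with the outer loop's, which is exactly what breaks when two Loop nodes are nested directly.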
by Patrick Campbell
## PDF to Markdown Converter (LlamaCloud)

### How it works

This workflow extracts structured content from complex PDFs using LlamaCloud's advanced parsing engine:

1. **Download PDF** – Retrieves your PDF from Google Drive (or any source)
2. **Upload to LlamaCloud** – Sends the PDF to LlamaCloud's parsing API and receives a job ID
3. **Poll for completion** – Automatically checks parsing status every 30 seconds until complete
4. **Retrieve markdown** – Fetches the clean, structured markdown output with preserved tables, layouts, and formatting

The workflow handles complex PDFs with multi-column layouts, tables, and embedded images that traditional parsers struggle with.

### Set up steps

Time estimate: ~5 minutes

You'll need to configure one main integration:

- **LlamaCloud API key** – Sign up at cloud.llamaindex.ai, generate an API key, and create a Generic Header Auth credential in n8n with `Authorization: Bearer YOUR_API_KEY`
- **Google Drive OAuth (optional)** – Connect your Google account if using the Drive node, or replace with any PDF source

Once configured, the workflow automatically handles parsing, retry logic, and markdown extraction. Output is ready for AI processing or content transformation workflows.
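The poll-for-completion step follows a standard pattern: check the job status on an interval until it succeeds, fails, or times out. A sketch of that loop, where the 30-second interval comes from the description above but the status values are illustrative rather than LlamaCloud's actual API vocabulary:

```python
import time

# Sketch of the poll-until-complete step. The 30-second interval matches
# the workflow description; status strings are illustrative placeholders.
def poll_until_done(check_status, interval=30, max_attempts=20, sleep=time.sleep):
    """Call check_status() until it returns 'SUCCESS', waiting `interval`
    seconds between attempts. Raises on failure or timeout. Returns the
    number of attempts made."""
    for attempt in range(max_attempts):
        status = check_status()
        if status == "SUCCESS":
            return attempt + 1
        if status == "ERROR":
            raise RuntimeError("parsing job failed")
        sleep(interval)
    raise TimeoutError("parsing job did not finish in time")
```

In the workflow, `check_status` corresponds to the HTTP Request node that queries the parsing job by its job ID; once the loop exits, a final request fetches the markdown result.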
by Antonio Gasso
## Overview

Stop manually creating folder structures for every new client or project. This workflow provides a simple form where users enter a name, then automatically duplicates your template folder structure in Google Drive, replacing all placeholders with the submitted name.

## What This Workflow Does

- Displays a form where users enter a name (client, project, event, etc.)
- Creates a new main folder in Google Drive
- Calls Google Apps Script to duplicate your entire template structure
- Replaces all {{NAME}} placeholders in file and folder names

## Key Features

- **Simple form interface**: No technical knowledge required to use
- **Recursive duplication**: Copies all subfolders and files
- **Smart placeholders**: Automatically replaces {{NAME}} everywhere
- **Production-ready**: Works immediately after setup

## Prerequisites

- Google Drive account with OAuth2 credentials in n8n
- Google Apps Script deployment (code below)
- Template folder in Drive using {{NAME}} as a placeholder

## Setup

### Step 1: Create your template folder

```
📁 {{NAME}} - Project Files
├── 📁 01. {{NAME}} - Documents
├── 📁 02. {{NAME}} - Assets
├── 📁 03. Deliverables
└── 📄 {{NAME}} - Brief.gdoc
```

### Step 2: Deploy Apps Script

1. Go to script.google.com
2. Create new project → Paste the code below
3. Deploy → New deployment → Web app
4. Execute as: Me | Access: Anyone
5. Copy the deployment URL

### Step 3: Configure workflow

Replace these placeholders:

- DESTINATION_PARENT_FOLDER_ID: Where new folders are created
- YOUR_APPS_SCRIPT_URL: URL from Step 2
- YOUR_TEMPLATE_FOLDER_ID: Folder to duplicate

### Step 4: Test

Activate workflow → Open form URL → Submit a name → Check Drive!
## Apps Script Code

```javascript
function doPost(e) {
  try {
    var params = e.parameter;
    var templateFolderId = params.templateFolderId;
    var name = params.name;
    var destinationFolderId = params.destinationFolderId;
    if (!templateFolderId || !name) {
      return jsonResponse({
        success: false,
        error: 'Missing required parameters: templateFolderId and name are required'
      });
    }
    var templateFolder = DriveApp.getFolderById(templateFolderId);
    if (destinationFolderId) {
      var destinationFolder = DriveApp.getFolderById(destinationFolderId);
      copyContentsRecursively(templateFolder, destinationFolder, name);
      return jsonResponse({
        success: true,
        id: destinationFolder.getId(),
        url: destinationFolder.getUrl(),
        name: destinationFolder.getName(),
        mode: 'copied_to_existing',
        timestamp: new Date().toISOString()
      });
    } else {
      var parentFolder = templateFolder.getParents().next();
      var newFolderName = replacePlaceholders(templateFolder.getName(), name);
      var newFolder = parentFolder.createFolder(newFolderName);
      copyContentsRecursively(templateFolder, newFolder, name);
      return jsonResponse({
        success: true,
        id: newFolder.getId(),
        url: newFolder.getUrl(),
        name: newFolder.getName(),
        mode: 'created_new',
        timestamp: new Date().toISOString()
      });
    }
  } catch (error) {
    return jsonResponse({ success: false, error: error.toString() });
  }
}

function replacePlaceholders(text, name) {
  var result = text;
  result = result.replace(/\{\{NAME\}\}/g, name);
  result = result.replace(/\{\{name\}\}/g, name.toLowerCase());
  result = result.replace(/\{\{Name\}\}/g, name);
  return result;
}

function copyContentsRecursively(sourceFolder, destinationFolder, name) {
  var files = sourceFolder.getFiles();
  while (files.hasNext()) {
    try {
      var file = files.next();
      var newFileName = replacePlaceholders(file.getName(), name);
      file.makeCopy(newFileName, destinationFolder);
      Utilities.sleep(150);
    } catch (error) {
      Logger.log('Error copying file: ' + error.toString());
    }
  }
  var subfolders = sourceFolder.getFolders();
  while (subfolders.hasNext()) {
    try {
      var subfolder = subfolders.next();
      var newSubfolderName = replacePlaceholders(subfolder.getName(), name);
      var newSubfolder = destinationFolder.createFolder(newSubfolderName);
      Utilities.sleep(200);
      copyContentsRecursively(subfolder, newSubfolder, name);
    } catch (error) {
      Logger.log('Error copying subfolder: ' + error.toString());
    }
  }
}

function jsonResponse(data) {
  return ContentService
    .createTextOutput(JSON.stringify(data))
    .setMimeType(ContentService.MimeType.JSON);
}
```

## Use Cases

- **Agencies**: Client folder structure on new signup
- **Freelancers**: Project folders from intake form
- **HR Teams**: Employee onboarding folders
- **Schools**: Student portfolio folders
- **Event Planners**: Event documentation folders

## Notes

- Apps Script may take 60+ seconds for large structures
- Timeout is set to 5 minutes for complex templates
- Your Google account needs edit access to the template and destination folders
by Avkash Kakdiya
## How it works

This workflow automates FTP-to-Google Drive file transfers. It runs on a schedule, retrieves files in batches, downloads them from FTP, and uploads them to Google Drive while keeping the original filenames. Batching ensures efficient, smooth processing without overloading the system.

## Step-by-step

### 1. Trigger and list files

- **Schedule Trigger** – Starts the workflow at configured intervals.
- **List Files from FTP** – Connects to the FTP server and retrieves a list of files from the target folder.

### 2. Batch processing setup

- **Batch Files** – Splits files into small batches for sequential processing.

### 3. File handling

- **Download File from FTP** – Downloads each file from FTP for further processing.

### 4. Cloud upload

- **Upload to Google Drive** – Uploads the file to Google Drive, retaining its original name for consistency.

## Why use this?

- Eliminates manual FTP downloads and Google Drive uploads.
- Ensures smooth sequential processing with batch handling.
- Preserves original filenames for clarity and traceability.
- Runs automatically on a schedule, reducing human intervention.
- Scales easily to handle large volumes of files efficiently.
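The batching step works like n8n's Split In Batches node: the file list from FTP is cut into fixed-size chunks that are processed one at a time. A minimal sketch:

```python
# Sketch of the batching step: cut the FTP file list into fixed-size
# chunks, processed sequentially so the system is never overloaded.
def split_in_batches(items, batch_size):
    """Yield successive batches of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
```

Each yielded batch would then be downloaded from FTP and uploaded to Google Drive before the next batch begins.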
by Marco Cassar
## Who it's for

Anyone calling a Google Cloud Run service from n8n who wants a small, reusable auth layer instead of wiring tokens into every workflow.

## What it does / How it works

This sub-workflow checks whether an incoming id_token exists and is still valid (with a 5-minute buffer). If it's good, it reuses it. If not, it signs a short-lived JWT with your service account, exchanges it at Google's token endpoint, and returns a fresh id_token. It also passes through service_url and an optional service_path so the caller can hit the endpoint right away. (Designed to be called via Execute Workflow from your main flow.)

## How to set up

1. Add your JWT (PEM) credential using the service account private_key.
2. In Vars, set client_email (from your key) and confirm token_uri is https://oauth2.googleapis.com/token.
3. Call this sub-workflow with service_url (and optional service_path). Optionally include a prior id_token to enable reuse.

## Inputs / Outputs

- Inputs: id_token (optional), service_url, service_path
- Outputs: id_token, service_url, service_path

## Notes

- Built for loops: pair with a Merge/Split strategy to attach id_token to each item.
- Keep credentials in n8n Credentials (no keys in nodes).
- Full write-up and context: Build a Secure Google Cloud Run API, Then Call It from n8n (Free Tier), by Marco Cassar
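The 5-minute-buffer validity check can be done by reading the `exp` claim from the token's payload; no signature verification is needed just to decide whether to reuse it. The sub-workflow's internal implementation isn't shown here, so this stdlib-only sketch is one common way to express the same check:

```python
import base64, json, time

# Sketch of the token-reuse check: decode the JWT payload and reuse the
# token only if it is valid for at least another 5 minutes. This is an
# illustrative implementation, not the sub-workflow's exact expression.
def should_reuse(id_token, buffer_seconds=300, now=None):
    if not id_token:
        return False
    now = time.time() if now is None else now
    try:
        payload_b64 = id_token.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return payload["exp"] > now + buffer_seconds
    except (IndexError, KeyError, ValueError):
        return False  # malformed token: fetch a fresh one
```

If this returns False, the sub-workflow falls through to signing a new JWT and exchanging it at the token endpoint.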
by Avkash Kakdiya
## How it works

This workflow automatically scrapes LinkedIn job postings for a list of target companies and organizes the results in Google Sheets. Every Monday morning, it checks your company list, runs a LinkedIn job scrape using Phantombuster, waits for the data to be ready, and then fetches the results. Finally, it formats the job postings into a clean structure and saves them into a results sheet for easy analysis.

## Step-by-step

### 1. Start with the Scheduled Trigger

- The workflow runs automatically at 9:00 AM every Monday.
- It reads your "Companies Sheet" in Google Sheets and filters only those marked with Status = Pending.

### 2. Scrape LinkedIn Jobs

- The workflow launches your Phantombuster agent with the LinkedIn profile URLs from the sheet.
- It waits 3 minutes to let the scraper finish running.
- Then it fetches the output CSV link containing the job posting results.

### 3. Format the Data

The scraped data is cleaned and structured into fields like:

- Company Name
- Job Title
- Job Description
- Job Link
- Date Posted
- Location
- Employment Type

### 4. Save Everything in Google Sheets

- The formatted job data is appended to your "Job Results" Google Sheet.
- Each entry includes a scrape date so you can track when the data was collected.

## Why use this?

- Automates job market research and competitive hiring analysis.
- Collects structured job posting data from multiple companies at scale.
- Saves time by running on a schedule with no manual effort.
- Keeps all results organized in Google Sheets for easy review and sharing.
- Helps HR and recruitment teams stay ahead of competitors' hiring activity.
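The "Format the Data" step amounts to mapping the scraper's CSV columns onto the structured fields above and stamping each row with the scrape date. A sketch, where the source column names are hypothetical (adjust them to your Phantombuster agent's actual output):

```python
import csv, io
from datetime import date

# Sketch of the "Format the Data" step. The source column names are
# hypothetical placeholders; the target field names are the ones listed
# in the description above.
COLUMN_MAP = {
    "companyName": "Company Name",
    "jobTitle":    "Job Title",
    "description": "Job Description",
    "jobUrl":      "Job Link",
    "postedDate":  "Date Posted",
    "location":    "Location",
    "jobType":     "Employment Type",
}

def format_rows(raw_csv, scrape_date=None):
    """Parse the scraper's CSV output into rows ready for Google Sheets."""
    scrape_date = scrape_date or date.today().isoformat()
    rows = []
    for record in csv.DictReader(io.StringIO(raw_csv)):
        row = {target: record.get(source, "")
               for source, target in COLUMN_MAP.items()}
        row["Scrape Date"] = scrape_date
        rows.append(row)
    return rows
```

Each returned dict maps one-to-one onto a row appended to the "Job Results" sheet.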
by Marco Cassar
## Who it's for

Anyone who wants a dead-simple, free-tier-friendly way to run custom API logic on Google Cloud Run and call it securely from n8n: no public exposure, no local hosting.

## What it does

Minimal flow: Set → JWT (sign) → HTTP (token exchange) → HTTP (call Cloud Run with `Authorization: Bearer <id_token>`). No caching, no extras; just enough to authenticate and hit your endpoint.

## How to set up

General instructions below; see my detailed guide for more info: Build a Secure Google Cloud Run API, Then Call It from n8n (Free Tier)

1. Create a Cloud Run service and enable Require authentication (Cloud IAM).
2. Create a Google Service Account with Cloud Run Invoker on that service.
3. In n8n, set service_url, client_email, and token_uri (https://oauth2.googleapis.com/token) in the Set node.
4. Create a JWT (PEM) credential from your service account key (paste the full BEGIN/END block).
5. Run the workflow; the second HTTP node calls your Cloud Run URL with the ID token.

## Requirements

- Cloud Run service URL (auth required)
- Google Service Account with Cloud Run Invoker
- Private key JSON fields downloaded from the Service Account (needed to generate the JWT credential)

## More details

Full write-up (minimal + modular versions): Build a Secure Google Cloud Run API, Then Call It from n8n (Free Tier)
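For orientation, the JWT that the sign step produces carries a specific claim set, following Google's documented service-account ID-token exchange; the workflow's JWT node produces an equivalent signed assertion. A sketch of the claims and the token-exchange body (the signing itself is left to the JWT credential):

```python
import time

TOKEN_URI = "https://oauth2.googleapis.com/token"

# Sketch of the claim set signed in the "JWT (sign)" step, per Google's
# service-account ID-token flow. The workflow's JWT node signs an
# equivalent assertion with the service account's private key.
def build_claims(client_email, service_url, now=None, lifetime=3600):
    now = int(time.time()) if now is None else now
    return {
        "iss": client_email,            # service account email
        "aud": TOKEN_URI,               # assertion is addressed to the token endpoint
        "target_audience": service_url, # Cloud Run URL the id_token will be valid for
        "iat": now,
        "exp": now + lifetime,
    }

def build_exchange_body(signed_jwt):
    """Form body POSTed to the token endpoint; the response contains id_token."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": signed_jwt,
    }
```

The returned `id_token` then goes into the `Authorization: Bearer <id_token>` header of the final HTTP call to Cloud Run.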