by Lucas Perret
This workflow enriches a Webflow form submission in real time using Datagma. Based on the result, a specific Calendly link is shown on the website: if the process outcome is '1', a link for a one-on-one demo is provided; if the outcome is '2', a link for a group demo is shown. Full guide here: Real-time Lead Routing
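The routing step described above can be sketched as a small function, for example in an n8n Code node. This is a minimal illustration, not the template's actual implementation: the Calendly URLs are placeholders, and only the outcome values '1' and '2' come from the description.

```javascript
// Pick a Calendly link based on the Datagma enrichment outcome.
// The URLs below are hypothetical placeholders; replace them with
// your real one-on-one and group demo links.
function pickCalendlyLink(outcome) {
  const links = {
    '1': 'https://calendly.com/acme/one-on-one-demo',
    '2': 'https://calendly.com/acme/group-demo',
  };
  // Fall back to the group demo if the outcome is unrecognized.
  return links[String(outcome)] ?? links['2'];
}
```

In the workflow itself this branching would typically be an IF or Switch node rather than code, but the logic is the same.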
by kreonovo
What this does: This automation dynamically creates a channel on your Discord server for each of your Webflow forms, then sends formatted form submissions as messages in those channels. This is useful because Webflow only notifies a single email address of a form submission. With this workflow you can enhance your Webflow form management by receiving submissions in Discord, which is great if you need to notify multiple team members or communities. Usage guide: Full written and video guide. Simply create credentials for Webflow and Discord and connect them to the nodes. The video guide demonstrates a real-world use case using a Webflow template and breaks down in detail how each node works.
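The "formatted form submissions as messages" step might look like the sketch below. This is illustrative only: the field names are assumptions, and the actual message layout depends on your form's schema and how the template's Discord node is configured.

```javascript
// Turn a Webflow form submission (an object of field name -> value)
// into a Discord message body using Discord's markdown bold syntax.
// Field names here are hypothetical examples.
function formatSubmission(formName, data) {
  const lines = Object.entries(data).map(([key, value]) => `**${key}:** ${value}`);
  return `New submission on **${formName}**\n` + lines.join('\n');
}
```
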
by jason
If you have made some investments in cryptocurrency, this workflow will let you create an Airtable base that updates the value of your portfolio every hour, so you can track how well your investments are doing. You can check out my Airtable base to see how it works, or copy my base and customize this workflow for yourself. To implement this workflow, update the Airtable nodes with your own credentials and make sure they are pointing to your Airtable base.
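The hourly valuation boils down to multiplying each holding by its current price. A minimal sketch, assuming each Airtable record holds a coin symbol and an amount, and prices come from a rates map fetched from a price API (all field names here are hypothetical):

```javascript
// Total portfolio value: sum of amount * current price per holding.
// Unknown symbols are valued at 0 rather than failing the run.
function portfolioValue(holdings, prices) {
  return holdings.reduce(
    (total, h) => total + h.amount * (prices[h.symbol] ?? 0),
    0
  );
}
```

In the workflow this arithmetic would sit between the price-fetch step and the Airtable update nodes.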
by Yaron Been
Luma Photon Flash Image Generator

**Description:** Accelerated variant of Photon, prioritizing speed while maintaining quality.

**Overview:** This n8n workflow integrates with the Replicate API to use the luma/photon-flash model. This powerful AI model can generate high-quality image content based on your inputs.

**Features**
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required Parameters**
- **prompt** (string): Text prompt for image generation

**Optional Parameters**
- **seed** (integer, default: None): Random seed; set for reproducible generation
- **aspect_ratio** (string, default: 16:9): Aspect ratio of the generated image
- **image_reference** (string, default: None): Reference image to guide generation
- **style_reference** (string, default: None): Style reference image to guide generation
- **character_reference** (string, default: None): Character reference image to guide generation
- **image_reference_url** (string, default: None): Deprecated; use image_reference instead
- **style_reference_url** (string, default: None): Deprecated; use style_reference instead
- **image_reference_weight** (number, default: 0.85): Weight of the reference image; larger values give the reference image a stronger influence on the generated image
- **style_reference_weight** (number, default: 0.85): Weight of the style reference image
- **character_reference_url** (string, default: None): Deprecated; use character_reference instead

**How to Use**
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

**API Reference**
- Model: luma/photon-flash
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
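Under the hood, the workflow's "create prediction" and "status checking" steps map onto Replicate's predictions API roughly as sketched below. This follows Replicate's public API shapes (Bearer auth, an `input` object, and a status field that moves to `succeeded` or `failed`), but verify against the current Replicate docs before relying on it; `fetchJson` is a stand-in for whatever HTTP helper your environment provides.

```javascript
// Build the request an HTTP node would send to create a prediction
// for the luma/photon-flash model.
function buildPredictionRequest(apiKey, input) {
  return {
    url: 'https://api.replicate.com/v1/models/luma/photon-flash/predictions',
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ input }),
  };
}

// Poll a prediction's GET URL until it settles, mirroring the
// workflow's automated status checking. `fetchJson(url)` must return
// the parsed prediction object.
async function waitForPrediction(getUrl, fetchJson, delayMs = 2000) {
  for (;;) {
    const p = await fetchJson(getUrl);
    if (p.status === 'succeeded') return p.output;
    if (p.status === 'failed' || p.status === 'canceled') {
      throw new Error(`Prediction ${p.status}: ${p.error ?? ''}`);
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

The same pattern applies to the other Replicate-based templates in this list; only the model path and `input` parameters change.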
by Yaron Been
Fire V Sekai.mediapipe Labeler Image Generator

**Description:** Mediapipe Blendshape Labeler: predicts the blend shapes of an image.

**Overview:** This n8n workflow integrates with the Replicate API to use the fire/v-sekai.mediapipe-labeler model. This powerful AI model can generate high-quality image content based on your inputs.

**Features**
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required Parameters**
- **media_path** (string): Input image, video, or training zip file

**Optional Parameters**
- **test_mode** (boolean, default: False): Enable test mode for quick verification
- **max_people** (integer, default: 100): Maximum number of people to detect (1-100)
- **export_train** (boolean, default: True): Export a training zip containing JSON annotations and frame PNGs
- **aligned_media** (string, default: None): Optional video that is aligned with the input video's annotations
- **frame_sample_rate** (integer, default: 1): Process every nth frame for video input

**How to Use**
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

**API Reference**
- Model: fire/v-sekai.mediapipe-labeler
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
by Baptiste Fort
🎯 Workflow Goal

Still manually checking form responses in your inbox? What if every submission landed neatly in Airtable, and you got a clean Slack message instantly? That's exactly what this workflow does. No code, no delay: just a smooth automation to keep your team in the loop: Tally → Airtable → Slack.

Build an automated flow that:
- receives Tally form submissions,
- cleans up the data into usable fields,
- stores the results in Airtable,
- and automatically notifies a Slack channel.

**Step 1 – Connect Tally to n8n**

What we're setting up: a Webhook node in POST mode.

1. Add a Webhook node.
2. Set it to POST.
3. Copy the generated URL.
4. In Tally → Integrations → Webhooks → paste this URL.
5. Submit a test response on your form to capture a sample structure.

**Step 2 – Clean the data**

After connecting Tally, you receive raw data inside a fields[] array. Let's convert that into something clean and structured. Goal: extract key info like Full Name, Email, and Phone into simple keys.

1. Add a Set node right after the Webhook.
2. Add new values (String type) manually:
   - Name: Full Name → Value: {{$json["fields"][0]["value"]}}
   - Name: Email → Value: {{$json["fields"][1]["value"]}}
   - Name: Phone → Value: {{$json["fields"][2]["value"]}}
   (Adapt the indexes based on your form structure.)
3. Use the data preview in the Webhook node to check the correct order.

Output: you now get clean data like:

    {
      "Full Name": "Jane Doe",
      "Email": "jane@example.com",
      "Phone": "+123456789"
    }

**Step 3 – Send to Airtable ✅**

Once the data is cleaned, let's store it in Airtable automatically. Goal: create one new Airtable row for each form submission.

1. Add an Airtable node (Create Record).
2. Authenticate or connect your API token.
3. Choose the base and table.
4. Map the fields:
   - Name: {{$json["Full Name"]}}
   - Email: {{$json["Email"]}}
   - Phone: {{$json["Phone"]}}

Output: each submission creates a clean new row in your Airtable table.
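If you prefer not to map indexes by hand in the Set node, the Step 2 cleanup can also be done in a single Code node. This sketch assumes Tally's webhook delivers fields[] entries with `label` and `value` properties; check your own test submission's structure before using it.

```javascript
// Remap Tally's fields[] array into a flat object keyed by field label,
// e.g. { "Full Name": "Jane Doe", "Email": "jane@example.com" }.
function cleanFields(fields) {
  const out = {};
  for (const field of fields) {
    out[field.label] = field.value;
  }
  return out;
}
```

Keying by label instead of index also keeps the mapping stable if you reorder questions in the form.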
**Step 4 – Add a delay ⌛**

After saving to Airtable, it's a good idea to insert a short pause; this prevents actions like Slack messages from stacking too fast. Goal: wait a few seconds before sending the Slack notification.

1. Add a Wait node.
2. Set it to wait for a few seconds.

**Step 5 – Send a message to Slack 💬**

Now that the record is stored, let's send a Slack message to notify your team. Goal: automatically alert your team in Slack when someone fills in the form.

1. Add a Slack node (Send Message).
2. Connect your account.
3. Choose the target channel, like #leads.
4. Use this message format:

   New lead received!
   Name: {{$json["Full Name"]}}
   Email: {{$json["Email"]}}
   Phone: {{$json["Phone"]}}

Output: your Slack team is notified instantly, with all lead info in one clean message.

**Workflow Complete**

Your automation now looks like this: Tally → Clean → Airtable → Wait → Slack. Every submission turns into clean data, gets saved in Airtable, and alerts your team on Slack: fully automated, no extra work.
by Daniel Ng
Restore All n8n Workflows from Google Drive Backups

Restoring multiple n8n workflows manually, especially when migrating your n8n instance to another host or server, can be an incredibly daunting and time-consuming task. Imagine having to individually export and then manually import hundreds of workflows; it's a recipe for errors and significant downtime.

This workflow provides a streamlined way to restore all your n8n workflows from backup JSON files stored in a designated Google Drive folder. It's an essential tool for disaster recovery, migrating workflows to a new n8n instance, or recovering from accidental deletions, ideally used in conjunction with a backup solution like our "Auto Backup Workflows To Google Drive" template.

For more powerful n8n templates, visit our website or contact us at AI Automation Pro. We help your business build custom AI workflow automation and apps.

**Who is this for?**
- **n8n users and administrators** who have previously backed up their n8n workflows as JSON files to Google Drive.
- **Anyone needing to recover their n8n setup,** whether due to system failure, data corruption, accidental deletions, or an instance migration.

**What problem is this workflow solving? / Use case**

Restoring multiple n8n workflows manually can be a slow and error-prone process. This workflow solves that by:
- **Automating bulk restore:** Quickly re-imports all workflows from a specified Google Drive backup folder, drastically cutting down on manual effort.
- **Disaster recovery:** Enables rapid recovery of your automation environment, minimizing downtime after a system failure or data corruption.
- **Simplified instance migration:** Makes transferring your entire workflow suite to a new n8n server significantly more manageable and less error-prone than manual imports.
- **Data integrity:** Restores workflows to a known good state from your backups, ensuring consistency after a recovery or migration.
**What this workflow does**
1. Manual trigger: you initiate the workflow manually whenever a restore operation is needed.
2. List backup files: the workflow accesses a specific Google Drive folder (which you must configure) and lists all the files within it, assuming these are your n8n workflow JSON backup files.
3. Iterate and process: it then loops through each file found in the Google Drive folder:
   - Download workflow: downloads the individual workflow JSON file from Google Drive.
   - Extract content: parses the downloaded file to extract the JSON data representing the workflow.
   - Import to n8n: uses the n8n API to create a new workflow (or update an existing one if an ID match is found) in your current n8n instance using the extracted JSON data.
   - Wait step: pauses for 3 seconds after each workflow creation attempt to help manage system load and avoid potential API rate-limiting issues.

**Step-by-step setup**
1. Import template: upload the provided JSON file into your n8n instance.
2. Configure credentials:
   - Google Drive nodes: create or select existing Google Drive OAuth2 API credentials.
   - n8n node: configure your n8n API credentials to allow the workflow to create/update workflows in your instance.
3. Specify the Google Drive backup folder (CRITICAL):
   - Open the "Google Drive Get All Workflows" node.
   - Locate the "Filter" section and, within it, the "Folder ID" parameter.
   - The default value is a placeholder URL. You MUST change it to the URL of the Google Drive folder that contains your n8n workflow .json backup files. This would typically be one of the hourly folders (e.g., n8n_backup_YYYY-MM-DD_HH) created by the companion backup workflow.
4. Activate workflow: although manually triggered, the workflow needs to be active in your n8n instance to be runnable.
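The "Import to n8n" step corresponds to a call against n8n's public REST API. The sketch below shows roughly what that request looks like; the endpoint path and `X-N8N-API-KEY` header follow n8n's documented public API, but the exact set of accepted body fields varies by n8n version, so treat this as an assumption to verify against your instance's API docs.

```javascript
// Build a create-workflow request for n8n's public API.
// Read-only fields such as id and active are stripped, since the
// create endpoint typically rejects them; only the writable fields
// (name, nodes, connections, settings) are sent.
function buildImportRequest(baseUrl, apiKey, workflowJson) {
  const { name, nodes, connections, settings } = workflowJson;
  return {
    url: `${baseUrl}/api/v1/workflows`,
    method: 'POST',
    headers: {
      'X-N8N-API-KEY': apiKey,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ name, nodes, connections, settings: settings ?? {} }),
  };
}
```
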
**How to customize this workflow to your needs**
- **Selective restore:**
  - Option 1 (manual): before running the workflow, move only the specific workflow JSON files you want to restore into the source Google Drive folder configured in the "Google Drive Get All Workflows" node.
  - Option 2 (automated filter): insert an "Edit Fields" or "Filter" node after the "Google Drive Get All Workflows" node to programmatically select which files (e.g., based on filename patterns) proceed to the "Loop Over Items" node for restoration.
- **Adjust wait time:** the "Wait" node is set to 3 seconds. Increase it if you have a very large number of workflows or your n8n instance needs more time between API calls; for smaller batches on powerful instances, you might decrease it.
- **Error handling:** for enhanced robustness, consider adding error handling branches (e.g., "Error Trigger" nodes or "Continue on Fail" settings within nodes) to log or send notifications if a specific workflow fails to import.

**Important considerations**
- **Workflow overwriting/updating:** if a workflow with the same id as one in a backup JSON file already exists in your n8n instance, this restore process will typically update/overwrite that workflow with the version from the backup. If the id does not correspond to any existing workflow, a new workflow is created.
- **Idempotency:** running this workflow multiple times on the same backup folder re-processes all files: existing workflows are updated/overwritten again and missing ones are created. Ensure this is the intended behavior.
- **Companion backup workflow:** this restore workflow is ideally paired with backups created by a process like our "Auto Backup Workflows To Google Drive" template, which saves workflows in the expected JSON format.
- **Test safely:** it's highly recommended to test this workflow on a non-production or development n8n instance first, especially when restoring a large number of critical workflows or if you're unsure about the overwrite behavior in your specific n8n setup.
- **Source folder content:** ensure the specified Google Drive folder contains only n8n workflow JSON files that you intend to restore. Other file types may cause errors in the "Extract from File" node.
by Daniel Ng
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

Auto Backup n8n Credentials to Google Drive

This workflow automates the backup of all your n8n credentials. It can be triggered manually for on-demand backups or run automatically on a schedule (defaulting to daily execution). It executes a command to export decrypted credentials, formats them into a JSON file, and uploads that file to a specified Google Drive folder. This process is essential for creating secure backups of your sensitive credential data, facilitating instance recovery or migration. We recommend using this backup workflow in conjunction with a restore solution like our "Restore Credentials from Google Drive Backups" template.

For more powerful n8n templates, visit our website or contact us at AI Automation Pro. We help your business build custom AI workflow automation and apps.

**Who is this for?**

This workflow is designed for n8n administrators and users who require a reliable method to back up their n8n credentials. It is particularly beneficial for those managing self-hosted n8n instances, where direct server access allows for command-line operations.

**What problem is this workflow solving? / Use case**

Managing and backing up n8n credentials manually is tedious, susceptible to errors, and often overlooked. This workflow provides an automated, secure, and consistent way to back up all credential data. The primary use case is ensuring a recovery point for credentials exists, safeguarding against data loss, assisting in instance migrations, and supporting disaster recovery preparedness, ideally on a regular, automated basis.

**What this workflow does**
1. Triggers: the workflow includes two types of triggers:
   - Manual trigger: an "On Click Trigger" allows on-demand execution whenever needed.
   - Scheduled trigger: a "Schedule Trigger" is included, designed for automated daily backups.
2. Export credentials: an "Execute Command" node runs the shell command npx n8n export:credentials --all --decrypted, which exports all credentials from the n8n instance in decrypted JSON format.
3. Format JSON data: the command output is processed by a "Code" node ("JSON Formatting Data") that extracts, parses, and formats the JSON to ensure it is well-structured.
4. Aggregate credentials: an "Aggregate" node ("Aggregate Cridentials") combines individual credential entries into a single JSON array.
5. Convert to file: the "Convert To File" node transforms the aggregated JSON array into a binary file, prepared as n8n_backup_credentials.json.
6. Upload to Google Drive: the "Google Drive Upload File" node uploads the generated JSON file to a specified folder in Google Drive.

**Step-by-step setup**
1. n8n instance environment: the n8n instance must have access to the npx command and the n8n-cli package, and the "Execute Command" node must be able to run shell commands on the server where n8n is hosted.
2. Google Drive credentials: in the "Google Drive Upload File" node, select or create your Google Drive OAuth2 API credentials.
3. Google Drive folder ID: update the folderId parameter in the "Google Drive Upload File" node with the ID of your desired Google Drive folder.
4. File name (optional): the backup file will be named n8n_backup_credentials.json. You can customize this in the "Google Drive Upload File" node.
5. Configure the schedule trigger: review the "Schedule Trigger" configuration to ensure it runs daily at your preferred time.

**How to customize this workflow to your needs**
- **Adjust schedule:** fine-tune the "Schedule Trigger" for different intervals (e.g., weekly, hourly) or specific days/times as per your requirements.
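The "Format JSON Data" step essentially parses the export command's stdout into structured items. A minimal sketch, assuming the Execute Command node hands the Code node the raw stdout string and that `n8n export:credentials --all --decrypted` prints a JSON array (check the actual output of your n8n version):

```javascript
// Parse the stdout of `npx n8n export:credentials --all --decrypted`
// into an array of credential objects, tolerating surrounding
// whitespace and a single-object export.
function parseExportedCredentials(stdout) {
  const parsed = JSON.parse(stdout.trim());
  return Array.isArray(parsed) ? parsed : [parsed];
}
```

Each element of the returned array then becomes one item for the Aggregate node downstream.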
- **Notifications:** add notification nodes (e.g., Slack, Email, Discord) after the "Google Drive Upload File" node to receive alerts on successful backups or failures.
- **Enhanced error handling:** incorporate error handling branches using "Error Trigger" nodes or conditional logic to manage potential failures.
- **Client-side encryption (advanced):** if your security policy requires the backup file itself to be encrypted at rest in Google Drive, add a step before uploading: insert a "Code" node or an "Execute Command" node with an encryption utility (such as GPG) to encrypt the n8n_backup_credentials.json file. Remember that you will then need a corresponding decryption process.
- **Dynamic file naming:** modify the "Google Drive Upload File" node to include a timestamp in the filename (e.g., n8n_backup_credentials_{{$now.toFormat('yyyyMMddHHmmss')}}.json) to keep multiple backup versions.

**Important note on credential security**

To simplify the setup and use of this backup workflow, the exported credentials are stored in the resulting JSON file in a decrypted state. This means the backup file itself is not further encrypted by this workflow. Consequently, it is critically important to:
- Ensure the Google Drive account used for backups is highly secure (e.g., strong password, two-factor authentication).
- Restrict access to the Google Drive folder where these backups are stored to authorized personnel only.
by Nicolas Le Gallo
Who is this template for?

Basically anyone involved in recurring recruiting processes who wants to save a considerable amount of time and energy: talent acquisition managers, recruiting consultants, hiring managers, founders, etc.

What it does:

It takes a messy, raw transcript from an "intake meeting" between a recruiter and a hiring manager and turns it into a clean, exhaustive brief plus scorecard templates for each interview round. It does this in under 1 MINUTE, while the usual manual process typically takes several hours.

How to customize this workflow to your needs:
- Google Docs is the default output because it allows easy modification, but you can output in any format and/or store the result wherever you want.
- I strongly suggest choosing one of the latest LLM models for better output quality.
- Both LLM prompts can be revised to better match your expectations.
by Yaron Been
Prunaai Flux Schnell Image Generator

**Description:** A 3x faster FLUX.1 [schnell] model from Black Forest Labs, optimised with Pruna with minimal quality loss. Contact us for more at pruna.ai.

**Overview:** This n8n workflow integrates with the Replicate API to use the prunaai/flux-schnell model. This powerful AI model can generate high-quality image content based on your inputs.

**Features**
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required Parameters**
- **prompt** (string): Prompt for generated image

**Optional Parameters**
- **seed** (integer, default: None): Random seed; set for reproducible generation
- **megapixels** (string, default: 1): Approximate number of megapixels for generated image
- **speed_mode** (string, default: Juiced 🔥 (default)): Run faster predictions with a model optimized for speed
- **num_outputs** (integer, default: 1): Number of outputs to generate
- **aspect_ratio** (string, default: 1:1): Aspect ratio of the output image
- **output_format** (string, default: jpg): Format of the output images
- **output_quality** (integer, default: 80): Quality when saving the output images, from 0 to 100 (100 is best quality); not relevant for .png outputs
- **num_inference_steps** (integer, default: 4): Number of denoising steps; 4 is recommended, and fewer steps produce lower-quality outputs faster

**How to Use**
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

**API Reference**
- Model: prunaai/flux-schnell
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
by Yaron Been
Prunaai Flux.1 Dev Image Generator

**Description:** This is the fastest Flux Dev endpoint in the world; contact us for more at pruna.ai.

**Overview:** This n8n workflow integrates with the Replicate API to use the prunaai/flux.1-dev model. This powerful AI model can generate high-quality image content based on your inputs.

**Features**
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Required Parameters**
- **prompt** (string): Prompt

**Optional Parameters**
- **seed** (integer, default: -1): Seed
- **guidance** (number, default: 3.5): Guidance scale
- **image_size** (integer, default: 1024): Base image size (longest side)
- **speed_mode** (string, default: Juiced 🔥 (default)): Speed optimization level
- **aspect_ratio** (string, default: 1:1): Aspect ratio of the output image
- **output_format** (string, default: jpg): Output format
- **output_quality** (integer, default: 80): Output quality (for jpg and webp)
- **num_inference_steps** (integer, default: 28): Number of inference steps

**How to Use**
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

**API Reference**
- Model: prunaai/flux.1-dev
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
by MRJ
Modular Hazard Analysis Workflow: Free Version

**Business Value Proposition**

Accelerates ISO 26262 compliance for automotive/industrial systems by automating safety analysis while maintaining rigorous audit standards.

📈 **Key Benefits**
- Time: instant report generation vs. weeks of documentation for HAZOP
- Risk mitigation: pre-validated templates reduce human error

**Quick guide**
1. Input a systems_description file to the workflow.
2. Provide an OPENAI_API_KEY to the chat model. You can also replace the chat model with the model of your interest.

⏯️ **Running the Workflow**

Refer to the GitHub repo for a detailed explanation of how the workflow can be used.

📧 **Contact**

For collaboration proposals or security issues, contact me by email.

⚠️ **Validation & Limitations: AI-Assisted Analysis Considerations**

| Advantage | Mitigation Strategy | Implementation Example |
|-----------|---------------------|------------------------|
| Rapid hazard identification | Human validation layer | Manual review nodes in workflow |
| Consistent S/E/C scoring | Rule-based validation | ASIL-D → Redundancy check |
| Edge case coverage | Cross-reference with historical data | Integration with incident databases |