by Yaron Been
# Luma Photon Flash Image Generator

## Description
Accelerated variant of Photon prioritizing speed while maintaining quality.

## Overview
This n8n workflow integrates with the Replicate API to use the luma/photon-flash model. This powerful AI model can generate high-quality image content based on your inputs.

## Features
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters
- **prompt** (string): Text prompt for image generation

### Optional Parameters
- **seed** (integer, default: None): Random seed. Set for reproducible generation
- **aspect_ratio** (string, default: 16:9): Aspect ratio of the generated image
- **image_reference** (string, default: None): Reference image to guide generation
- **style_reference** (string, default: None): Style reference image to guide generation
- **character_reference** (string, default: None): Character reference image to guide generation
- **image_reference_url** (string, default: None): Deprecated: Use image_reference instead
- **style_reference_url** (string, default: None): Deprecated: Use style_reference instead
- **image_reference_weight** (number, default: 0.85): Weight of the reference image. Larger values will make the reference image have a stronger influence on the generated image
- **style_reference_weight** (number, default: 0.85): Weight of the style reference image
- **character_reference_url** (string, default: None): Deprecated: Use character_reference instead

## How to Use
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

## API Reference
- Model: luma/photon-flash
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
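For reference, this is a minimal sketch of the kind of request the workflow's HTTP Request node sends to Replicate to start a generation. It assumes the model-scoped predictions endpoint and uses placeholder values for the token and inputs; it is not the template's exact node configuration.

```javascript
// Hedged sketch: create a luma/photon-flash prediction via Replicate's API.
// REPLICATE_API_TOKEN and all input values below are placeholders.
const REPLICATE_API_TOKEN = "r8_xxx";

async function createPrediction() {
  const res = await fetch(
    "https://api.replicate.com/v1/models/luma/photon-flash/predictions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${REPLICATE_API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        input: {
          prompt: "A lighthouse on a cliff at golden hour", // required
          aspect_ratio: "16:9",                             // optional
          image_reference_weight: 0.85,                     // optional
        },
      }),
    }
  );
  // The response includes an id and a status that the workflow checks
  // until the generated image is ready.
  return res.json();
}
```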
by Yaron Been
# Fire V Sekai.mediapipe Labeler Image Generator

## Description
Mediapipe Blendshape Labeler - Predicts the blend shapes of an image.

## Overview
This n8n workflow integrates with the Replicate API to use the fire/v-sekai.mediapipe-labeler model. This powerful AI model can generate high-quality image content based on your inputs.

## Features
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters
- **media_path** (string): Input image, video, or training zip file

### Optional Parameters
- **test_mode** (boolean, default: False): Enable test mode for quick verification
- **max_people** (integer, default: 100): Maximum number of people to detect (1-100)
- **export_train** (boolean, default: True): Export training zip containing json annotations and frame pngs
- **aligned_media** (string, default: None): Optional video that is aligned with the input video's annotations
- **frame_sample_rate** (integer, default: 1): Process every nth frame for video input

## How to Use
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

## API Reference
- Model: fire/v-sekai.mediapipe-labeler
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
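The "automated status checking and result retrieval" described above amounts to polling the prediction until it reaches a terminal state. A hedged sketch of that loop, with an arbitrary 3-second interval, might look like this:

```javascript
// Hedged sketch of the status-polling step. predictionId comes from the
// create-prediction response; the 3-second interval is an arbitrary choice.
async function waitForPrediction(predictionId, token) {
  for (;;) {
    const res = await fetch(
      `https://api.replicate.com/v1/predictions/${predictionId}`,
      { headers: { Authorization: `Bearer ${token}` } }
    );
    const prediction = await res.json();

    // Terminal states reported by Replicate.
    if (["succeeded", "failed", "canceled"].includes(prediction.status)) {
      return prediction; // prediction.output holds the result on success
    }
    await new Promise((resolve) => setTimeout(resolve, 3000));
  }
}
```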
by Baptiste Fort
# 🎯 Workflow Goal

Still manually checking form responses in your inbox? What if every submission landed neatly in Airtable — and you got a clean Slack message instantly? That's exactly what this workflow does. No code, no delay — just a smooth automation to keep your team in the loop: Tally → Airtable → Slack.

Build an automated flow that:
- receives Tally form submissions,
- cleans up the data into usable fields,
- stores the results in Airtable, and
- automatically notifies a Slack channel.

## Step 1 – Connect Tally to n8n

**What we're setting up:** a Webhook node in POST mode.

**Technical:**
1. Add a Webhook node.
2. Set it to POST.
3. Copy the generated URL.
4. In Tally → Integrations → Webhooks → paste this URL.
5. Submit a test response on your form to capture a sample structure.

## Step 2 – Clean the data

After connecting Tally, you now receive raw data inside a fields[] array. Let's convert that into something clean and structured.

**Goal:** extract key info like Full Name, Email, Phone, etc. into simple keys.

**What we're doing:** add a Set node to remap and clean the fields (a label-based Code node alternative is sketched at the end of this walkthrough).

**Technical:**
1. Add a Set node right after the Webhook.
2. Add new values (String type) manually:
   - Name: Full Name → Value: `{{ $json["fields"][0]["value"] }}`
   - Name: Email → Value: `{{ $json["fields"][1]["value"] }}`
   - Name: Phone → Value: `{{ $json["fields"][2]["value"] }}`
   (Adapt the indexes based on your form structure.)
3. Use the data preview in the Webhook node to check the correct order.

**Output:** you now get clean data like `{ "Full Name": "Jane Doe", "Email": "jane@example.com", "Phone": "+123456789" }`.

## Step 3 – Send to Airtable ✅

Once the data is cleaned, let's store it in Airtable automatically.

**Goal:** create one new Airtable row for each form submission.

**What we're setting up:** an Airtable – Create Record node.

**Technical:**
1. Add an Airtable node.
2. Authenticate or connect your API token.
3. Choose the base and table.
4. Map the fields:
   - Name: `{{ $json["Full Name"] }}`
   - Email: `{{ $json["Email"] }}`
   - Phone: `{{ $json["Phone"] }}`

**Output:** each submission creates a clean new row in your Airtable table.

## Step 4 – Add a delay ⌛

After saving to Airtable, it's a good idea to insert a short pause — this prevents actions like Slack messages from stacking too fast.

**Goal:** wait a few seconds before sending a Slack notification.

**What we're setting up:** a Wait node. ✅

**Technical:**
1. Add a Wait node.
2. Choose the amount of time to wait (a few seconds is enough).

## Step 5 – Send a message to Slack 💬

Now that the record is stored, let's send a Slack message to notify your team.

**Goal:** automatically alert your team in Slack when someone fills the form.

**What we're setting up:** a Slack – Send Message node.

**Technical:**
1. Add a Slack node.
2. Connect your account.
3. Choose the target channel, like #leads.
4. Use this message format:
   New lead received!
   Name: `{{ $json["Full Name"] }}`
   Email: `{{ $json["Email"] }}`
   Phone: `{{ $json["Phone"] }}`

**Output:** your Slack team is notified instantly, with all lead info in one clean message.

## Workflow Complete

Your automation now looks like this: Tally → Clean → Airtable → Wait → Slack. Every submission turns into clean data, gets saved in Airtable, and alerts your team on Slack — fully automated, no extra work.
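If your form changes often, the index-based Set node from Step 2 can be swapped for a Code node that looks fields up by label instead of position. This is a sketch under assumptions, not part of the original template: the labels "Full Name", "Email" and "Phone" must match the labels in your own Tally form, and the exact shape of the webhook payload should be checked in the Webhook node's data preview.

```javascript
// Hypothetical n8n Code node (Run Once for All Items) that remaps Tally's
// fields[] array by label instead of by index. The labels are assumptions;
// change them to match your form.
const results = [];

for (const item of $input.all()) {
  const fields = item.json.fields || [];
  const byLabel = (label) => {
    const field = fields.find((f) => f.label === label);
    return field ? field.value : null;
  };

  results.push({
    json: {
      "Full Name": byLabel("Full Name"),
      Email: byLabel("Email"),
      Phone: byLabel("Phone"),
    },
  });
}

return results;
```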
by Paul Taylor
# 📩 Gmail → GPT → Supabase | Task Extractor

This n8n workflow automates the extraction of actionable tasks from unread Gmail messages using OpenAI's GPT API, stores the resulting task metadata in Supabase, and avoids re-processing previously handled emails.

## ✅ What It Does
1. Triggers on a schedule to check for unread emails in your Gmail inbox.
2. Loops through each email individually using SplitInBatches.
3. Checks Supabase to see if the email has already been processed.
4. If it's a new email:
   - Formats the email content into a structured GPT prompt
   - Calls ChatGPT-4o to extract structured task data
   - Inserts the result into your emails table in Supabase

## 🧰 Prerequisites
Before using this workflow, you must have:
- An active n8n Cloud or self-hosted instance
- A connected Gmail account with OAuth credentials in n8n
- A Supabase project with an emails table and: `ALTER TABLE emails ADD CONSTRAINT unique_email_id UNIQUE (email_id);`
- An OpenAI API key with access to GPT-4o or GPT-3.5-turbo

## 🔐 Required Credentials

| Name | Type | Description |
|-----------------|------------|-----------------------------------|
| Gmail OAuth | Gmail | To pull unread messages |
| OpenAI API Key | OpenAI | To generate task summaries |
| Supabase API | HTTP | For inserting rows via REST API |

## 🔁 Environment Variables or Replacements
- Supabase_TaskManagement_URI → e.g., https://your-project.supabase.co
- Supabase_TaskManagement_ANON_KEY → Your Supabase anon key

These are used in the HTTP request to Supabase.

## ⏰ Scheduling / Trigger
- Triggered using a Schedule node
- Default: every X minutes (adjust to your preference)
- Uses a Gmail API filter: unread emails with label = INBOX

## 🧠 Intended Use Case
> Designed for productivity-minded professionals who want to extract, summarize, and store actionable tasks from incoming email — without processing the same email twice or wasting GPT API credits.

This is part of a larger system integrating GPT, calendar scheduling, and optional task platforms (like ClickUp).

## 📦 Output (Stored in Supabase)
Each processed email includes:
- email_id
- subject
- sender
- received_at
- body (email snippet)
- gpt_summary (structured task)
- requires_deep_work (from GPT logic)
- deleted (initially false)
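For orientation, here is a rough sketch of the Supabase REST call the HTTP Request node performs when inserting a processed email. The URL, key, and upsert behaviour are assumptions based on Supabase's standard REST interface and the unique_email_id constraint above; substitute your Supabase_TaskManagement_URI and _ANON_KEY values.

```javascript
// Hedged sketch of the Supabase insert. SUPABASE_URL and SUPABASE_ANON_KEY
// stand in for Supabase_TaskManagement_URI / _ANON_KEY.
const SUPABASE_URL = "https://your-project.supabase.co";
const SUPABASE_ANON_KEY = "your-anon-key";

async function insertEmailRow(row) {
  const res = await fetch(
    `${SUPABASE_URL}/rest/v1/emails?on_conflict=email_id`,
    {
      method: "POST",
      headers: {
        apikey: SUPABASE_ANON_KEY,
        Authorization: `Bearer ${SUPABASE_ANON_KEY}`,
        "Content-Type": "application/json",
        // With the unique_email_id constraint, this makes the insert an
        // upsert so a re-processed email does not create a duplicate row.
        Prefer: "resolution=merge-duplicates",
      },
      body: JSON.stringify(row),
    }
  );
  if (!res.ok) throw new Error(`Supabase insert failed: ${res.status}`);
}

// Example row matching the columns listed under "Output":
// insertEmailRow({ email_id: "abc123", subject: "...", sender: "...",
//   received_at: "...", body: "...", gpt_summary: "...",
//   requires_deep_work: false, deleted: false });
```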
by Daniel Ng
# Restore All n8n Workflows from Google Drive Backups

Restoring multiple n8n workflows manually, especially when migrating your n8n instance to another host or server, can be an incredibly daunting and time-consuming task. Imagine having to individually export and then manually import hundreds of workflows; it's a recipe for errors and significant downtime.

This workflow provides a streamlined way to restore all your n8n workflows from backup JSON files stored in a designated Google Drive folder. It's an essential tool for disaster recovery, migrating workflows to a new n8n instance, or recovering from accidental deletions, ideally used in conjunction with a backup solution like our "Auto Backup Workflows To Google Drive" template.

For more powerful n8n templates, visit our website or contact us at AI Automation Pro. We help your business build custom AI workflow automation and apps.

## Who is this for?
This template is intended for:
- **n8n Users and Administrators:** Who have previously backed up their n8n workflows as JSON files to Google Drive.
- **Anyone needing to recover their n8n setup:** Whether due to system failure, data corruption, accidental deletions, or during an instance migration.

## What problem is this workflow solving? / Use case
Restoring multiple n8n workflows manually can be a slow and error-prone process. This workflow solves that by:
- **Automating Bulk Restore:** Quickly re-imports all workflows from a specified Google Drive backup folder, drastically cutting down on manual effort.
- **Disaster Recovery:** Enables rapid recovery of your automation environment, minimizing downtime after a system failure or data corruption.
- **Simplified Instance Migration:** Makes the process of transferring your entire workflow suite to a new n8n server significantly more manageable and less error-prone compared to manual imports.
- **Data Integrity:** Helps restore workflows to a known good state from your backups, ensuring consistency after a recovery or migration.

## What this workflow does
1. **Manual Trigger:** You initiate the workflow manually whenever a restore operation is needed.
2. **List Backup Files:** The workflow accesses a specific Google Drive folder (which you must configure) and lists all the files within it. It assumes these are your n8n workflow JSON backup files.
3. **Iterate and Process:** It then loops through each file found in the Google Drive folder:
   - **Download Workflow:** Downloads the individual workflow JSON file from Google Drive.
   - **Extract Content:** Parses the downloaded file to extract the JSON data representing the workflow.
   - **Import to n8n:** Uses the n8n API to create a new workflow (or update an existing one if an ID match is found) in your current n8n instance using the extracted JSON data.
4. **Wait Step:** Pauses for 3 seconds after attempting to create each workflow to help manage system load and avoid potential API rate-limiting issues.

## Step-by-step setup
1. **Import Template:** Upload the provided JSON file into your n8n instance.
2. **Configure Credentials:**
   - Google Drive nodes: You will need to create or select existing Google Drive OAuth2 API credentials for these nodes.
   - n8n node: Configure your n8n API credentials to allow the workflow to create/update workflows in your instance.
3. **Specify Google Drive Backup Folder (CRITICAL):**
   - Open the "Google Drive Get All Workflows" node.
   - Locate the "Filter" section, and within it, the "Folder ID" parameter.
   - The default value is a placeholder URL. You MUST change this URL to the direct URL of the Google Drive folder that contains your n8n workflow .json backup files. This would typically be one of the hourly folders (e.g., n8n_backup_YYYY-MM-DD_HH) created by the companion backup workflow.
4. **Activate Workflow:** Although manually triggered, the workflow needs to be active in your n8n instance to be runnable.

## How to customize this workflow to your needs
- **Selective Restore:**
  - Option 1 (Manual): Before running the workflow, manually move only the specific workflow JSON files you want to restore into the source Google Drive folder configured in the "Google Drive Get All Workflows" node.
  - Option 2 (Automated Filter): Insert an "Edit Fields" or "Filter" node after the "Google Drive Get All Workflows" node to programmatically select which files (e.g., based on filename patterns) should proceed to the "Loop Over Items" node for restoration.
- **Adjust Wait Time:** The "Wait" node is set to 3 seconds. You can increase this if you have a very large number of workflows or if your n8n instance requires more time between API calls. Conversely, for smaller batches on powerful instances, you might decrease it.
- **Error Handling:** For enhanced robustness, consider adding error handling branches (e.g., using "Error Trigger" nodes or "Continue on Fail" settings within nodes) to log or send notifications if a specific workflow fails to import.

## Important Considerations
- **Workflow Overwriting/Updating:** If a workflow with the same id as one in a backup JSON file already exists in your n8n instance, this restore process will typically update/overwrite that existing workflow with the version from the backup. If the id from the backup file does not correspond to any existing workflow, a new workflow will be created.
- **Idempotency:** Running this workflow multiple times on the exact same backup folder will cause the workflow to re-process all files. This means workflows will be updated/overwritten again if they exist, or created if they don't. Ensure this is the intended behavior.
- **Companion Backup Workflow:** This restore workflow is ideally paired with backups created by a process like our "Auto Backup Workflows To Google Drive" template, which saves workflows in the expected JSON format.
- **Test Safely:** It's highly recommended to test this workflow on a non-production or development n8n instance first, especially when restoring a large number of critical workflows or if you're unsure about the overwrite behavior in your specific n8n setup.
- **Source Folder Content:** Ensure the specified Google Drive folder only contains n8n workflow JSON files that you intend to restore. Other file types may cause errors in the "Extract from File" node.
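The "Import to n8n" step uses the n8n API node, but the equivalent call can be pictured as a plain HTTP request against the instance's public REST API. A hedged sketch, assuming an API key and base URL for your instance; the exact fields accepted by your n8n version may differ:

```javascript
// Rough sketch of importing one workflow via n8n's public REST API.
// N8N_BASE_URL and N8N_API_KEY are placeholders for your own instance.
const N8N_BASE_URL = "http://localhost:5678";
const N8N_API_KEY = "your-n8n-api-key";

async function importWorkflow(workflowJson) {
  // POST creates a new workflow; an update of an existing workflow would
  // instead target /api/v1/workflows/{id}.
  const res = await fetch(`${N8N_BASE_URL}/api/v1/workflows`, {
    method: "POST",
    headers: {
      "X-N8N-API-KEY": N8N_API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: workflowJson.name,
      nodes: workflowJson.nodes,
      connections: workflowJson.connections,
      settings: workflowJson.settings || {},
    }),
  });
  if (!res.ok) throw new Error(`Import failed: ${res.status}`);
  return res.json();
}
```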
by Fenngbrotalk
# n8n Workflow: AI-Powered Stock Chart Analysis Bot for Telegram

This is a powerful n8n automation workflow that integrates a Telegram bot with OpenAI's multimodal large language model (GPT-4 Vision) to provide users with real-time stock chart analysis.

## Workflow Breakdown
- **Receive Image:** The workflow is initiated by a Telegram Trigger. It activates whenever a user sends an image (e.g., a stock's candlestick chart) to a designated Telegram chat, automatically downloading the file.
- **Image Pre-processing:** To optimize the AI's performance and efficiency, the Edit Image node resizes the incoming image to a standard 512x512 pixel format.
- **AI Vision Analysis:** The processed image is then passed to a LangChain Chain, which utilizes the OpenAI GPT-4 Vision model. A sophisticated system prompt instructs the AI to act as a professional stock analyst.
- **Intelligent Interpretation:** The AI analyzes the image to identify the stock's name, price trend (uptrend, downtrend, or sideways), key support/resistance levels, and volume changes. It then generates a comprehensive analysis report combining technical indicators and market sentiment.
- **Structured Output:** To ensure reliability and consistency, the AI's output is parsed into a specific JSON format. This structure includes a search_word (for the industry/sector) and the main content (the analysis text).
- **Send Response:** Finally, the workflow extracts the content field from the JSON output and uses the Telegram node to send this professional analysis back to the user as a text message in the same chat.

## Key Features
- **User-Friendly:** Users simply send an image to get an analysis, requiring no complex commands.
- **Instant & Efficient:** The entire analysis and response process is fully automated and completed within moments.
- **Professional-Grade Analysis:** Leverages the advanced image recognition and reasoning capabilities of GPT-4 Vision to deliver insights comparable to those of a human analyst.
- **Reliable & Consistent:** The use of structured output ensures that the format of the response is always consistent and easy to read or process further.
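To illustrate the structured output step, the parsed AI response might look like the object below. Only the search_word and content field names come from the template; the values are invented for illustration.

```javascript
// Hypothetical example of the structured output described above; the values
// are made up, only the field names come from the template.
const aiOutput = {
  search_word: "semiconductors",
  content:
    "Uptrend with higher lows; support near 152, resistance near 168; volume expanding on up days.",
};

// The Telegram node sends aiOutput.content back to the user, while
// search_word remains available for routing or enrichment.
const replyText = aiOutput.content;
console.log(replyText);
```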
by isa024787bel
This n8n workflow automates sending out SMS notifications via Vonage with new tech-related vocabulary every day. To build this handy vocabulary improver, you'll need the following:
- n8n – You can find details on how to install n8n on the Quickstart page.
- LingvaNex account – You can create a free account here. Up to 200,000 characters are included in the free plan when you generate your API key.
- Airtable account – You can register for free.
- Vonage account – You can sign up free of charge if you haven't already.
by Mohan Gopal
# 🏖️ AI-Based Tour Itineraries via Email Using OpenAI & Pinecone Vector Search

## Overview
This workflow automates the process of handling new tour package requests received via email, analyzes the request, and provides personalized tour package recommendations using AI and a vector database. It's designed to streamline customer interactions and deliver quick, relevant responses.

## Precondition
- Create an embedded tour package database (refer to the link below): Pinecone Database setup.
- Register and create API keys for OpenAI and the Pinecone database.
- Copy mail credentials so n8n can access the email inbox.

The companion ingestion step automates extracting tour information from PDF files stored in a Google Drive folder, processes and vectorizes the extracted data, and stores it in a Pinecone vector database for efficient querying. This is especially useful for building AI-powered search or recommendation systems for travel packages.

## 🛠️ Tools & Nodes Used
- **Email Trigger (IMAP):** Monitors the inbox for new tour package requests.
- **Text Classifier:** Categorizes incoming emails (e.g., New Request, Follow-up, Other).
- **Code Node:** Extracts and structures relevant data from the email (subject, sender, content, etc.).
- **Tour Recommendation AI Agent:** An AI agent that interprets the request and formulates a prompt for package recommendations.
- **OpenAI & OpenRouter Chat Models:** Used for natural language understanding and generating responses.
- **Simple Memory:** Maintains context for ongoing conversations.
- **Pinecone Vector Store:** Stores and retrieves tour packages using semantic search.
- **Embeddings (OpenAI):** Converts text data into vector embeddings for similarity search.
- **Answer Questions with a Vector Store:** Retrieves the most relevant packages from Pinecone.
- **Send Email:** Sends the AI-generated recommendations back to the customer.

## 🔄 Process & Flow
1. **Email Reception:** The workflow starts with the Email Trigger (IMAP) node, which listens for new emails in the inbox.
2. **Classification:** The Text Classifier node determines if the email is a new tour package request.
3. **Data Extraction:** The Code node parses the email, extracting key details like sender, subject, and content (a sketch of this step follows below).
4. **AI Agent Processing:** The Tour Recommendation AI Agent receives the structured request and crafts a prompt for package recommendations.
5. **Vector Search:** The agent queries the Pinecone Vector Store, which holds previously created tour packages, using OpenAI embeddings for semantic matching.
6. **Recommendation Generation:** The AI agent selects the top 3 most relevant packages and generates a friendly, personalized response.
7. **Response Delivery:** The Send Email node sends the recommendations back to the customer.

## 🚀 Recommendations & Improvements for Next Version
- **Error Handling:** Add error handling nodes to manage failed email parsing or AI response issues.
- **Logging & Analytics:** Integrate logging to track requests, recommendations, and customer responses for continuous improvement.
- **Personalization:** Enhance the AI agent to consider customer history or preferences for even more tailored recommendations.
- **Multi-language Support:** Add language detection and translation for international customers.
- **Feedback Loop:** Include a mechanism for customers to rate recommendations, feeding this data back into the system for improved future suggestions.
- **Attachment Handling:** Enable the workflow to process attachments (e.g., customer itineraries or preferences).
- **Scalability:** Consider batching or queueing requests if email volume increases.
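As a reference for the data extraction step, a Code node along these lines could pull the key fields out of the email trigger's output. The field names (from, subject, textPlain, textHtml, date) are assumptions about the email node's output format; check the node preview and adjust accordingly.

```javascript
// Hypothetical Code node for the "Data Extraction" step. Field names on the
// incoming email item are assumptions; adjust them to what your email
// trigger actually outputs.
const results = [];

for (const item of $input.all()) {
  const email = item.json;
  results.push({
    json: {
      sender: email.from,
      subject: email.subject,
      content: (email.textPlain || email.textHtml || "").trim(),
      receivedAt: email.date,
    },
  });
}

return results;
```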
## 💡 Conclusion
This workflow demonstrates how n8n, combined with AI and vector databases, can automate and personalize customer service in the travel industry. With a few enhancements, it can become even more robust and customer-centric!
by scrapeless official
> ⚠️ Disclaimer: This workflow uses Scrapeless and Claude AI via community nodes, which require a self-hosted n8n instance to work properly.

## 🔁 How It Works
This intelligent B2B lead generation workflow combines search automation, website crawling, AI analysis, and multi-channel output:
- It starts by using Scrapeless's Deep SERP API to find company websites from targeted Google Search queries.
- Each result is then individually crawled using Scrapeless's Crawler module, retrieving key business information from pages like /about, /contact, /services.
- The raw web content is processed via a Code node to clean, extract, and prepare structured data.
- The cleaned data is passed to Claude Sonnet (Anthropic), which analyzes and qualifies the lead based on content richness, contact data, and relevance.
- A filter step ensures only high-quality leads (e.g., lead score ≥ 6) are kept.
- Qualified leads are sent via a Discord webhook for real-time notification (this can be replaced with Slack, email, or CRM tools).

> 📌 The result is a fully automated system that finds, qualifies, and organizes B2B leads with high efficiency and minimal manual input.

## ✅ Pre-Conditions
Before using this workflow, make sure you have:
- An n8n self-hosted instance
- A Scrapeless account and API key (get it here)
- An Anthropic Claude API key
- A configured Discord webhook URL (or alternative notification service)

## ⚙️ Workflow Overview
Manual Trigger → Scrapeless Google Search → Item Lists → Scrapeless Crawler → Code (Data Cleaning) → Claude Sonnet → Code (Response Parser) → Filter → Discord Notification

## 🔨 Step-by-Step Breakdown
1. **Manual Trigger** – For testing purposes (can be replaced with Cron or Webhook)
2. **Scrapeless Google Search** – Queries target B2B topics via Scrapeless's Deep SERP API
3. **Item Lists** – Splits search results into individual items
4. **Scrapeless Crawler** – Visits each company domain and scrapes structured content
5. **Code Node (Data Cleaner)** – Extracts and formats content for LLM input
6. **Claude Sonnet (via HTTP Request)** – Evaluates lead quality, relevance, and contact info
7. **Code Node (Parser)** – Parses Claude's JSON response (a sketch of this step appears at the end of this section)
8. **IF Filter** – Filters leads based on score threshold
9. **Discord Webhook** – Sends formatted message with company info

## 🧩 Customization Guidance
You can easily adjust the workflow to match your needs:
- **Lead Criteria**: Modify the Claude prompt and scoring logic in the Code node
- **Output Channels**: Replace the Discord webhook with Slack, Email, Airtable, or any CRM node
- **Search Topics**: Change your query in the Scrapeless SERP node to find leads in different niches or countries
- **Scoring Threshold**: Adjust the filter logic (lead_score >= 6) to match your quality tolerance

## 🧪 How to Use
1. Insert your Scrapeless and Claude API credentials in the designated nodes
2. Replace or configure the Discord webhook (or alternative outputs)
3. Run the workflow manually (or schedule it)
4. View qualified leads directly in your chosen notification channel

## 📦 Output Example
Each qualified lead includes:
- 🏢 Company Name
- 🌐 Website
- ✉️ Email(s)
- 📞 Phone(s)
- 📍 Location
- 📈 Lead Score
- 📝 Summary of relevant content

## 👥 Ideal Users
This workflow is perfect for:
- **AI SaaS companies** targeting mid-market & enterprise leads
- **Marketing agencies** looking for B2B-qualified leads
- **Automation consultants** building scraping solutions
- **No-code developers** working with n8n, Make, Pipedream
- **Sales teams** needing enriched prospecting data
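As a reference for step 7, the parser Code node could look roughly like the sketch below. The shape of Claude's HTTP response is an assumption based on Anthropic's Messages API, and everything except the lead_score field (which the filter in step 8 relies on) is illustrative; adapt it to the prompt and response format you actually use.

```javascript
// Hypothetical "Code Node (Parser)": extract the JSON lead object from
// Claude's text response. The response structure is an assumption; only
// lead_score is referenced by the template's filter.
const raw = ($json.content && $json.content[0] && $json.content[0].text) || "";

let lead = {};
try {
  // Claude may wrap the JSON in prose or code fences, so grab the first
  // {...} block before parsing.
  const match = raw.match(/\{[\s\S]*\}/);
  lead = match ? JSON.parse(match[0]) : {};
} catch (error) {
  lead = { lead_score: 0, parse_error: error.message };
}

return [{ json: lead }];
```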
by Yaron Been
# Prunaai Flux.1 Dev Image Generator

## Description
This is the fastest Flux Dev endpoint in the world, contact us for more at pruna.ai.

## Overview
This n8n workflow integrates with the Replicate API to use the prunaai/flux.1-dev model. This powerful AI model can generate high-quality image content based on your inputs.

## Features
- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters
- **prompt** (string): Prompt

### Optional Parameters
- **seed** (integer, default: -1): Seed
- **guidance** (number, default: 3.5): Guidance scale
- **image_size** (integer, default: 1024): Base image size (longest side)
- **speed_mode** (string, default: Juiced 🔥 (default)): Speed optimization level
- **aspect_ratio** (string, default: 1:1): Aspect ratio of the output image
- **output_format** (string, default: jpg): Output format
- **output_quality** (integer, default: 80): Output quality (for jpg and webp)
- **num_inference_steps** (integer, default: 28): Number of inference steps

## How to Use
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

## API Reference
- Model: prunaai/flux.1-dev
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
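Once a prediction succeeds, its output field typically contains the URL (or list of URLs) of the generated image. A hedged sketch of the retrieval step, which the workflow's final node effectively performs:

```javascript
// Hedged sketch of retrieving the generated image once a prediction has
// succeeded. Whether output is a single URL or an array depends on the model.
async function downloadOutput(prediction) {
  const url = Array.isArray(prediction.output)
    ? prediction.output[0]
    : prediction.output;

  const res = await fetch(url);
  const imageBytes = Buffer.from(await res.arrayBuffer());
  return imageBytes; // e.g. save to disk or attach as n8n binary data
}
```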
by Nícolas Pastorello
## What is this?
This is an n8n workflow designed to supercharge your Sonarr setup. Instead of just waiting for releases to appear in your RSS feed, this workflow proactively runs on a schedule, finds what's missing, actively searches for it, and grabs the best result based on your specific criteria. It's a "set it and forget it" solution to ensure your library is always complete.

## Key Features
- 🚀 **Proactive Searching:** Doesn't wait for content to come to you. It actively triggers a search for missing episodes.
- 🗓️ **Fully Automated & Scheduled:** Runs every 12 hours by default to check for anything new that's missing.
- 🧠 **Smart & Efficient:** Searches only once per season, even if multiple episodes from that season are missing, preventing unnecessary API calls.
- 🎯 **Precise Release Filtering:** It validates search results against the exact quality name and language you define before telling Sonarr to grab it. This gives you more control than standard quality profiles.
- ✅ **Automatic Download:** Once a valid release is found, it's automatically pushed to your download client via Sonarr.

## How It Works
1. **Trigger:** The workflow starts automatically on a schedule.
2. **Fetch Missing:** It connects to your Sonarr instance and gets a list of all monitored, "wanted" episodes.
3. **Filter & Group:** It intelligently creates a unique list of seasons that need searching.
4. **Search:** It loops through each unique season and tells Sonarr to perform an interactive search.
5. **Validate:** It inspects the search results and only allows releases that match both the pre-defined quality AND language.
6. **Grab:** If a perfect match is found, it sends a final command to Sonarr to grab that specific release and begin the download.

## How to Use This Template
1. Import the JSON file into your n8n instance.
2. Find the node named "info" (it's a "Set" node near the beginning). This is your main configuration area.
3. Update the following values in the "info" node:
   - urlSonar: Change http://192.168.31.204:8989 to your Sonarr's URL.
   - apikey: Paste your Sonarr API key here.
   - quality: Set the exact quality name you want to match (e.g., WEBDL-1080p).
   - languages: Set the exact language name you want to match (e.g., English, Spanish).
4. Activate the workflow.

That's it! You can also change the schedule by editing the "Schedule Trigger" node.
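For context, the sequence of Sonarr calls the workflow chains together can be sketched as below. The endpoint paths, query parameters, and response fields are assumptions based on Sonarr's v3 API and should be verified against your own instance; the series ID, season number, quality, and language values are placeholders taken from the configuration above.

```javascript
// Hedged sketch of the Sonarr calls described above. Paths, parameters and
// response fields are assumptions; verify against {urlSonar}/api/v3.
const urlSonar = "http://192.168.31.204:8989";
const apikey = "your-sonarr-api-key";
const headers = { "X-Api-Key": apikey, "Content-Type": "application/json" };

async function searchAndGrab() {
  // 1. Fetch monitored, missing ("wanted") episodes.
  const missing = await (
    await fetch(`${urlSonar}/api/v3/wanted/missing?pageSize=200`, { headers })
  ).json();
  console.log(`Missing episodes: ${missing.totalRecords}`);

  // 2. Interactive search for one series/season (placeholder IDs).
  const releases = await (
    await fetch(`${urlSonar}/api/v3/release?seriesId=123&seasonNumber=1`, {
      headers,
    })
  ).json();

  // 3. Keep only releases matching the configured quality and language,
  //    then tell Sonarr to grab the first match.
  const wanted = releases.filter(
    (r) =>
      r.quality?.quality?.name === "WEBDL-1080p" &&
      (r.languages || []).some((l) => l.name === "English")
  );
  if (wanted.length > 0) {
    await fetch(`${urlSonar}/api/v3/release`, {
      method: "POST",
      headers,
      body: JSON.stringify({
        guid: wanted[0].guid,
        indexerId: wanted[0].indexerId,
      }),
    });
  }
}
```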
by Tom
This is a workflow that might come in handy after using loops, which usually leave you with items spread across different "runs". The Code node in this example workflow merges them into a single run, so you have one list of items, which is often easier to work with. Simply adjust the node name inside the Code node as needed; a sketch of such a node is shown below. The idea is based on this older workflow template.
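One possible implementation, assuming a "Run Once for All Items" Code node and an upstream node called "Loop Over Items" (rename it to match your workflow):

```javascript
// Hedged sketch: collect the items from every run of an upstream node into
// one flat list. "Loop Over Items" is a placeholder node name; change it to
// the node whose runs you want to merge.
const allItems = [];
let runIndex = 0;

while (true) {
  try {
    allItems.push(...$("Loop Over Items").all(0, runIndex));
    runIndex++;
  } catch (error) {
    // No more runs to read; stop collecting.
    break;
  }
}

return allItems;
```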