by Yang
## What this workflow does
This workflow automatically turns new technical video uploads into short, engaging Facebook post drafts—complete with a suggested image—and saves the results to Google Sheets for quick review or publishing. It’s designed to help you repurpose tutorial or demo videos into ready-to-use social content without any manual writing or design effort.

## What problem is this workflow solving?
Manually writing Facebook posts for every new tutorial or product video takes time, especially when you want them to be engaging and consistent. This workflow solves that by using AI to watch for new videos, extract meaningful insights, and write posts and create visuals automatically—saving hours of work.

## Who is this for?
This workflow is ideal for:
- Content creators uploading tutorial videos
- Marketing teams working with how-to or product videos
- Agencies and automation pros building scalable social workflows for clients

## How it works
1. **Trigger:** Starts when a new video is uploaded to a specific Google Drive folder.
2. **Download & Convert:** Downloads the video and converts it to base64.
3. **Extract Insights:** Dumpling AI analyzes the video and extracts structured insights such as topic, tools mentioned, and key steps.
4. **Generate Post:** GPT-4o creates a short, friendly Facebook post using those insights, along with an image prompt (see the sketch after this description).
5. **Create Visual:** Dumpling AI generates an image using the prompt.
6. **Save to Sheet:** The Facebook post and image URL are saved to a Google Sheet.

## Setup
1. Create a Google Sheet to store the posts and images.
2. Connect your Google Drive, Google Sheets, Dumpling AI, and OpenAI credentials in n8n.
3. Update the workflow with:
   - Your Google Drive folder ID
   - Your target Google Sheet ID
4. (Optional) Edit the prompt used in the GPT node if you want a different tone, style, or structure for the post.

## How to customize the workflow
- **Change the platform**: Replace “Facebook” in the prompt with LinkedIn, Instagram, or another platform.
- **Use a different image tool**: You can swap Dumpling AI for any other image generation API (e.g. DALL·E, Midjourney via webhook).
- **Add auto-publishing**: Add a Facebook or social media module to publish the generated post directly instead of just saving to Google Sheets.
- **Tag videos by content type**: Use AI to classify videos into categories and store them in separate tabs or sheets.
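For reference, here is a minimal TypeScript sketch of what the "Generate Post" step boils down to, assuming a plain OpenAI chat completions call. The `VideoInsights` shape and the JSON keys (`post`, `imagePrompt`) are illustrative assumptions; the actual template uses the n8n OpenAI node with its own prompt.

```typescript
// Hypothetical shape of the insights Dumpling AI returns (assumption).
interface VideoInsights {
  topic: string;
  toolsMentioned: string[];
  keySteps: string[];
}

// Ask GPT-4o for a short Facebook post plus an image prompt, returned as JSON.
async function draftFacebookPost(insights: VideoInsights, openAiKey: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${openAiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o",
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content:
            "Write a short, friendly Facebook post from the video insights. " +
            'Respond as JSON: {"post": string, "imagePrompt": string}.',
        },
        { role: "user", content: JSON.stringify(insights) },
      ],
    }),
  });
  const data = await res.json();
  // The post text and image prompt feed the image-generation and Sheets steps.
  return JSON.parse(data.choices[0].message.content) as {
    post: string;
    imagePrompt: string;
  };
}
```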
by LukaszB
This workflow is designed for freelancers, solopreneurs, and business owners who receive a high volume of irrelevant messages in their Gmail inbox — from cold offers to spammy promotions — and want to automatically filter and delete them using AI. Its main purpose is to scan new emails with the help of OpenAI, classify their content, and automatically delete those considered marketing (OFFER) or junk (SPAM). The result is a cleaner inbox without the need to manually sift through low-value messages.

The classification logic uses a detailed system prompt with practical examples, so even complex or borderline messages are categorized accurately. Important emails — such as payment confirmations, shipping updates, or genuine business inquiries — remain untouched. This helps maintain a professional inbox with only valuable and relevant communication. The entire process runs automatically in the background and can be customized further — for example, to archive instead of delete, or log deleted emails for review.

## How it works
1. When triggered (every hour), the workflow fetches new Gmail messages using the Gmail Trigger node.
2. Each message is passed to an AI classifier powered by OpenAI, which reads the message body (email snippet) and returns one of three labels:
   - **SPAM**: Obvious junk messages, scams, or low-effort bulk messages
   - **OFFER**: Cold outreach, discount promotions, cart reminders, or generic advertising
   - **IMPORTANT**: Valuable information for the user, even if commercial (e.g., invoices, order updates, personal inquiries)
3. The workflow then routes the result through an IF node. If the message is marked as SPAM or OFFER, it is immediately deleted from Gmail via the Gmail Delete node. Emails marked as IMPORTANT are ignored and remain in the inbox.

The classification is entirely AI-driven based on message content — sender address, headers, or metadata are not used. A minimal sketch of the classify-and-route logic follows this description.

## How to set up
To get started, simply connect two credentials:
- A Gmail account using OAuth2 (via the Gmail Trigger and Gmail Delete nodes)
- An OpenAI API key (used by the AI classifier node)

No advanced setup is needed beyond these two connections. Optionally, you can review or modify the system prompt used for classification — it’s available inside the workflow’s LangChain AI Agent node. The prompt is in English, so it’s recommended to use this workflow with English-language emails for best results.

By default, the workflow deletes matching emails immediately. If you prefer safer testing, you can modify the Gmail node to archive, label, or log emails instead of deleting them. The full workflow takes around 5–10 minutes to configure and includes a sticky note with additional instructions and warnings.
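Here is the classify-and-route logic as a minimal TypeScript sketch, assuming a plain OpenAI chat call. The template itself uses a LangChain AI Agent node with a much more detailed system prompt, and the model name below is an illustrative choice rather than the template's setting.

```typescript
type Label = "SPAM" | "OFFER" | "IMPORTANT";

// Classify an email snippet into SPAM / OFFER / IMPORTANT (sketch only).
async function classifySnippet(snippet: string, openAiKey: string): Promise<Label> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${openAiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model choice
      messages: [
        {
          role: "system",
          content:
            "Classify the email as exactly one word: SPAM, OFFER, or IMPORTANT. " +
            "Invoices, order updates, and personal inquiries are IMPORTANT.",
        },
        { role: "user", content: snippet },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim() as Label;
}

// Routing decision: only SPAM and OFFER are deleted; IMPORTANT stays in the inbox.
function shouldDelete(label: Label): boolean {
  return label === "SPAM" || label === "OFFER";
}
```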
by Gerald Denor
# AI-Powered Proposal Generator - Sales Automation Workflow

## Overview
This n8n workflow automates the entire proposal generation process using AI, transforming client requirements into professional, customized proposals delivered via email in seconds.

## Use Case
Perfect for agencies, consultants, and sales teams who need to generate high-quality proposals quickly. Instead of spending hours writing proposals manually, this workflow captures client information through a web form and uses GPT-4 to generate contextually relevant, professional proposals.

## How It Works
1. **Form Trigger** - Captures client information through a customizable web form
2. **OpenAI Integration** - Processes form data and generates structured proposal content
3. **Google Drive** - Creates a copy of your proposal template
4. **Google Slides** - Populates the template with AI-generated content (see the sketch after this description)
5. **Gmail** - Automatically sends the completed proposal to the client

## Key Features
- **AI Content Generation**: Uses GPT-4 to create personalized proposal content
- **Professional Templates**: Integrates with Google Slides for polished presentations
- **Automated Delivery**: Sends proposals directly to clients via email
- **Form Integration**: Captures all necessary client data through web forms
- **Customizable Output**: Generates structured proposals with multiple sections

## Template Sections Generated
- Proposal title and description
- Problem summary analysis
- Three-part solution breakdown
- Project scope details
- Milestone timeline with dates
- Cost integration

## Requirements
- **n8n instance** (cloud or self-hosted)
- **OpenAI API key** for content generation
- **Google Workspace account** for Slides and Gmail
- **Basic n8n knowledge** for setup and customization

## Setup Complexity
Intermediate - Requires API credentials setup and basic workflow customization

## Benefits
- **Time Savings**: Reduces proposal creation from hours to minutes
- **Consistency**: Ensures all proposals follow the same professional structure
- **Personalization**: AI analyzes client needs for relevant content
- **Automation**: Eliminates manual copy-paste and formatting work
- **Scalability**: Handle multiple proposal requests simultaneously

## Customization Options
- Modify AI prompts for different industries or services
- Customize Google Slides template design
- Adjust form fields for specific information needs
- Personalize email templates and signatures
- Configure milestone templates for different project types

## Error Handling
Includes basic error handling for API failures and form validation to ensure reliable operation.

## Security Notes
All credentials have been removed from this template. Users must configure their own:
- OpenAI API credentials
- Google OAuth2 connections for Slides, Drive, and Gmail
- Form webhook configuration

This workflow demonstrates practical AI integration in business processes and showcases n8n's capabilities for complex automation scenarios.
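To make the Slides step concrete, here is a minimal TypeScript sketch of merging AI-generated fields into a copied template with the Slides API's replaceAllText request. The `Proposal` fields and the `{{placeholder}}` tags are assumptions for illustration; the template itself uses the built-in Google Slides node.

```typescript
// Assumed shape of the AI-generated proposal content (field names are illustrative).
interface Proposal {
  title: string;
  problemSummary: string;
  solutionParts: [string, string, string];
  scope: string;
  milestones: string;
  cost: string;
}

// Replace {{placeholder}} tags in a copied Slides template with the AI content.
// The tags must match whatever placeholders you put in your own template.
async function fillTemplate(presentationId: string, p: Proposal, accessToken: string) {
  const replacements: Record<string, string> = {
    "{{title}}": p.title,
    "{{problem_summary}}": p.problemSummary,
    "{{solution_1}}": p.solutionParts[0],
    "{{solution_2}}": p.solutionParts[1],
    "{{solution_3}}": p.solutionParts[2],
    "{{scope}}": p.scope,
    "{{milestones}}": p.milestones,
    "{{cost}}": p.cost,
  };
  const requests = Object.entries(replacements).map(([tag, value]) => ({
    replaceAllText: { containsText: { text: tag, matchCase: true }, replaceText: value },
  }));
  await fetch(
    `https://slides.googleapis.com/v1/presentations/${presentationId}:batchUpdate`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ requests }),
    }
  );
}
```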
by Obsidi8n
I am submitting this workflow for the Obsidian community as a simple but compelling demonstration of what becomes possible when you integrate Obsidian with n8n.

## How it works
This workflow lets you retrieve specific Airtable data in seconds, directly within your Obsidian note, using n8n. By highlighting a question in Obsidian and sending it to a webhook via the Post Webhook Plugin, you can fetch specific data from your Airtable base and instantly insert the response back into your note.

The workflow leverages OpenAI’s GPT model to interpret your query, extract the relevant data from Airtable, and format the result for seamless insertion into your note. A sketch of the webhook round trip is shown after this description.

## Set up steps
1. Install the Post Webhook Plugin: Add this plugin to your Obsidian vault from the plugin store or GitHub.
2. Set up the n8n Webhook: Copy the webhook URL generated in this workflow and insert it into the Post Webhook Plugin's settings in Obsidian.
3. Configure Airtable Access: Link your Airtable account and specify the desired base and table to pull data from.
4. Test the Workflow: Highlight a question in your Obsidian note, use the “Send Selection to Webhook” command, and verify that data is returned as expected.
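For orientation, a small TypeScript sketch of the round trip the Post Webhook Plugin performs against this workflow. The request body shape is an illustrative assumption, not the plugin's documented schema.

```typescript
// Simulates what the Post Webhook Plugin does: POST the highlighted text to the
// n8n webhook and read back the text that gets inserted into the note.
// The "content" field name is an assumption for illustration.
async function askAirtableViaWebhook(selection: string, webhookUrl: string): Promise<string> {
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content: selection }), // the highlighted question
  });
  if (!res.ok) throw new Error(`Webhook returned ${res.status}`);
  // The workflow responds with the formatted answer pulled from Airtable.
  return await res.text();
}
```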
by Daniel Nolde
## What it is
In version 1.78, n8n introduced a dedicated node for the OpenRouter service, which lets you use many different LLM models and providers and switch models on the fly in an agentic workflow. For earlier n8n versions, there is a workaround to make OpenRouter accessible: use the OpenAI node with an OpenRouter-specific Base URL. This simple workflow demonstrates the workaround for versions before 1.78, so you can use different LLM models dynamically with the existing n8n OpenAI LLM node and OpenAI credentials.

## What you can do
- Use any of the OpenRouter models
- Have the model configured or changed dynamically (by some external config, some rule, or a specific chat message)

## Setup steps
1. Import the workflow.
2. Make sure you have registered an account, purchased some credits, and created an API key for OpenRouter.ai.
3. Configure the "OpenRouter" credentials with your own credentials, using an OpenAI-type credential, but make sure the "Base URL" in the credential's config form is set to https://openrouter.ai/api/v1 so OpenRouter is used instead of OpenAI.
4. Open the "Settings" node and change the model value to any valid model ID from the OpenRouter models list, or have the model property set dynamically (see the sketch after this description for what the underlying API call looks like).
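For context, a minimal TypeScript sketch of what the workaround amounts to at the API level: an OpenAI-style chat completions request sent to the OpenRouter Base URL configured above. The model ID is illustrative; use any valid ID from the OpenRouter models list.

```typescript
// What the OpenAI node effectively does once its Base URL points at OpenRouter:
// a standard OpenAI-style chat completion request, sent to OpenRouter instead.
async function chatViaOpenRouter(prompt: string, openRouterKey: string) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${openRouterKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4o-mini", // swap dynamically to change providers/models
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```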
by Monospace Design
## What is this workflow doing?
This simple workflow pulls the latest Euro foreign exchange reference rates from the European Central Bank and returns them in response to an incoming HTTP request (GET) received via a Webhook trigger node.

## Setup
- **no authentication** needed
- the workflow is ready to use
- **test** the workflow template by hitting the test workflow button and calling the URL in the webhook node
- optional: choose your own Webhook listening path in the Webhook trigger node

## Usage
There are two possible usage scenarios (see the sketch after this description for calling both from client code):
- get all Euro exchange rates as an array of objects
- get only a specific currency exchange rate as a single object

### Single exchange rate
Using the HTTP query ?foreign=USD (where USD is one of the available currency symbols) will return only that specifically requested rate. Response example:
{"currency":"USD","rate":"1.0852"}

### All available rates
If no query is provided, all available rates are returned. Response example:
[{"currency":"USD","rate":"1.0852"},{"currency":"JPY","rate":"163.38"},{"currency":"BGN","rate":"1.9558"},{"currency":"CZK","rate":"25.367"},{"currency":"DKK","rate":"7.4542"},{"currency":"GBP","rate":"0.85495"},{"currency":"HUF","rate":"389.53"},{"currency":"PLN","rate":"4.3053"},{"currency":"RON","rate":"4.9722"},{"currency":"SEK","rate":"11.1675"},{"currency":"CHF","rate":"0.9546"},{"currency":"ISK","rate":"149.30"},{"currency":"NOK","rate":"11.4285"},{"currency":"TRY","rate":"33.7742"},{"currency":"AUD","rate":"1.6560"},{"currency":"BRL","rate":"5.4111"},{"currency":"CAD","rate":"1.4674"},{"currency":"CNY","rate":"7.8100"},{"currency":"HKD","rate":"8.4898"},{"currency":"IDR","rate":"16962.54"},{"currency":"ILS","rate":"3.9603"},{"currency":"INR","rate":"89.9375"},{"currency":"KRW","rate":"1444.46"},{"currency":"MXN","rate":"18.5473"},{"currency":"MYR","rate":"5.1840"},{"currency":"NZD","rate":"1.7560"},{"currency":"PHP","rate":"60.874"},{"currency":"SGD","rate":"1.4582"},{"currency":"THB","rate":"38.915"},{"currency":"ZAR","rate":"20.9499"}]

## Further info
Read more about Euro foreign exchange reference rates here.
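A short TypeScript sketch of both usage scenarios from a client's point of view; the webhook URL below is a placeholder for the one exposed by your Webhook node.

```typescript
// Example client calls against the workflow's webhook (replace with your own URL).
const WEBHOOK_URL = "https://your-n8n-instance/webhook/euro-rates"; // placeholder

interface Rate {
  currency: string;
  rate: string;
}

// Single rate: ...?foreign=USD -> {"currency":"USD","rate":"1.0852"}
async function getRate(symbol: string): Promise<Rate> {
  const res = await fetch(`${WEBHOOK_URL}?foreign=${encodeURIComponent(symbol)}`);
  return (await res.json()) as Rate;
}

// All rates: no query -> [{"currency":"USD","rate":"1.0852"}, ...]
async function getAllRates(): Promise<Rate[]> {
  const res = await fetch(WEBHOOK_URL);
  return (await res.json()) as Rate[];
}
```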
by Khaled
# 🌐 Web Server Monitor & Alert System
This automation pings web servers at regular intervals, logs their status, and sends email alerts if a server goes down. It’s perfect for maintaining visibility over server uptime — without complex monitoring tools.

## 🧠 How It Works
This workflow performs minute-by-minute checks on all listed servers in a Google Sheet and:
- ✅ Logs all reachable servers in an “Alive” log.
- 🔻 Sends an email alert if a server is unreachable.
- 📄 Logs failed servers in a “Down” sheet with timestamps.

A minimal sketch of the check-and-route logic follows this description.

## 🧩 Key Components
### ⏰ 1. Schedule Trigger
Runs the workflow every minute for real-time monitoring.

### 📄 2. Web Servers List (Google Sheets)
Pulls server IPs or hostnames from a Google Sheet named Server_List. Each row = one server to monitor. This makes adding/removing servers effortless — just update the sheet.

### 🌐 3. Servers Alive Check (HTTP Request)
Performs an HTTP GET request to each server (e.g., http://your-server.com). If the request fails, it automatically triggers the error path (handled via continueOnFail).

### ✅ 4. Web Server Alive Log (Google Sheets)
Records successful pings in Server_Status_Alive with:
- Timestamp
- Server IP
- Status = Alive

This log can be used for uptime reports or audits.

### 📧 5. Server Down Notification (Gmail)
If a server fails, this node sends an email to the admin. It includes:
- Server address
- Timestamp
- Suggested action

### 📄 6. Web Server Down Log (Google Sheets)
Logs failed pings in a separate sheet for historical tracking and debugging.

## ✅ Main Advantages
- **Live Server Monitoring**: Stay informed about server health in near real-time.
- **No-Code Configuration**: Add/remove servers from the Google Sheet — no need to touch the workflow.
- **Email Alerts on Failure**: Proactively notifies you before users report the issue.
- **Audit-Ready Logging**: Maintains logs for both healthy and failed checks for documentation or reporting.
- **Flexible & Scalable**: Monitor 1 or 100 servers with the same template — just scale the list.

## ⚙️ Setup Steps
### 🔑 Prerequisites
- Google Sheet with server list (column name = “Server”)
- Gmail OAuth2 connection for alerts
- n8n instance running regularly

### 🛠 Configuration
1. **Google Sheets**
   - Sheet 1 (Server_List): Your list of servers.
   - Sheet 2 (Server_Status_Alive): Log for reachable servers.
   - Sheet 3 (Server_Status_Down): Log for unreachable servers.
2. **Gmail Integration**
   - Connect your Gmail account in the Server Down Notification node.
   - Edit recipient email and message content as needed.
3. **HTTP Check**
   - Adjust the HTTP request URL template if using port numbers or paths (e.g., http://{{Server}}:8080/status).
4. **Schedule**
   - Default is every 1 minute. Change via Schedule Trigger if needed.

## 🧪 Testing
1. Input a reachable server (e.g., example.com) and an unreachable IP.
2. Run the workflow manually or wait for the next scheduled run.
3. Check:
   - Alive log updates correctly.
   - Down log records failures.
   - Email alert is received.

## 🚀 Deployment
Activate the workflow, and it will quietly run in the background, notifying you of any server downtime instantly while keeping logs for future review.
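For illustration, a TypeScript sketch of the check performed for each row of Server_List and the alive/down routing that follows; the actual template relies on the HTTP Request node with continueOnFail rather than custom code.

```typescript
// Minimal sketch of the per-server check and the "alive vs. down" routing.
interface CheckResult {
  server: string;
  timestamp: string;
  alive: boolean;
}

async function checkServer(server: string, timeoutMs = 5000): Promise<CheckResult> {
  const timestamp = new Date().toISOString();
  try {
    const res = await fetch(`http://${server}`, {
      signal: AbortSignal.timeout(timeoutMs), // don't hang on dead hosts
    });
    return { server, timestamp, alive: res.ok };
  } catch {
    // Network error or timeout -> treated like the node's continueOnFail error path.
    return { server, timestamp, alive: false };
  }
}

// alive === true  -> append a row to Server_Status_Alive
// alive === false -> send the Gmail alert and append a row to Server_Status_Down
```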
by Ria
This workflow demonstrates how to use the workflowStaticData() function to store variables that persist across workflow executions.
https://docs.n8n.io/code/cookbook/builtin/get-workflow-static-data/

This can be useful, for example, when working with access tokens that expire after a certain time period. Using static data we can keep a record of the access token and its expiry time and build our workflow logic around it (see the sketch after this description).

## Important
Static data only persists across production executions, i.e. executions triggered by Webhooks or Schedule Triggers (not manual executions!). For this, the workflow has to be activated.

## Setup
1. Configure the HTTP Request node to fetch the access token from your API (optional).
2. Activate the workflow.
3. Test the workflow with the webhook production link.
4. Check how the static data is populated by inspecting the individual executions.

## Feedback
If you found this useful or want to report some missing information - I'd be happy to hear from you at ria@n8n.io
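A minimal sketch of the token-caching pattern, written as annotation-free TypeScript so it can be pasted into an n8n Code node. It assumes the $getWorkflowStaticData helper described in the linked docs; the actual token refresh is left to an HTTP Request node, as in the template.

```typescript
// Read the persistent store for this workflow (see the linked docs).
const staticData = $getWorkflowStaticData('global');
const now = Date.now();

// The cached token is only usable if it exists and has not expired yet.
const tokenIsValid =
  Boolean(staticData.accessToken) && now < (staticData.expiresAt ?? 0);

// Downstream, an IF node can branch on tokenIsValid:
//  - false -> the HTTP Request node fetches a new token, and a later Code node saves it:
//             staticData.accessToken = $json.access_token;
//             staticData.expiresAt   = Date.now() + $json.expires_in * 1000;
//  - true  -> reuse staticData.accessToken directly.
return [{ json: { tokenIsValid, accessToken: staticData.accessToken ?? null } }];
```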
by Cameron Wills
## Who is this for?
Content creators, social media managers, digital marketers, and researchers who need to download original TikTok videos without watermarks for analysis, repurposing, or archiving purposes.

## What problem does this workflow solve?
Downloading TikTok videos without watermarks typically requires using questionable third-party websites that may have limitations, ads, or privacy concerns. This workflow provides a clean, automated solution that can be integrated into your own systems and processes.

## What this workflow does
This workflow automates the process of downloading TikTok videos without watermarks in three simple steps:
1. Fetch the TikTok video page by providing the video URL
2. Extract the raw video URL from the page's HTML data
3. Download the original video file without watermark
4. (Optional) Upload to Google Drive with public sharing link generation

The workflow uses web scraping techniques to extract the original video source directly from TikTok's own servers, maintaining the highest possible quality without any added watermarks or branding. A rough sketch of the fetch-and-extract step follows this description.

## Setup (Est. time: 5-10 minutes)
Before getting started, you'll need:
- n8n installation
- The URL of a TikTok you want to download
- (Optional) Google Drive API enabled in Google Cloud Console with OAuth Client ID and Client Secret credentials if you want to use the upload feature

## How to customize this workflow to your needs
- Replace the example TikTok URL with your desired video links
- Modify the file naming convention for downloaded videos
- Integrate with other nodes to process videos after downloading
- Create a webhook to trigger the workflow from external applications
- Set up a schedule to regularly download videos from specific accounts

This workflow can be extended to support various use cases like trending content analysis, competitor research, creating compilation videos, or building a content library for inspiration. It provides a foundation that can be customized to fit into larger automated workflows for content creation and social media management.
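A rough TypeScript sketch of the fetch-and-extract step, with heavy caveats: TikTok's page markup changes frequently, so the "playAddr" field name and the regex below are assumptions for illustration only, not a guaranteed extraction method.

```typescript
// Fetch the video page and try to pull a direct video URL out of the embedded JSON.
async function extractVideoUrl(videoPageUrl: string): Promise<string | null> {
  const res = await fetch(videoPageUrl, {
    // A browser-like User-Agent makes the server return the full HTML page.
    headers: { "User-Agent": "Mozilla/5.0" },
  });
  const html = await res.text();

  // Assumed field name; inspect the page HTML yourself to find the current one.
  const match = html.match(/"playAddr":"(https:[^"]+)"/);
  if (!match) return null;

  // Embedded JSON escapes characters like "/" and unicode sequences, so decode it.
  return JSON.parse(`"${match[1]}"`);
}
```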
by Guillaume Duvernay
## Description
This template provides a simple and powerful backend for adding speech-to-text capabilities to any application. It creates a dedicated webhook that receives an audio file, transcribes it using OpenAI's gpt-4o-mini model, and returns the clean text. To help you get started immediately, you'll find a complete, ready-to-use HTML code example right inside the workflow in a sticky note. This code creates a functional recording interface you can use for testing or as a foundation for your own design.

## Who is this for?
- **Developers:** Quickly add a transcription feature to your application by calling this webhook from your existing frontend or backend code.
- **No-code/Low-code builders:** Embed a functional audio recorder and transcription service into your projects by using the example code found inside the workflow.
- **API enthusiasts:** A lean, practical example of how to use n8n to wrap a service like OpenAI into your own secure and scalable API endpoint.

## What problem does this solve?
- **Provides a ready-made API:** Instantly gives you a secure webhook to handle audio file uploads and transcription processing without any server setup.
- **Decouples frontend from backend:** Your application only needs to know about one simple webhook URL, allowing you to change the backend logic in n8n without touching your app's code.
- **Offers a clear implementation pattern:** The included example code provides a working demonstration of how to send an audio file from a browser and handle the response—a pattern you can replicate in any framework.

## How it works
This solution works by defining a clear API contract between your application (the client) and the n8n workflow (the backend).

**The client-side technique:**
1. Your application's interface records or selects an audio file.
2. It then makes a POST request to the n8n webhook URL, sending the audio file as multipart/form-data.
3. It waits for the response from the webhook, parses the JSON body, and extracts the value of the Transcript key.

You can see this exact pattern in action in the example code provided in the workflow's sticky note, and in the short sketch after this description.

**The n8n workflow (backend):**
1. The Webhook node catches the incoming POST request and grabs the audio file.
2. The HTTP Request node sends this file to the OpenAI API.
3. The Set node isolates the transcript text from the API's response.
4. The Respond to Webhook node sends a clean JSON object ({"Transcript": "your text here..."}) back to your application.

## Setup
**Configure the n8n workflow:**
1. In the Transcribe with OpenAI node, add your OpenAI API credentials.
2. Activate the workflow to enable the endpoint.
3. Click the "Copy" button on the Webhook node to get your unique Production Webhook URL.

**Integrate with the frontend:**
1. Inside the workflow, find the sticky note labeled "Example Frontend Code Below".
2. Copy the complete HTML from the note below it.
3. ⚠️ Important: In the code you just copied, find the line const WEBHOOK_URL = 'YOUR WEBHOOK URL'; and replace the placeholder with the Production Webhook URL from n8n.
4. Save the code as an HTML file and open it in your browser to test.

## Taking it further
- **Save transcripts:** Add an **Airtable** or **Google Sheets** node to log every transcript that comes through the workflow.
- **Error handling:** Enhance the workflow to catch potential errors from the OpenAI API and respond with a clear error message.
- **Analyze the transcript:** Add a **Language Model** node after the transcription step to summarize the text, classify its sentiment, or extract key entities before sending the response.
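For a framework-agnostic view of the client-side technique, here is a short TypeScript sketch of the API contract described above; the multipart field name "audio" is an assumption, so match whatever the example frontend in the sticky note actually sends.

```typescript
// POST the recorded audio as multipart/form-data and read the "Transcript" key
// from the JSON the Respond to Webhook node returns.
async function transcribe(audioBlob: Blob, webhookUrl: string): Promise<string> {
  const form = new FormData();
  form.append("audio", audioBlob, "recording.webm"); // field name is an assumption

  const res = await fetch(webhookUrl, { method: "POST", body: form });
  if (!res.ok) throw new Error(`Webhook returned ${res.status}`);

  const data = (await res.json()) as { Transcript: string };
  return data.Transcript;
}
```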
by Agent Studio
## Overview
This workflow answers user requests sent via Mac Shortcuts. Several Shortcuts call the same webhook, each with a query and a type of query.

Types of query are:
- translate to english
- translate to spanish
- correct grammar (without changing the actual content)
- make content shorter
- make content longer

## How it works
1. Select a text you are writing.
2. Launch the shortcut.
3. The text is sent to the webhook.
4. Depending on the type of request, a different prompt is used (see the sketch after this description).
5. Each request is sent to an OpenAI node.
6. The workflow responds to the request with the response from GPT.
7. The Shortcut replaces the selected text with the new one.

For a demo and setup instructions:

## How to use it
1. Activate the workflow.
2. Download this Shortcut template.
3. Install the shortcut.
4. In step 2 of the shortcut, change the URL of the Webhook.
5. In Shortcut details, "add Keyboard Shortcut" with the key you want to use to launch the shortcut.
6. Go to settings, advanced, check "Allow running scripts".

You are ready to use the shortcut. Select a text and hit the keyboard shortcut you just defined.
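A small TypeScript sketch of the type-to-prompt routing that happens before the OpenAI call; the prompt wording is illustrative, and the template keeps its own prompts inside the n8n nodes.

```typescript
// Map each request type coming from the Shortcut to the prompt used for GPT.
type QueryType =
  | "translate_to_english"
  | "translate_to_spanish"
  | "correct_grammar"
  | "make_shorter"
  | "make_longer";

const PROMPTS: Record<QueryType, string> = {
  translate_to_english: "Translate the following text to English.",
  translate_to_spanish: "Translate the following text to Spanish.",
  correct_grammar: "Correct the grammar without changing the actual content.",
  make_shorter: "Rewrite the following text to be shorter.",
  make_longer: "Rewrite the following text to be longer.",
};

// Build the messages the OpenAI node would receive for one webhook call.
function buildMessages(type: QueryType, query: string) {
  return [
    { role: "system", content: PROMPTS[type] },
    { role: "user", content: query },
  ];
}
```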
by bangank36
This workflow restores all n8n instance workflows from GitHub backups using the n8n API node. It complements the Backup Your Workflows to GitHub template by allowing users to seamlessly restore previously saved workflows.

## How It Works
The workflow fetches workflows stored in a GitHub repository and imports them into your n8n instance. A rough sketch of the equivalent API calls is shown after this description.

## Setup Instructions
To configure the workflow, update the Globals node with the following values:
- **repo.owner** – Your GitHub username
- **repo.name** – The name of your GitHub repository storing the workflows
- **repo.path** – The folder path within the repository where workflows are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and workflows are stored in a workflows/ folder, you would set:
- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → workflows/

## Required Credentials
- **GitHub API** – Access to your repository
- **n8n API** – To import workflows into your n8n instance

## Who Is This For?
This template is ideal for users who want to restore their workflows from GitHub backups, ensuring easy migration and recovery in case of data loss.

Check out my other templates: 👉 My n8n Templates
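A rough TypeScript sketch of the equivalent API calls, assuming each backup file is a full workflow JSON export; the template itself performs these steps with the GitHub and n8n API nodes.

```typescript
// List the backup folder on GitHub, download each workflow JSON, and create it
// through the n8n public API.
async function restoreWorkflows(opts: {
  owner: string;      // repo.owner
  repo: string;       // repo.name
  path: string;       // repo.path, e.g. "workflows/"
  githubToken: string;
  n8nBaseUrl: string; // e.g. "https://your-n8n-instance"
  n8nApiKey: string;
}) {
  const listRes = await fetch(
    `https://api.github.com/repos/${opts.owner}/${opts.repo}/contents/${opts.path}`,
    { headers: { Authorization: `Bearer ${opts.githubToken}` } }
  );
  const files: Array<{ name: string; download_url: string; type: string }> =
    await listRes.json();

  for (const file of files) {
    if (file.type !== "file" || !file.name.endsWith(".json")) continue;
    const workflow = await (await fetch(file.download_url)).json();

    // Import the workflow into the n8n instance via its public API.
    await fetch(`${opts.n8nBaseUrl}/api/v1/workflows`, {
      method: "POST",
      headers: {
        "X-N8N-API-KEY": opts.n8nApiKey,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(workflow),
    });
  }
}
```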