by Yang
## What this workflow does
This workflow automatically turns new technical video uploads into short, engaging Facebook post drafts—complete with a suggested image—and saves the results to Google Sheets for quick review or publishing. It's designed to help you repurpose tutorial or demo videos into ready-to-use social content without any manual writing or design effort.

## What problem is this workflow solving?
Manually writing Facebook posts for every new tutorial or product video takes time, especially when you want them to be engaging and consistent. This workflow solves that by using AI to watch for new videos, extract meaningful insights, and write posts and create visuals automatically—saving hours of work.

## Who is this for?
This workflow is ideal for:
- Content creators uploading tutorial videos
- Marketing teams working with how-to or product videos
- Agencies and automation pros building scalable social workflows for clients

## How it works
1. **Trigger:** Starts when a new video is uploaded to a specific Google Drive folder.
2. **Download & Convert:** Downloads the video and converts it to base64 (see the sketch below).
3. **Extract Insights:** Dumpling AI analyzes the video and extracts structured insights such as topic, tools mentioned, and key steps.
4. **Generate Post:** GPT-4o creates a short, friendly Facebook post using those insights, along with an image prompt.
5. **Create Visual:** Dumpling AI generates an image using the prompt.
6. **Save to Sheet:** The Facebook post and image URL are saved to a Google Sheet.

## Setup
1. Create a Google Sheet to store the posts and images.
2. Connect your Google Drive, Google Sheets, Dumpling AI, and OpenAI credentials in n8n.
3. Update the workflow with:
   - Your Google Drive folder ID
   - Your target Google Sheet ID
4. (Optional) Edit the prompt used in the GPT node if you want a different tone, style, or structure for the post.

## How to customize the workflow
- **Change the platform:** Replace "Facebook" in the prompt with LinkedIn, Instagram, or another platform.
- **Use a different image tool:** You can swap Dumpling AI for any other image generation API (e.g. DALL·E, or Midjourney via webhook).
- **Add auto-publishing:** Add a Facebook or social media node to publish the generated post directly instead of just saving to Google Sheets.
- **Tag videos by content type:** Use AI to classify videos into categories and store them in separate tabs or sheets.
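For step 2, a minimal sketch of the base64 conversion in an n8n Code node could look like the following, assuming the download node stored the file under the binary property named data (adjust the property name to match your workflow):

```javascript
// Minimal sketch for an n8n Code node ("Run Once for All Items" mode):
// read the downloaded video's binary data and expose it as a base64 string
// for the Dumpling AI request body. Assumes the binary property is "data".
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data');

return [{
  json: {
    videoBase64: buffer.toString('base64'),
  },
}];
```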
by LukaszB
This workflow is designed for freelancers, solopreneurs, and business owners who receive a high volume of irrelevant messages in their Gmail inbox — from cold offers to spammy promotions — and want to automatically filter and delete them using AI. Its main purpose is to scan new emails with the help of OpenAI, classify their content, and automatically delete those considered marketing (OFFER) or junk (SPAM). The result is a cleaner inbox without the need to manually sift through low-value messages.

The classification logic uses a detailed system prompt with practical examples, so even complex or borderline messages are categorized accurately. Important emails — such as payment confirmations, shipping updates, or genuine business inquiries — remain untouched. This helps maintain a professional inbox with only valuable and relevant communication. The entire process runs automatically in the background and can be customized further — for example, to archive instead of delete, or to log deleted emails for review.

## How it works
When triggered (every hour), the workflow fetches new Gmail messages using the Gmail Trigger node. Each message is passed to an AI classifier powered by OpenAI, which reads the message body (email snippet) and returns one of three labels:
- **SPAM:** Obvious junk messages, scams, or low-effort bulk messages
- **OFFER:** Cold outreach, discount promotions, cart reminders, or generic advertising
- **IMPORTANT:** Valuable information for the user, even if commercial (e.g., invoices, order updates, personal inquiries)

The workflow then routes the result through an IF node. If a message is marked as SPAM or OFFER, it is immediately deleted from Gmail via the Gmail Delete node. Emails marked as IMPORTANT are ignored and remain in the inbox. The classification is entirely AI-driven and based on message content — sender address, headers, and metadata are not used. (An example prompt is sketched at the end of this description.)

## How to set up
To get started, simply connect two credentials:
- A Gmail account using OAuth2 (via the Gmail Trigger and Gmail Delete nodes)
- An OpenAI API key (used by the AI classifier node)

No advanced setup is needed beyond these two connections. Optionally, you can review or modify the system prompt used for classification — it's available inside the workflow's LangChain AI Agent node. The prompt is in English, so it's recommended to use this workflow with English-language emails for best results.

By default, the workflow deletes matching emails immediately. If you prefer safer testing, you can modify the Gmail node to archive, label, or log emails instead of deleting them. The full workflow takes around 5–10 minutes to configure and includes a sticky note with additional instructions and warnings.
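For illustration, a classifier system prompt in the spirit of the one described above might look like the sketch below. This is not the exact prompt shipped in the workflow; open the LangChain AI Agent node to see and edit the real one.

```text
You are an email triage assistant. Read the email snippet and reply with
exactly one label: SPAM, OFFER, or IMPORTANT.

SPAM - obvious junk, scams, or low-effort bulk mail.
  Example: "Congratulations! Click here to claim your prize."
OFFER - cold outreach, discount promotions, cart reminders, generic ads.
  Example: "20% off everything this weekend only."
IMPORTANT - anything the user needs, even if commercial.
  Example: "Your invoice #1042 for March is attached."

Reply with the label only. If in doubt, choose IMPORTANT.
```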
by Gerald Denor
# AI-Powered Proposal Generator - Sales Automation Workflow

## Overview
This n8n workflow automates the entire proposal generation process using AI, transforming client requirements into professional, customized proposals delivered via email in seconds.

## Use Case
Perfect for agencies, consultants, and sales teams who need to generate high-quality proposals quickly. Instead of spending hours writing proposals manually, this workflow captures client information through a web form and uses GPT-4 to generate contextually relevant, professional proposals.

## How It Works
1. **Form Trigger** - Captures client information through a customizable web form
2. **OpenAI Integration** - Processes form data and generates structured proposal content
3. **Google Drive** - Creates a copy of your proposal template
4. **Google Slides** - Populates the template with AI-generated content
5. **Gmail** - Automatically sends the completed proposal to the client

## Key Features
- **AI Content Generation:** Uses GPT-4 to create personalized proposal content
- **Professional Templates:** Integrates with Google Slides for polished presentations
- **Automated Delivery:** Sends proposals directly to clients via email
- **Form Integration:** Captures all necessary client data through web forms
- **Customizable Output:** Generates structured proposals with multiple sections

## Template Sections Generated
- Proposal title and description
- Problem summary analysis
- Three-part solution breakdown
- Project scope details
- Milestone timeline with dates
- Cost integration

## Requirements
- **n8n instance** (cloud or self-hosted)
- **OpenAI API key** for content generation
- **Google Workspace account** for Slides and Gmail
- **Basic n8n knowledge** for setup and customization

## Setup Complexity
Intermediate - requires API credentials setup and basic workflow customization.

## Benefits
- **Time Savings:** Reduces proposal creation from hours to minutes
- **Consistency:** Ensures all proposals follow the same professional structure
- **Personalization:** AI analyzes client needs for relevant content
- **Automation:** Eliminates manual copy-paste and formatting work
- **Scalability:** Handle multiple proposal requests simultaneously

## Customization Options
- Modify AI prompts for different industries or services
- Customize the Google Slides template design
- Adjust form fields for specific information needs
- Personalize email templates and signatures
- Configure milestone templates for different project types

## Error Handling
Includes basic error handling for API failures and form validation to ensure reliable operation.

## Security Notes
All credentials have been removed from this template. Users must configure their own:
- OpenAI API credentials
- Google OAuth2 connections for Slides, Drive, and Gmail
- Form webhook configuration

This workflow demonstrates practical AI integration in business processes and showcases n8n's capabilities for complex automation scenarios.
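As a rough illustration of the sections listed above, the OpenAI node could be asked to return structured JSON that the Google Slides node then maps onto template placeholders. The field names and values below are hypothetical; align them with the placeholders in your own Slides template:

```json
{
  "proposal_title": "Website Redesign for Acme Co.",
  "proposal_description": "A modern, conversion-focused redesign of acme.com.",
  "problem_summary": "Outdated design and slow load times are hurting conversions.",
  "solution_part_1": "Discovery workshop and UX audit.",
  "solution_part_2": "Design system and responsive rebuild.",
  "solution_part_3": "Performance optimization and launch support.",
  "project_scope": "Up to 12 page templates, CMS migration, 2 revision rounds.",
  "milestones": [
    { "name": "Discovery complete", "date": "2024-07-01" },
    { "name": "Design approved", "date": "2024-07-22" },
    { "name": "Launch", "date": "2024-08-15" }
  ],
  "cost": "$14,500"
}
```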
by bangank36
This workflow restores all n8n instance workflows from GitHub backups using the n8n API node. It complements the Backup Your Workflows to GitHub template by allowing users to seamlessly restore previously saved workflows.

## How It Works
The workflow fetches workflows stored in a GitHub repository and imports them into your n8n instance.

## Setup Instructions
To configure the workflow, update the Globals node with the following values:
- **repo.owner** - Your GitHub username
- **repo.name** - The name of your GitHub repository storing the workflows
- **repo.path** - The folder path within the repository where workflows are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and workflows are stored in a workflows/ folder, you would set:
- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → workflows/

## Required Credentials
- **GitHub API** - Access to your repository
- **n8n API** - To import workflows into your n8n instance

## Who Is This For?
This template is ideal for users who want to restore their workflows from GitHub backups, ensuring easy migration and recovery in case of data loss.

Check out my other templates: 👉 My n8n Templates
by Jonathan | NEX
# Supercharge Your Security Operations for Free

Stop wasting time manually investigating suspicious IP addresses. This workflow template is your launchpad to automating real-time IP cybersecurity analysis using the NixGuard platform, which you can use for free. This is the first of a two-part system designed to integrate seamlessly into your existing security stack, especially with Wazuh. It calls our main workflow, Automate IP Reputation Checks and Get AI Risk Summaries from NixGuard, to do the heavy lifting.

## What This Workflow Unlocks for You
- **Free AI-Powered Risk Summaries:** Don't just get data; get answers. NixGuard provides a clear, human-readable summary of why an IP is considered risky.
- **Automated IP Reputation Checks:** Programmatically check any IP against a vast array of threat intelligence sources.
- **A Foundation for Your SOC Automation:** Use the results to trigger your incident response process. The template includes a pre-built example of how to send a detailed alert to Slack, which you can easily adapt for Jira, TheHive, or any other tool.

## How the Two-Workflow System Works
This "dispatcher" workflow is designed for flexibility. It holds your API key and input, then calls the main analysis workflow. This allows you to easily create multiple triggers (e.g., one for Slack bots, one for webhooks) without duplicating the core logic.

## Critical Setup Instructions
1. **Get the main workflow:** First, add the main analysis engine to your n8n instance from the community page: NixGuard Analysis Workflow.
2. **Add your free API key:** In this workflow, click the blue Set API Key & Initial Prompt node and paste your free NixGuard API key into the apiKey value field.
3. **Connect the workflows:** Click the purple Execute NixGuard & Wazuh Workflow node. In the parameters, use the dropdown to select the main analysis workflow you added in step 1.

Ready to automate your threat intelligence? Get your free API key and learn more:
🔗 Learn more about NixGuard: thenex.world
🔗 Get started with a free security subscription: thenex.world/security/subscribe

Tags: Free, IP Analysis, NixGuard, Wazuh, Security, Automation, AI, Cybersecurity, Threat Intelligence, SOC, Incident Response, IP Reputation, DevSecOps, API
by Ramsey Njire
## Who Is This For?
This workflow is perfect for content creators, marketers, and business professionals who receive regular newsletters and want to effortlessly convert them into engaging LinkedIn posts. By automating the extraction and repurposing process, you can save time and consistently share thoughtful updates with your network.

## What Problem Does This Workflow Solve?
Manually reading newsletters, extracting the key points, and then formatting that content into professional, engaging LinkedIn posts can be time-consuming and error-prone. This workflow automates those steps by:
- **Filtering Emails:** Uses the Gmail node to process only those emails from a specific sender (e.g., newsletter@example.com).
- **Extracting Content:** Leverages OpenAI to identify and summarize the top news items in your newsletter.
- **Generating Posts:** Crafts concise, insightful LinkedIn posts in a smart, deadpan style with a touch of subtle humor.
- **Publishing:** Posts the generated content directly to LinkedIn.

## What This Workflow Does
- **Filter Newsletters:** The Gmail node is set up to only handle emails from your chosen sender, ensuring that only relevant newsletters are processed.
- **Extract Key Content:** An OpenAI node analyzes the newsletter text to pull out the most important news items, including headlines and summaries.
- **Split Content:** A Split Out node divides the extracted content so each news item is processed on its own.
- **Generate LinkedIn Posts:** Another OpenAI node takes each news item's details and produces a well-structured LinkedIn post that delivers practical insights and ends with a reflective observation or question.
- **Publish to LinkedIn:** The LinkedIn node publishes the crafted posts directly to your account.

## Setup
1. **Gmail node:** Rename it to "Filter Gmail Newsletter" and configure it to filter emails by your newsletter sender.
2. **OpenAI nodes:** Ensure your OpenAI API credentials are set up correctly. Customize the prompt if needed to match your desired tone.
3. **LinkedIn node:** Rename it to "Post to LinkedIn" and confirm that your LinkedIn OAuth2 credentials are properly configured.

## How to Customize
- **OpenAI prompts:** Adjust the prompts in the OpenAI nodes to fine-tune the post tone and output formatting.
- **Email filter:** Change the Gmail filter to match the sender of your newsletters.
- **Post processing:** Optionally, add extra formatting (using Function nodes) to further enhance the readability of the generated LinkedIn posts.

This template offers an automated, hands-off solution to transform your newsletter content into engaging LinkedIn updates, keeping your audience informed and inspired with minimal effort.
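For reference, a post-generation prompt in the tone this template describes might look like the following sketch. The {{headline}} and {{summary}} placeholders are illustrative; wire them to the fields produced by the extraction node:

```text
You write LinkedIn posts in a smart, deadpan style with a touch of subtle humor.

Given one news item, write a post that:
- opens with a concrete, practical insight (no clickbait),
- stays under 150 words,
- ends with a reflective observation or question.

Headline: {{headline}}
Summary: {{summary}}
```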
by Cameron Wills
## Who is this for?
Content creators, social media managers, digital marketers, and researchers who need to download original TikTok videos without watermarks for analysis, repurposing, or archiving purposes.

## What problem does this workflow solve?
Downloading TikTok videos without watermarks typically requires using questionable third-party websites that may have limitations, ads, or privacy concerns. This workflow provides a clean, automated solution that can be integrated into your own systems and processes.

## What this workflow does
This workflow automates the process of downloading TikTok videos without watermarks in three simple steps:
1. Fetch the TikTok video page by providing the video URL
2. Extract the raw video URL from the page's HTML data (see the extraction sketch below)
3. Download the original video file without watermark
4. (Optional) Upload to Google Drive with public sharing link generation

The workflow uses web scraping techniques to extract the original video source directly from TikTok's own servers, maintaining the highest possible quality without any added watermarks or branding.

## Setup (est. time: 5-10 minutes)
Before getting started, you'll need:
- An n8n installation
- The URL of a TikTok video you want to download
- (Optional) Google Drive API enabled in Google Cloud Console, with OAuth Client ID and Client Secret credentials, if you want to use the upload feature

## How to customize this workflow to your needs
- Replace the example TikTok URL with your desired video links
- Modify the file naming convention for downloaded videos
- Integrate with other nodes to process videos after downloading
- Create a webhook to trigger the workflow from external applications
- Set up a schedule to regularly download videos from specific accounts

This workflow can be extended to support various use cases like trending content analysis, competitor research, creating compilation videos, or building a content library for inspiration. It provides a foundation that can be customized to fit into larger automated workflows for content creation and social media management.
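The extraction step depends on TikTok's page internals, which change over time. As a hedged sketch, a Code node could pull the playback URL out of the JSON blob TikTok embeds in the page. The playAddr key below is an assumption about that embedded JSON; verify it against the actual page source before relying on it:

```javascript
// Hypothetical n8n Code node: extract a watermark-free playback URL from
// the fetched TikTok page HTML. The "playAddr" key is an assumption and
// may need updating if TikTok's page structure changes.
const html = $input.first().json.data; // HTML from the previous HTTP Request node

const match = html.match(/"playAddr":"([^"]+)"/);
if (!match) {
  throw new Error('No playAddr found - the TikTok page structure may have changed');
}

// The embedded JSON escapes "/" as "\u002F"; undo that to get a usable URL.
const videoUrl = match[1].replace(/\\u002F/g, '/');

return [{ json: { videoUrl } }];
```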
by Airtop
# Automating LinkedIn Company URL Verification

## Use Case
This automation verifies that a given LinkedIn URL actually belongs to a company by comparing the website listed on its LinkedIn page against the expected company domain. It is essential for ensuring data accuracy in lead qualification, enrichment, and CRM updates.

## What This Automation Does
Input parameters:
- **Company LinkedIn:** The LinkedIn URL to be verified.
- **Company Domain:** The expected domain (e.g., example.com) for validation.
- **Airtop Profile (connected to LinkedIn):** An Airtop Profile with LinkedIn authentication.

Output:
- Confirmation of whether the LinkedIn page corresponds to the provided domain.
- The verified LinkedIn URL if the match is confirmed.

## How It Works
1. Extracts the website URL from the specified LinkedIn company profile.
2. Compares the extracted URL with the provided company domain (a comparison sketch follows this description).
3. If the domain is contained in the extracted website, the LinkedIn profile is confirmed as valid.
4. Returns the original LinkedIn URL if the match is successful.

## Setup Requirements
- Airtop API key
- LinkedIn-authenticated Airtop Profile

## Next Steps
- **Use for LinkedIn discovery validation:** Ensure correctness after automated LinkedIn page discovery.
- **Combine with CRM updates:** Prevent incorrect LinkedIn links from being stored in your CRM.
- **Automate in data pipelines:** Use this as a validation gate before enrichment or scoring steps.
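The containment check in step 3 is simple string logic. Here is a hedged sketch of what it could look like in a Code node, with light normalization so a value like https://www.example.com/about still matches example.com. The field names (extractedWebsite, companyDomain, companyLinkedin) are hypothetical; match them to your workflow's data:

```javascript
// Hypothetical n8n Code node: check whether the website extracted from the
// LinkedIn page matches the expected company domain.
const extractedWebsite = $json.extractedWebsite; // e.g. "https://www.example.com/about"
const expectedDomain = $json.companyDomain.toLowerCase(); // e.g. "example.com"

// Normalize: lowercase, strip protocol and "www.", keep only the hostname.
const hostname = extractedWebsite
  .toLowerCase()
  .replace(/^https?:\/\//, '')
  .replace(/^www\./, '')
  .split('/')[0];

// Exact match, or a subdomain of the expected domain.
const isMatch = hostname === expectedDomain || hostname.endsWith('.' + expectedDomain);

return [{ json: { isMatch, verifiedUrl: isMatch ? $json.companyLinkedin : null } }];
```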
by Msaid Mohamed el hadi
# 📸 Instagram Full Profile Scraper with Apify and Google Sheets

This n8n workflow automates the process of scraping full Instagram profiles using a custom Apify actor, and logs the results into a Google Sheet. It is designed to run at scheduled intervals and process a list of usernames by calling the API, appending the results, and marking them as processed.

## 🚀 Features
- ⏱ **Scheduled Execution** – Runs automatically every few minutes.
- 📄 **Google Sheets Integration** – Reads a list of Instagram usernames and updates the same sheet.
- 🧠 **Apify Actor** – Fetches full public Instagram profile data.
- 🧮 **Aggregation** – Batches usernames for bulk scraping.
- ✍️ **Data Logging** – Appends scraped data to a second sheet.
- ✅ **Tracking** – Marks usernames as processed once scraped.

## 📁 Workflow Structure
```mermaid
graph TD;
  ScheduleTrigger --> GetUsernames;
  GetUsernames --> LimitItems;
  LimitItems --> AggregateUsernames;
  AggregateUsernames --> CallApifyActor;
  CallApifyActor --> AppendToSheet;
  CallApifyActor --> MarkAsScraped;
```

## 🛠 Setup
### Google Sheet
Create a Google Sheet with:
- Sheet 1 named Usernames (GID: 0), with the columns username and scraped
- Sheet 2 named fullprofiles (GID: 458127000)

Sample sheet: 🔗 Instagram Profile Sheet

### n8n Configuration
1. Import this workflow into your n8n instance.
2. Set up your Google Sheets credentials (googleSheetsOAuth2Api).
3. Replace the apify_api_yourtoken placeholder in the HTTP Request node with your Apify API token.

## 📦 Required Credentials
- **Google Sheets OAuth2** – For reading and writing sheet data.
- **Apify API Token** – To call the custom actor for profile scraping.

## 📊 Sheets Used
| Sheet Name | Purpose |
| --- | --- |
| Usernames | Source of usernames to scrape |
| fullprofiles | Destination of full profile data |

## 📌 Apify Actor Info
> Instagram Full Profile Scraper
> This actor fetches extended profile information from public Instagram profiles.

🔗 View on Apify

## 🔁 Workflow Nodes Overview
| Node | Purpose |
| --- | --- |
| Schedule Trigger | Triggers the workflow periodically. |
| Get Usernames | Reads usernames from the Usernames sheet. |
| Limit | Limits processing to 20 usernames per run. |
| Aggregate | Groups usernames into a batch for the API call. |
| Call Apify Actor | Sends the usernames to the Apify actor and receives profile data. |
| Append Full Profiles | Appends the scraped data to the fullprofiles sheet. |
| Mark Username as Scraped | Marks the processed usernames as scraped = TRUE. |
| Sticky Note | Provides a reference link to the Apify actor used. |

## 📌 Example Sheet Structure
Usernames sheet:
| username | scraped |
| --- | --- |
| exampleuser1 | |
| exampleuser2 | TRUE |

fullprofiles sheet:
| username | full_name | biography | follower_count | ... |
| --- | --- | --- | --- | --- |

## 🔐 Security & Notes
- This workflow does not bypass any Instagram privacy restrictions.
- It works only with public Instagram profiles.
- You are responsible for ensuring that scraping complies with Instagram's terms of service and any applicable laws.

## 📬 Support
For any issues, feel free to reach out:
👤 @mohamedgb00714
📧 mohamedgb00714@gmail.com
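For orientation, the HTTP Request node's call to Apify follows Apify's standard run-sync pattern. Here is a hedged sketch in plain JavaScript; the actor ID and the usernames input key are placeholders, so check the actor's documented input schema on Apify before relying on them:

```javascript
// Hypothetical sketch of the Apify actor call the HTTP Request node performs.
// ACTOR_ID and the "usernames" input key are placeholders to adapt.
const APIFY_TOKEN = process.env.APIFY_TOKEN;
const ACTOR_ID = 'username~instagram-full-profile-scraper';

async function scrapeProfiles(usernames) {
  const url = `https://api.apify.com/v2/acts/${ACTOR_ID}/run-sync-get-dataset-items?token=${APIFY_TOKEN}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ usernames }), // batch of usernames from the Aggregate node
  });
  if (!response.ok) throw new Error(`Apify call failed: ${response.status}`);
  return response.json(); // array of profile objects, one per username
}

// Example usage:
// const profiles = await scrapeProfiles(['exampleuser1', 'exampleuser2']);
```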
by Niklas Hatje
## Use Case
In most companies, employees have a lot of great ideas. That was the same for us at n8n. We wanted to make it as easy as possible for everyone to add their ideas to a formatted database — somewhere everyone already is all the time, where adding a new idea takes no extra effort. Since we're using Slack, it seemed the perfect place to easily capture ideas and collect them in Notion.

## What this workflow does
This workflow waits for a webhook call within Slack, which fires when users use the /idea command on a bot that you will create as part of this template. It then checks the command, adds the idea to Notion, and notifies the user about the newly added idea. (A sketch of the payload Slack sends appears at the end of this description.)

## Creating your Slack bot
1. Visit https://api.slack.com/apps, click on New App and choose a name and workspace.
2. Click on OAuth & Permissions and scroll down to Scopes -> Bot Token Scopes.
3. Add the chat:write scope.
4. Head over to Slash Commands and click on Create New Command.
5. Use /idea as the command.
6. Copy the test URL from the Webhook node into Request URL.
7. Add whatever feels best to the description and usage hint.
8. Go to Install App and click Install.

## Setup
1. Add a database in Notion with the columns Name and Creator.
2. Add your Notion credentials and add the integration to your Notion page.
3. Fill the setup node below.
4. Create your Slack app (see the steps above).
5. Click Test workflow and use the /idea command in Slack.
6. Activate the workflow and exchange the Request URL with the production URL from the webhook.

## How to adjust it to your needs
- Adjust the table in Notion, for example to add different types of ideas or the areas they impact.
- You might want to add different templates in Notion to make it easier for users to fill their ideas with details.
- Rename the Slack command to whatever works best for you.

## How to enhance this workflow
At n8n we use this workflow in combination with some others. For example, we have the following on top:
- We additionally have a /bug Slack command that adds a new bug to Linear, using AI to classify the bugs and move them to the right team (see this template and this template).
- We also added other types, like /pain, to be less solution-driven.
- To make it easier for everyone to give input, we added a Votes column that allows everyone to vote on ideas/pain points in the list.
- We're also running a workflow once a week that highlights the most popular new ideas and the most active voters (see here).
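For orientation, Slack delivers slash commands as an application/x-www-form-urlencoded POST. Parsed by the Webhook node, the body looks roughly like this (values are illustrative); the workflow reads command to verify it is /idea, then uses text and user_name for the Notion item:

```json
{
  "command": "/idea",
  "text": "Add dark mode to the dashboard",
  "user_name": "jane.doe",
  "user_id": "U024BE7LH",
  "channel_name": "general",
  "response_url": "https://hooks.slack.com/commands/T0001/12345/abcde"
}
```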
by Daniel Shashko
*This workflow contains community nodes that are only compatible with the self-hosted version of n8n.*

This workflow automates the process of scraping product data from e-commerce websites and using it to fine-tune a custom OpenAI GPT model for generating high-quality marketing copy and product descriptions.

## Main Use Cases
- Fine-tune OpenAI models with real product data from hundreds of supported e-commerce websites for marketing content generation.
- Create custom AI models specialized in writing compelling product descriptions across different industries and platforms.
- Automate the entire pipeline from data collection to model training using Bright Data's extensive scraper library.
- Generate marketing copy using your custom-trained model via an interactive chat interface.

## How it works
The workflow operates in two main phases, model training and model usage, organized into these stages:

### Data Collection & Processing
- Manually triggered to start the fine-tuning process.
- Uses Bright Data's web scraper to extract product information from any supported e-commerce platform (Amazon, eBay, Shopify stores, Walmart, Target, and hundreds of other websites).
- Collects product titles, brands, features, descriptions, ratings, and availability status from your chosen platform.
- Easily customizable to scrape different websites by simply changing the dataset configuration and product URLs.

### Training Data Preparation
- A Code node processes the scraped product data to create training examples in OpenAI's required JSONL format (an example record is sketched at the end of this description).
- For each product, it generates a complete training example with:
  - A system message defining the AI's role as a marketing assistant.
  - A user prompt containing specific product details (title, brand, features, original description snippet).
  - An assistant response providing an ideal marketing description template.
- Compiles all training examples into a single JSONL file ready for OpenAI fine-tuning.

### Model Fine-Tuning
- Uploads the training file to OpenAI using the OpenAI File Upload node.
- Initiates a fine-tuning job via an HTTP Request to OpenAI's fine-tuning API, using GPT-4o-mini as the base model.
- The fine-tuning process runs on OpenAI's servers to create your custom model.

### Interactive Chat Interface
- Provides a chat trigger that allows real-time interaction with your fine-tuned model.
- An AI Agent node connects to your custom-trained OpenAI model.
- Users can chat with the model to generate product descriptions, marketing copy, or other content based on the training.

### Custom Model Integration
- The OpenAI Chat Model node is configured to use your specific fine-tuned model ID.
- Delivers responses trained on your product data for consistent, high-quality marketing content.

## Summary Flow
Manual Trigger → Scrape E-commerce Products (Bright Data) → Process & Format Training Data (Code) → Upload Training File (OpenAI) → Start Fine-Tuning Job (HTTP Request)

In parallel: Chat Trigger → AI Agent → Custom Fine-Tuned Model Response

## Benefits
- Fully automated pipeline from raw product data to a trained AI model.
- Works with hundreds of different e-commerce websites through Bright Data's extensive scraper library.
- Creates specialized models trained on real e-commerce data for authentic marketing copy across various industries.
- Scalable solution that can be adapted to different product categories, niches, or websites.
- Interactive chat interface for immediate access to your custom-trained model.
- Cost-effective fine-tuning using one of OpenAI's most efficient models (GPT-4o-mini).
- Easily customizable with different websites, product URLs, training prompts, and model configurations.

## Setup Requirements
- Bright Data API credentials for web scraping (supports hundreds of e-commerce websites).
- OpenAI API key with fine-tuning access.
- Replace placeholder credential IDs and model IDs with your actual values.
- Customize the product URL list and Bright Data dataset for your specific website and use case.

The workflow can be adapted for any e-commerce platform supported by Bright Data's scraping infrastructure.
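For concreteness, each line of the training file follows OpenAI's chat fine-tuning format: one JSON object per line, each containing a messages array. A sketch of a single record, with illustrative product values:

```jsonl
{"messages": [{"role": "system", "content": "You are a marketing assistant that writes compelling product descriptions."}, {"role": "user", "content": "Write a product description. Title: Trail Runner X2. Brand: Apex. Features: waterproof, 280g, carbon plate. Current description: Lightweight trail shoe."}, {"role": "assistant", "content": "Meet the Trail Runner X2 by Apex: a waterproof, carbon-plated trail shoe at just 280g, built to keep you fast and dry on any terrain."}]}
```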
by Guillaume Duvernay
## Description
This template provides a simple and powerful backend for adding speech-to-text capabilities to any application. It creates a dedicated webhook that receives an audio file, transcribes it using OpenAI's gpt-4o-mini model, and returns the clean text. To help you get started immediately, you'll find a complete, ready-to-use HTML code example right inside the workflow in a sticky note. This code creates a functional recording interface you can use for testing or as a foundation for your own design.

## Who is this for?
- **Developers:** Quickly add a transcription feature to your application by calling this webhook from your existing frontend or backend code.
- **No-code/low-code builders:** Embed a functional audio recorder and transcription service into your projects by using the example code found inside the workflow.
- **API enthusiasts:** A lean, practical example of how to use n8n to wrap a service like OpenAI into your own secure and scalable API endpoint.

## What problem does this solve?
- **Provides a ready-made API:** Instantly gives you a secure webhook to handle audio file uploads and transcription processing without any server setup.
- **Decouples frontend from backend:** Your application only needs to know about one simple webhook URL, allowing you to change the backend logic in n8n without touching your app's code.
- **Offers a clear implementation pattern:** The included example code provides a working demonstration of how to send an audio file from a browser and handle the response—a pattern you can replicate in any framework.

## How it works
This solution works by defining a clear API contract between your application (the client) and the n8n workflow (the backend).

The client-side technique:
1. Your application's interface records or selects an audio file.
2. It makes a POST request to the n8n webhook URL, sending the audio file as multipart/form-data.
3. It waits for the response from the webhook, parses the JSON body, and extracts the value of the Transcript key.

You can see this exact pattern in action in the example code provided in the workflow's sticky note.

The n8n workflow (backend):
1. The Webhook node catches the incoming POST request and grabs the audio file.
2. The HTTP Request node sends this file to the OpenAI API.
3. The Set node isolates the transcript text from the API's response.
4. The Respond to Webhook node sends a clean JSON object ({"Transcript": "your text here..."}) back to your application.

## Setup
1. Configure the n8n workflow:
   - In the Transcribe with OpenAI node, add your OpenAI API credentials.
   - Activate the workflow to enable the endpoint.
   - Click the "Copy" button on the Webhook node to get your unique production webhook URL.
2. Integrate with the frontend:
   - Inside the workflow, find the sticky note labeled "Example Frontend Code Below" and copy the complete HTML from the note below it.
   - ⚠️ Important: In the code you just copied, find the line const WEBHOOK_URL = 'YOUR WEBHOOK URL'; and replace the placeholder with the production webhook URL from n8n.
   - Save the code as an HTML file and open it in your browser to test.

## Taking it further
- **Save transcripts:** Add an Airtable or Google Sheets node to log every transcript that comes through the workflow.
- **Error handling:** Enhance the workflow to catch potential errors from the OpenAI API and respond with a clear error message.
- **Analyze the transcript:** Add a Language Model node after the transcription step to summarize the text, classify its sentiment, or extract key entities before sending the response.
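The full recording UI lives in the workflow's sticky note. As a minimal client-side sketch of the same contract described above, assuming you already have an audio Blob and your production webhook URL:

```javascript
// Minimal client-side sketch of the webhook contract described above.
// WEBHOOK_URL is your production webhook URL from the n8n Webhook node.
// The form field name "audio" is an assumption; match whatever the
// workflow's Webhook node expects.
const WEBHOOK_URL = 'YOUR WEBHOOK URL';

async function transcribe(audioBlob) {
  const formData = new FormData();
  formData.append('audio', audioBlob, 'recording.webm');

  const response = await fetch(WEBHOOK_URL, { method: 'POST', body: formData });
  if (!response.ok) throw new Error(`Webhook returned ${response.status}`);

  const body = await response.json();
  return body.Transcript; // the workflow responds with {"Transcript": "..."}
}
```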