by Christian Mendieta
## 🌟 Complete Workflow Overview

### The Full Blogging Automation Journey

This n8n workflow transforms a simple topic request into a fully published, SEO-optimized blog post through a seamless seven-phase process. Starting with your topic idea, the system automatically researches, creates, optimizes, edits, and publishes professional content to your Ghost CMS website.

Think of it as having an entire content team working 24/7: from initial research to final publication, everything is orchestrated by AI agents working in concert. No more writer's block, no more SEO guesswork, just high-quality content that ranks and engages your audience.

## 📋 Requirements & Setup

### What You Need to Get Started

- **OpenAI API Key** - for GPT models (content generation)
- **Anthropic API Key** - for Claude models, used as a failover
- **Brave Search API Key** - for comprehensive research
- **Ghost CMS Admin API Access** - for direct publishing
- **Existing Blog Content** - optional, but recommended for better research

## 🔧 Workflow Architecture & Process

### How the AI Agents Work Together

This n8n workflow implements a multi-agent system in which specialized AI agents collaborate through structured data exchange. The workflow uses HTTP Request nodes to communicate with the OpenAI and Anthropic APIs, integrates with Brave Search for real-time research, and connects to Ghost CMS via REST API calls. Each agent operates independently but shares data through n8n's workflow context, ensuring a seamless flow of information from research to publication. The system includes error handling, retry logic, and quality gates at each stage to maintain content standards.
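The failover behavior described above (Claude stepping in when a GPT call fails) boils down to a try/fall-back step. A minimal sketch, assuming injected caller functions rather than the workflow's actual HTTP Request nodes:

```javascript
// Minimal failover sketch: try the primary model caller, fall back to the
// secondary on any error. The caller functions are illustrative stand-ins
// for the workflow's OpenAI and Anthropic HTTP Request nodes.
function generateWithFailover(prompt, callPrimary, callFallback) {
  try {
    return callPrimary(prompt);
  } catch (primaryError) {
    // Primary provider failed (rate limit, outage, etc.); use the failover.
    return callFallback(prompt);
  }
}
```

In the real workflow this branching is handled through n8n's error outputs and retry settings; the sketch only shows the control flow.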
by Biznova
# AI Email Generator with Tone Selection

Made by Biznova on TikTok

## 📧 What This Does

This workflow creates a professional email generator that allows users to:
- Choose from multiple tones (Professional, Friendly, Formal, Casual)
- Input recipient details, subject, and context
- Generate a complete, well-formatted email using AI

## 👥 Who's It For

- Business professionals who need to write emails quickly
- Customer support teams responding to inquiries
- Sales teams crafting outreach messages
- Anyone who wants help writing professional emails

## 🎯 How It Works

1. A user fills out a form with email details and selects a tone
2. The workflow processes the input and creates an AI prompt
3. OpenAI generates a complete email based on the tone
4. The formatted email is displayed for the user to copy

## ⚙️ Setup Requirements

- OpenAI API key (get one at https://platform.openai.com)
- n8n instance (cloud or self-hosted)

## 🚀 How to Use

1. Set up your OpenAI credentials in the "OpenAI Chat Model" node
2. Activate the workflow
3. Share the form URL with users
4. Users fill out the form and receive a generated email instantly

## 🔧 Setup Steps

1. **OpenAI API Key**
   - Go to https://platform.openai.com/api-keys
   - Create a new API key
   - Add it to the "OpenAI Chat Model" node credentials
2. **Customize Tones (Optional)**
   - Edit the "Build AI Prompt" node
   - Modify the tone instructions to match your needs
   - Add new tones to the form dropdown
3. **Adjust AI Settings (Optional)** - in the "OpenAI Chat Model" node:
   - Change the model (gpt-4 for better quality)
   - Adjust the temperature (0.5-0.9)
   - Modify max tokens for longer or shorter emails
4. **Test the Workflow**
   - Click the "Test workflow" button
   - Fill out the form
   - Check the generated email
5. **Share the Form**
   - Activate the workflow
   - Copy the form URL
   - Share it with your team or customers
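The "Build AI Prompt" step combines the selected tone with the form fields into a single instruction for the model. A sketch of that step in n8n Code-node style; the field names and tone wording here are illustrative assumptions, not the template's actual keys:

```javascript
// Build an email-generation prompt from form input.
// Tone instructions are illustrative; edit them to match your needs.
const TONE_INSTRUCTIONS = {
  Professional: "Use clear, businesslike language and a courteous sign-off.",
  Friendly: "Use warm, approachable language and contractions.",
  Formal: "Use precise, traditional phrasing; avoid contractions.",
  Casual: "Use relaxed, conversational language.",
};

function buildEmailPrompt({ tone, recipient, subject, context }) {
  // Fall back to Professional if an unknown tone arrives from the form.
  const instruction = TONE_INSTRUCTIONS[tone] ?? TONE_INSTRUCTIONS.Professional;
  return [
    `Write a complete email in a ${tone.toLowerCase()} tone.`,
    instruction,
    `Recipient: ${recipient}`,
    `Subject: ${subject}`,
    `Context: ${context}`,
    "Return only the email body, ready to copy.",
  ].join("\n");
}
```

Adding a new tone to the form dropdown then only requires adding one entry to the instruction map.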
by Cheng Siong Chin
## How It Works

This workflow automates daily contract monitoring, analysis, and negotiation by retrieving contract data, applying AI-driven legal analysis, identifying potential issues and risks, coordinating multi-agent negotiation workflows, and updating strategic plans. It continuously monitors contracts, performs GPT-4-based contract analysis with detailed risk identification, and flags problematic clauses and unfavorable terms. The system routes identified items to a negotiation agent for structured strategic discussion, applies financial impact analysis to evaluate deal implications, determines negotiation outcomes, logs decisions and results, and updates CAPEX and OPEX planning systems accordingly.

Designed for legal departments, procurement teams, corporate counsel, and contract management offices, it supports automated contract risk assessment, informed negotiations, and data-driven strategic planning.

## Setup Steps

1. Configure the contract data source and set up a daily monitoring schedule.
2. Connect the OpenAI GPT-4 API.
3. Set up negotiation agent credentials and financial modeling system connections.
4. Define contract risk thresholds.

## Prerequisites

Contract management system or data source; OpenAI API key; negotiation agent access

## Use Cases

Corporate legal departments automating contract risk assessment across portfolios

## Customization

Adjust contract analysis criteria and risk thresholds

## Benefits

Eliminates manual contract review and identifies hidden risks automatically
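The routing described above (flagged clauses go to the negotiation agent, the rest are only logged) is essentially a threshold check. A sketch under assumed field names and an assumed 0-100 risk score; the actual threshold is whatever you configure during setup:

```javascript
// Route analyzed clauses by risk score. `riskScore` and the 0-100 scale
// are illustrative assumptions, not fields from the workflow itself.
function routeClauses(clauses, riskThreshold = 70) {
  const toNegotiation = []; // sent to the negotiation agent
  const logOnly = [];       // recorded without escalation
  for (const clause of clauses) {
    if (clause.riskScore >= riskThreshold) {
      toNegotiation.push(clause);
    } else {
      logOnly.push(clause);
    }
  }
  return { toNegotiation, logOnly };
}
```

Lowering the threshold escalates more clauses; raising it keeps the negotiation agent focused on the worst terms.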
by Ehsan
## How it works

This template creates a fully automated, hands-off pipeline for processing financial documents. It's perfect for small businesses, freelancers, or operations teams who want to stop manually entering invoice and receipt data.

When you drop a new image file (or several) into a specific Google Drive folder, this workflow automatically:

1. **Triggers** and downloads the new file.
2. **Performs OCR** on the file using a local AI model (Nanonets-OCR-s) to extract all the raw text.
3. **Cleans & structures** the raw text using a second local AI model (command-r7b). This step turns messy text into a clean, predictable JSON object.
4. **Saves the structured data** (such as InvoiceNumber, TotalAmount, and IssueDate) to a new record in your Airtable base.
5. **Moves the original file** to a "Done" or "Failed" folder to keep your inbox clean and organized.

## Requirements

- **Google Drive Account:** for triggering the workflow and storing files.
- **Airtable Account:** to store the final, structured data.
- **Ollama (Local AI):** this template is designed to run locally for free. You must have Ollama running and pull two models from your terminal:
  - `ollama pull benhaotang/Nanonets-OCR-s:F16`
  - `ollama pull command-r7b:7b-12-2024-q8_0`

## How to set up

Setup should take about 10-15 minutes. The workflow contains 7 sticky notes that will guide you step by step.

1. **Airtable:** Use the link in the main sticky note ([1]) to duplicate the Airtable base to your own account.
2. **Ollama:** Make sure you have pulled the two required models listed above.
3. **Credentials:** You will need to add three credentials in n8n:
   - Google Drive (OAuth2)
   - Airtable (Personal Access Token)
   - Ollama: create an "OpenAI API" credential, set the Base URL to your local server (e.g., http://localhost:11434/v1), and use `ollama` as the API key.
4. **Follow the notes:** Click through the workflow and follow the numbered sticky notes ([1] to [6]) to connect your credentials and select your folders for each node.
## How to customize the workflow

- **Use Cloud AI:** This template is flexible! You can easily swap the local Ollama models for a cloud provider (such as OpenAI's GPT-4o or Anthropic's Claude 3). Just change the credentials and model name in the two AI nodes (OpenAI Chat Model and Data Cleaner).
- **Add More Fields:** To extract more data (e.g., SupplierVATNumber), simply add the new field to the prompt in the Data Cleaner node and map it in the "AirTable - Create a record1" node.
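The "Cleans & structures" step depends on the model returning valid JSON, and local models sometimes wrap their output in Markdown code fences. A defensive parse like the sketch below (an illustration, not part of the template) keeps a malformed response from crashing the run and lets the file be routed to the "Failed" folder instead:

```javascript
// Parse model output into a structured record, tolerating ```json fences.
function parseInvoiceJson(rawText) {
  // Strip Markdown code fences if the model added them.
  const cleaned = rawText.replace(/```(?:json)?/gi, "").trim();
  try {
    return { ok: true, record: JSON.parse(cleaned) };
  } catch (err) {
    // Signal failure so the file can be moved to the "Failed" folder.
    return { ok: false, error: err.message };
  }
}
```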
by Cheng Siong Chin
## How It Works

Automates daily tenant analytics, churn risk evaluation, and proactive retention by unifying tenant data from multiple sources, applying GPT-4-based risk scoring, detecting early churn indicators, routing high-risk tenants to retention specialists, and initiating targeted engagement campaigns. It retrieves tenant profiles, service requests, and feedback data, performs GPT-4 analysis with detailed churn risk insights, assesses engagement levels, escalates high-risk tenants to dedicated communication teams, delivers personalized loyalty incentives and engagement emails, and updates CRM systems and retention dashboards.

Designed for property management companies and SaaS providers.

## Setup Steps

1. Configure tenant data sources.
2. Connect the OpenAI GPT-4 API for risk analysis and churn prediction.
3. Set up Gmail, Slack, and CRM credentials for communication and tracking.
4. Define churn risk thresholds, retention messaging templates, and reward programs.

## Prerequisites

Tenant/customer data source; service request system; feedback collection tool

## Use Cases

Property management companies automating tenant retention across portfolios

## Customization

Adjust churn risk algorithms and thresholds

## Benefits

Predicts churn before it happens and enables proactive retention
by Praneel S
# Automate blog updates via Discord with GitHub and a customizable AI chatbot

> ⚠️ Disclaimer: This template uses the n8n-nodes-discord-trigger community node, which means it works only in self-hosted n8n instances (works for both cloud and localhost).

## Who's it for

This workflow is designed for developers, bloggers, and technical writers who want a hands-free way to draft and publish blog posts directly from Discord. Instead of juggling multiple tools, you just send a message to your Discord bot, and the workflow creates a properly formatted Markdown file in your GitHub repo.

## How it works

1. Listens for new messages in a Discord channel or DM using the Discord Trigger (community node).
2. Passes your message to an AI chatbot model (Google Gemini, OpenAI GPT, or any other connector you prefer) to draft or format the content.
3. Uses GitHub nodes to check existing files, read repo contents, and create new .md posts in the specified directory.
4. Adds the correct timestamp with the Date & Time node.
5. Sends a confirmation reply back to Discord (regular Message node).
6. Guardrails ensure it only creates new Markdown files in the correct folder, without overwriting or editing existing content.

## How to set up

1. Import the workflow (or download the file BlogAutomationclean.json) into your self-hosted n8n.
2. Install the n8n-nodes-discord-trigger community node inside the n8n workflow dashboard (click the link for the setup steps).
3. Create credentials for:
   - the Discord bot trigger (community node)
   - the Discord bot send message (regular Discord Message node)
   - GitHub (personal access token with repo permissions)
   - your AI provider (Gemini, OpenAI, etc.)
4. Update the GitHub nodes with:
   - Owner → your GitHub username
   - Repo → your blog repo name
   - Path → target directory for new Markdown posts
5. Customize the AI agent's system prompt to match your tone and workflow. (The default prompt is included below.)
6. Test it in a private Discord channel before going live.
## Requirements

- Self-hosted n8n instance (works on both cloud and localhost)
- GitHub repository with write access
- Discord bot credentials (**both are required: the community node for the trigger and the regular node for sending**; see the reasoning below)
- AI model credentials (Gemini, OpenAI, or another supported provider)

## How to customize the workflow

- Swap the AI model node for any provider you like: Gemini, OpenAI, or even a local LLM.
- Adjust the prompt to enforce your blog style guide.
- Add additional steps like auto-publishing, Slack notifications, or Notion syncs.
- Modify the directory path or file naming rules to fit your project.

## Why both the community Discord trigger node and the regular Discord Message node?

In testing, the community Discord node cannot send large messages (it has a size limit), while the regular Discord Message node can send far more, which helps when viewing files. Feel free to use both the trigger and Send Message from the community node if you run into issues; it will still work flawlessly apart from the message size limit.

## Default Prompt

### Core Identity & Persona

You are the n8n Blog Master, a specialized AI agent. Your primary function is to assist your user with content management.

- **Your Mission:** Automate the process of creating, formatting, editing, and saving blog posts as Markdown files within the user's specified repository.
- **User Clarification:** The repository owner always refers to your **user** and, in the context of API calls, the **repository owner**. It is never part of a file path.
- **Personality:** Helpful, precise, security-conscious. Semi-casual and engaging, but never overly cheerful.

### Operational Zone & Constraints

- **Repository:** You may only interact with the repository **<insert-repo-name-here>**.
- **Owner:** The repository owner is **<insert-username-here>**.
- **Branch:** Always operate on the main branch.
- **Directory Access:** You can **only** write or edit files in the directory **<insert-directory-path-here>**. You are forbidden from interacting elsewhere.
- **File Permissions:** You may create new .md files. If a file already exists, notify the user and ask if they want to edit it. Editing is only allowed if the user explicitly confirms (e.g., "yes", "go ahead", "continue"). If the user confirms, proceed with editing.

### Available Tools & Usage Protocol

You have a limited but well-defined toolset. Always use the tools exactly as described.

**1. Date & Time Tool**
- Purpose: Always fetch the current date and time in IST (India Standard Time).
- Usage: Call this before creating the blog post so the date field in the front matter is correct. Do not use any other timezone.

**2. GitHub Nodes**
- **Create:** Used to create new files within **<insert-directory-path-here>**. Requires three parameters:
  - owner → always <insert-username-here>
  - repo → always <insert-repo-name-here>
  - path → must be <insert-directory-path-here>/<filename>.md
- **List:** Can list files inside **<insert-directory-path-here>**. Use it to check existing filenames before creating new ones.
- **Read:** Can fetch the contents of files if needed.
- **Edit:** Can update a specific file in **<insert-directory-path-here>**.
- Protocol: Before editing, explicitly ask: "Are you sure you want me to edit <filename>.md?" If the user responds with "yes", "continue", or a similar confirmation, proceed. If the user declines, do nothing.
- Constraint: Never attempt operations outside the specified directory.

**3. Data Storage & Message History**
- Purpose: Store temporary user confirmations and recall previous user messages as part of memory.
- Example: If you ask for edit confirmation and the user replies "yes" or "continue", record that in storage. If later in the same conversation the user says "go ahead" without repeating the filename, check both storage and previous messages to infer intent.
- Always reset the confirmation after the action is completed.
### Standard Workflow: Creating or Editing Blog Posts

**Activation:** Begin when the user says:
- "Draft a new post on…"
- "Make the body about…"
- "Use my rough notes…"
- "Modify it to include…"
- "Edit the file…"

**Information Gathering:**
- Ask for the Title (mandatory for new posts).
- Gather the topic, points, or raw notes from the user.
- If the user provides incomplete notes, expand them into a coherent, well-structured article.

**Drafting & Formatting:** Call the Date & Time tool, then format posts using the following template:

    ---
    title: "The Title Provided by the User"
    date: "YYYY-MM-DD"
    ---

    [Well-structured blog content goes here. Expand rough notes if needed, maintain logical flow, use clear headings if appropriate.]

    Thanks for Reading!

**Writing rules:**
- Tone: Neutral, informative, lightly conversational; not too cheerful.
- Flow: Use line breaks for readability.
- Expansion: If notes are provided, polish and structure them.
- Modification: If asked, revise while preserving the original meaning.

**File Naming:** Generate a short kebab-case filename from the title (e.g., "Making My Own Java CLI-Based RPG!" → java-cli-rpg.md).

**File Creation vs Editing:**
- If creating → use the GitHub Create tool.
- If the file already exists → ask the user if they want to edit it, and store their response in Data Storage.
- If the confirmation is yes → proceed with the GitHub Edit tool. If no → cancel the operation.

**Final Action:** Confirm success to the user after creation or editing.

### Advanced Error Handling: "Resource Not Found"

If the create_github_file tool fails with "Resource not found":
- **First failure:** Notify the user that the attempt failed, state the exact path used, and retry automatically once.
- **Second failure:** If it fails again, explain that standard creation isn't working, suggest it may be a permissions issue, and await user instructions before proceeding further.
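The kebab-case file naming rule in the prompt can be approximated deterministically. A sketch of the basic slugging step (note the agent may additionally drop filler words, as in the java-cli-rpg.md example, which plain slugging does not):

```javascript
// Turn a post title into a kebab-case Markdown filename.
function titleToFilename(title) {
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // runs of non-alphanumerics become one hyphen
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `${slug}.md`;
}
```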
## Contact and Changes

Feel free to contribute to this workflow. I do not own anything made here; everything was made by its respective owners. Shout-out to katerlol for making the Discord trigger node. Contact me here if you need any help!
by Recrutei Automações
## Overview: Recrutei ATS - Ultimate Internal AI Agent

This workflow transforms Slack into a powerful command center for recruitment. Using an AI Agent (LangChain) integrated with the Recrutei ATS API and MCP, your team can manage candidates, vacancies, tags, and much more directly through Slack messages.

### Key Features

- **Natural Language Processing:** Use GPT-4 to interpret complex requests like "Find candidates for the Python role and tag them as 'Senior'".
- **Candidate Management:** Create prospects, disqualify candidates, or move them between pipeline stages.
- **Vacancy Insights:** Add and read comments on vacancies directly from the chat.
- **Tagging System:** Create, list, and delete tags dynamically.

### Setup Instructions

1. **Slack Trigger:** Connect your Slack account and invite the bot to your desired channel.
2. **OpenAI:** Configure your API key. This agent uses GPT-4o-mini (or GPT-4) for high-reasoning capabilities.
3. **HTTP Request Tools:** Every "Tool" node (the pink nodes) needs your Recrutei API token. Replace the Authorization header with `Bearer YOUR_TOKEN` and update the base URL if necessary.
4. **Slack Post:** Ensure the bot has permission to write in the channel.
by Matt Chong
## Who is this for?

This workflow is for professionals, entrepreneurs, or anyone overwhelmed by a cluttered Gmail inbox. If you want to automatically archive low-priority emails using AI, this is the perfect hands-free solution.

## What does it solve?

Your inbox fills up with old, read emails that no longer need your attention, but manually archiving them takes time. This workflow uses AI to scan each email and intelligently decide whether it should be archived, needs a reply, or is spam. It helps you:

- Declutter your Gmail inbox automatically
- Identify important vs. unimportant emails
- Save time with smart email triage

## How it works

1. A scheduled trigger runs the workflow (you set how often).
2. It fetches all read emails older than 45 days from Gmail.
3. Each email is passed to an AI model (GPT-4) that classifies it as:
   - Actionable
   - Archive
4. If the AI recommends archiving, the workflow archives the email from your inbox.
5. All other emails are left untouched so you can review them as needed.

## How to set up

1. Connect your Gmail (OAuth2) and OpenAI API credentials.
2. Open the "Schedule Trigger" node and choose how often the workflow should run (e.g., daily, weekly).
3. Optionally adjust the Gmail filter in the "List Old Emails" node to change which emails are targeted.
4. Start the workflow and let AI clean up your inbox automatically.

## How to customize this workflow to your needs

- **Change the Gmail filter:** Edit the query in the Gmail node to include other conditions (e.g., older_than:30d, specific labels, unread only).
- **Update the AI prompt:** Modify the prompt in the Function node to detect more nuanced categories like "Meeting Invite" or "Newsletter."
- **Adjust schedule frequency:** Change how often the cleanup runs (e.g., hourly, daily).
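The "List Old Emails" filter uses standard Gmail search operators. A small sketch of composing that query string; the defaults match the behavior described above (read mail older than 45 days), and the `label` option is one example of an extra condition:

```javascript
// Compose a Gmail search query from filter options.
function buildGmailQuery({ olderThanDays = 45, read = true, label = null } = {}) {
  const parts = [];
  if (read) parts.push("is:read");            // only already-read mail
  parts.push(`older_than:${olderThanDays}d`); // Gmail relative-age operator
  if (label) parts.push(`label:${label}`);    // optional label restriction
  return parts.join(" ");
}
```

The resulting string goes straight into the Gmail node's search field.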
by Tomohiro Goto
## 🧠 How it works

This workflow enables automatic translation in Slack using n8n and OpenAI. When a user types /trans followed by text, n8n detects the language and replies with the translated version via Slack.

## ⚙️ Features

- Detects the input language automatically
- Translates between Japanese ↔ English using GPT-4o-mini (temperature 0.2 for stability)
- Sends a quick "Translating..." acknowledgement to avoid Slack's 3-second timeout
- Posts the translated text back to Slack (public or private, selectable)
- Supports overrides like `en: こんにちは` or `ja: hello`

## 💡 Perfect for

- Global teams communicating in Japanese and English
- Developers learning how to connect Slack + OpenAI + n8n

## 🧩 Notes

- Use the sticky notes inside the workflow for setup details.
- Duplicate and modify it to support mentions, group messages, or other language pairs.
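The `en:` / `ja:` overrides can be parsed off the slash-command text before anything is sent to the model. A sketch of that parsing step (the template's exact behavior may differ):

```javascript
// Parse an optional "en:" or "ja:" target-language override from /trans input.
function parseTransInput(text) {
  const match = text.match(/^(en|ja):\s*(.*)$/s);
  if (match) {
    return { target: match[1], text: match[2] };
  }
  // No override: let the model auto-detect the language.
  return { target: "auto", text: text.trim() };
}
```

Supporting another language pair then only means extending the `(en|ja)` alternation.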
by Dean Gallop
This workflow automates the end-to-end process of generating and publishing blog posts from live news headlines.

1. **Fetch Headlines** – Collects the latest top stories from Google News and GDELT, merges them, and removes duplicates.
2. **Headline Selection & Classification** – Picks top headlines, checks relevance, and applies style rules.
3. **Draft Generation** – Uses GPT to create an initial blog article in a natural, human tone.
4. **Tone & Expansion** – Refines the draft for clarity, length, and style (customized to your own writing voice).
5. **Image Generation** – Sends the article topic to Leonardo AI, waits for the image to finish rendering, and retrieves the final asset.
6. **Publish to WordPress** – Uploads the generated image, assigns alt text, creates a WordPress post with the article and image, and logs the publication to Google Sheets for tracking.

## Purpose

Designed for hands-off content automation, this workflow continuously produces SEO-ready blog posts enriched with AI-generated images and publishes them directly to WordPress. It's ideal for:

- Running an automated blog that reacts to trending news.
- Keeping websites updated with fresh, styled content without manual effort.
- Creating a repeatable content engine that combines research, writing, and media assets in one pipeline.

## Setup Instructions

1. **Add your credentials.** Go to Credentials in n8n and create:
   - OpenAI (for article generation)
   - Leonardo AI (for image generation)
   - WordPress (to publish posts)
   - (Optional) Google Sheets (to log published articles)

   Attach these credentials to the matching nodes in the workflow.
2. **Check the WordPress node.**
   - Update the WordPress site URL to your own blog.
   - Make sure the category, tags, and status (publish/draft) are set the way you want.
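The merge-and-dedupe step in "Fetch Headlines" can be sketched as normalizing titles and keeping the first occurrence. The normalization rule below (lowercase, punctuation stripped) is an assumption; the workflow may compare headlines differently:

```javascript
// Merge headline lists from several sources and drop duplicates
// by normalized title, keeping the first occurrence.
function dedupeHeadlines(...sources) {
  const seen = new Set();
  const result = [];
  for (const headline of sources.flat()) {
    const key = headline.title
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, " ") // ignore punctuation and casing
      .trim();
    if (!seen.has(key)) {
      seen.add(key);
      result.push(headline);
    }
  }
  return result;
}
```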
by Parhum Khoshbakht
## Who's it for

Product managers, customer success teams, and UX researchers who collect feedback in Google Sheets and want to automatically categorize and analyze it with sentiment and emotion insights. Ideal for teams processing dozens or hundreds of customer comments daily.

Read on Medium | Watch on YouTube

## What it does

This workflow automatically tags and analyzes customer feedback stored in Google Sheets using OpenAI's GPT-4. It reads unprocessed feedback entries, sends them in batches to OpenAI for intelligent tagging and sentiment analysis, then writes the results back to your sheet, complete with up to three relevant tags per feedback, sentiment scores (Very Negative to Very Positive), and emotional analysis.

The workflow uses batch processing to optimize API costs: instead of sending 10 separate requests, it sends one request containing all 10 feedbacks, reducing API calls by roughly 90%.

## How it works

1. Fetches allowed tags from a Tags sheet and new feedback entries (where Status is empty) from a Feedbacks sheet
2. Merges tags with feedbacks and processes them in batches of 10
3. Sends each batch to OpenAI with a structured prompt requesting tags, sentiment, and emotional analysis
4. Writes results back to Google Sheets with tags, sentiment, emotions, and timestamps

## Requirements

- Google Sheets account with OAuth2 credentials
- OpenAI API account (uses the GPT-4o-mini model)
- Google Sheet template with two sheets: "Tags" (single column) and "Feedbacks" (with columns: Feedbacks, Status, Tag 1-3, Sentiment, Primary/Secondary Emotion, AI Tag 1-2, Updated Date)

## Setup instructions

1. Duplicate the provided Google Sheet template
2. Connect your credentials in n8n:
   - Google Sheets OAuth2 API
   - OpenAI API
3. Update the sheet URLs in these nodes:
   - "Fetch Allowed Tags" - point to your Tags sheet
   - "Fetch New Feedbacks" - point to your Feedbacks sheet
   - "Update Google Sheet (Tagged)" - point to your Feedbacks sheet
4. Test manually first using the Manual Trigger, then enable the Schedule Trigger for automatic processing every 60 minutes

## How to customize

- **Adjust batch size:** Change the batch size in "Process Feedbacks in Batches" (default: 10) to process more or fewer feedbacks per API call
- **Modify tagging logic:** Edit the system prompt in "Tag Feedbacks with AI" to change how tags are selected or add custom sentiment categories
- **Change schedule:** Update the Schedule Trigger interval (default: 60 minutes) based on your feedback volume
- **Extend the workflow:** Add nodes to send results to Slack, Notion, or Airtable for real-time alerts on negative feedback
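The cost-saving batching described above comes down to chunking the feedback rows and assembling one numbered prompt per chunk. A sketch; the prompt wording is illustrative, not the template's actual system prompt:

```javascript
// Split feedbacks into fixed-size batches (default 10, matching the template).
function chunk(items, size = 10) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Build one numbered prompt covering a whole batch of feedbacks.
function buildBatchPrompt(feedbacks, allowedTags) {
  const list = feedbacks.map((f, i) => `${i + 1}. ${f}`).join("\n");
  return `Allowed tags: ${allowedTags.join(", ")}\n` +
    `For each numbered feedback, return up to 3 tags, a sentiment ` +
    `(Very Negative to Very Positive), and primary/secondary emotions.\n${list}`;
}
```

Numbering the feedbacks lets the model's response be matched back to the right sheet rows.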
by Robert Breen
This workflow automates the process of finding YouTube creator contact emails for outreach and partnerships. It combines Apify scrapers with OpenAI to deliver a clean list of emails from channel descriptions:

- **Step 1:** Search YouTube with Apify based on a keyword or topic
- **Step 2:** Scrape each channel for descriptions and metadata
- **Step 3:** Use OpenAI to extract and format valid email addresses into a structured JSON output

This is useful for influencer outreach, creator collaborations, UGC sourcing, or lead generation, all automated inside n8n.

## ⚙️ Setup Instructions

### 1️⃣ Set Up OpenAI Connection

1. Go to the OpenAI Platform
2. Navigate to OpenAI Billing
3. Add funds to your billing account
4. Copy your API key into the OpenAI credentials in n8n

### 2️⃣ Set Up Apify Connection

1. Go to the Apify Console and sign up or log in
2. Get your API token here: Apify API Keys
3. Set up the two scrapers in your Apify account:
   - YouTube Scraper by streamers
   - YouTube Scraper by apidojo
4. In n8n, create an HTTP Query Auth credential:
   - Query Key: token
   - Value: YOUR_APIFY_API_KEY
5. Attach this credential to both HTTP Request nodes (Search YouTube and Scrape Channels)

## 📬 Contact Information

Need help customizing this workflow or building similar automations?

- 📧 robert@ynteractive.com
- 🔗 Robert Breen
- 🌐 ynteractive.com
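Step 3 delegates email extraction to OpenAI, but the core of the task can be sketched with a plain regex pass over a channel description (an illustration; the workflow itself relies on the model for validation and formatting):

```javascript
// Pull unique candidate email addresses out of a channel description.
function extractEmails(description) {
  const pattern = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
  // Deduplicate while preserving first-seen order.
  return [...new Set(description.match(pattern) ?? [])];
}
```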