by Robert Breen
## 🧑‍💻 Description

This workflow automatically compares the version of your n8n instance with the latest release available. Keeping your n8n instance up-to-date is essential for security patches, bug fixes, performance improvements, and access to new automation features. By running this workflow, you'll know right away if your instance is behind and whether it's time to upgrade.

After the comparison, the workflow clearly shows whether your instance is up-to-date or outdated, along with the version numbers for both. This makes it easy to plan updates and keep your automation environment secure and reliable.

## ⚙️ Setup Instructions

### 1️⃣ Set Up n8n API Credentials
1. In your n8n instance, go to **Admin Panel → API**
2. Copy your API Key
3. In n8n → **Credentials → New → n8n API**
4. Paste the API Key and save it
5. Attach this credential to the n8n node (*Set up your n8n credentials*)

## ✅ How It Works
- **Get Most Recent n8n Version** → Fetches the latest release info from docs.n8n.io.
- **Extract Version + Clean Value** → Parses the version string for accuracy.
- **Get your n8n version** → Connects to your own n8n instance via API and retrieves the current version.
- **Compare** → Evaluates the difference and tells you if your instance is *current* or needs an *update*.

## 🎛️ Customization Guidance
- **Notifications**: Add an Email or Slack node to automatically notify your team when a new n8n update is available.
- **Scheduling**: Use a **Schedule Trigger** to run this workflow daily or weekly for ongoing monitoring.
- **Conditional Actions**: Extend the workflow to log version mismatches into Google Sheets, or even trigger upgrade playbooks.
- **Multi-Instance Tracking**: Duplicate the version-check step for multiple n8n environments (e.g., dev, staging, production).

## 💬 Example Output
- "Your instance (v1.25.0) is up-to-date with the latest release (v1.25.0)."
- "Your instance (v1.21.0) is behind the latest release (v1.25.0). Please update to get the latest bug fixes and features."

## 📬 Contact
Need help setting up API credentials or automating version checks across environments?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
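The Compare step boils down to a semantic-version comparison. A minimal sketch of that logic, as it might look in an n8n Code node (the helper name and sample values are illustrative, not taken from the template):

```javascript
// Compare two semver-style strings ("1.25.0" vs "1.21.0").
// Returns 1 if a > b, -1 if a < b, 0 if equal.
function compareVersions(a, b) {
  const pa = a.replace(/^v/, '').split('.').map(Number);
  const pb = b.replace(/^v/, '').split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] || 0, y = pb[i] || 0;
    if (x !== y) return x > y ? 1 : -1;
  }
  return 0;
}

const latest = '1.25.0';   // parsed from docs.n8n.io
const current = '1.21.0';  // retrieved from your instance's API
const status = compareVersions(current, latest) < 0
  ? `Your instance (v${current}) is behind the latest release (v${latest}).`
  : `Your instance (v${current}) is up-to-date with the latest release (v${latest}).`;
```

Numeric, per-segment comparison avoids the pitfall of comparing versions as plain strings (where "1.9.0" would sort after "1.25.0").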
by Dean Gallop
### Trigger & Topic Extraction
The workflow starts manually or from a chat/Telegram/webhook input. A "topic extractor" node scans the incoming text and cleans it (handles /topic … commands). If no topic is detected, it defaults to a sample news headline.

### Style & Structure Setup
A style guide node defines the blog's tone: practical, medium–low formality, clear sections, clean HTML only. It also enforces do's (citations, links, actionable steps) and don'ts (no clickbait, no low-quality sources).

### Research & Drafting
A GPT node generates a 1,700–1,800 word article following the style guide. Sections include: What happened, Why it matters, Opportunities/risks, Action plan, FAQ. The draft is then polished for clarity and flow.

### Quality Control
A word count guard checks that the article is at least 1,600 words. If too short, a GPT "expand draft" node deepens the Why it matters, Risks, and Action plan sections.

### Image Creation
The final article's title and content are used to generate an editorial-style image prompt. Leonardo AI creates a cinematic, text-free featured image suitable for Google News/Discover. The image is uploaded to WordPress, with proper ALT text generated by GPT.

### Publishing to WordPress
The final post (title, content, featured image) is automatically published. Sources are extracted from the article, compiled into a "Sources" section with clickable links. Posts are categorized and published immediately.
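The word count guard can be implemented as a small Code node. A minimal sketch, assuming the draft arrives as HTML (the threshold matches the 1,600-word minimum above; the function name is illustrative):

```javascript
// Strip HTML tags, count words, and flag drafts under the minimum.
const MIN_WORDS = 1600;

function wordCount(html) {
  const text = html.replace(/<[^>]*>/g, ' ');        // drop HTML tags
  return text.trim().split(/\s+/).filter(Boolean).length;
}

const draft = '<p>' + 'word '.repeat(1700).trim() + '</p>'; // stand-in article
const count = wordCount(draft);
const needsExpansion = count < MIN_WORDS; // true → route to the "expand draft" node
```

If `needsExpansion` is true, the item is routed back through the GPT "expand draft" branch; otherwise it continues to image creation.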
by Shohani
# Auto backup n8n workflows to GitLab with AI-generated documentation

This n8n template automatically backs up your workflows to a GitLab repository whenever they're updated or activated, and generates README documentation using AI. This workflow can also be added as a sub-workflow to any existing workflow to enable backup functionality.

## Who's it for
This template is perfect for n8n users who want to:
- Maintain version control of their workflows
- Create automatic backups in Git repositories
- Generate documentation for their workflows using AI
- Keep their workflow library organized and documented

## How it works
The workflow monitors n8n for workflow updates and activations, then automatically saves the workflow JSON to GitLab and generates a README file using OpenAI:
1. **Trigger Detection**: Uses the n8n Trigger to detect when workflows are updated or activated
2. **Workflow Retrieval**: Fetches the complete workflow data using the n8n API
3. **Repository Check**: Lists existing files in GitLab to determine if the workflow already exists
4. **Smart File Management**: Either creates a new file or updates an existing one based on the repository state
5. **AI Documentation**: Generates a README.md file using OpenAI's GPT model to document the workflow
6. **GitLab Storage**: Saves both the workflow JSON and README to organized folders in your GitLab repository

## Requirements
- **GitLab account** with API access and a repository named "all_projects"
- **n8n API credentials** for accessing workflow data
- **OpenAI API key** for generating documentation
- **GitLab personal access token** with repository write permissions

## How to set up
1. **Configure GitLab credentials**: Add your GitLab API credentials in the GitLab nodes
2. **Set up n8n API**: Configure your n8n API credentials for the workflow retrieval node
3. **Add OpenAI credentials**: Set up your OpenAI API key in the "Message a model" node
4. **Update repository details**: Modify the owner and repository name in the GitLab nodes to match your setup
5. **Test the workflow**: Save and activate the workflow to test the backup functionality

## How to customize the workflow
- **Change repository structure**: Modify the file path expressions to organize workflows differently
- **Customize commit messages**: Update the commit message templates in the GitLab nodes
- **Enhance AI documentation**: Modify the OpenAI prompt to generate different styles of documentation
- **Add file filtering**: Include conditions to back up only specific workflows
- **Extend triggers**: Add webhook or schedule triggers for different backup scenarios
- **Multiple repositories**: Duplicate the GitLab nodes to back up to multiple repositories simultaneously
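The "Smart File Management" decision can be sketched as a tiny Code-node helper: given the file list returned by the repository check, pick create vs update for each workflow. The folder layout and function name here are illustrative assumptions, not the template's exact expressions:

```javascript
// Decide whether a workflow's backup file should be created or updated,
// based on the paths already present in the GitLab repository.
function planCommit(workflowName, existingPaths) {
  const path = `workflows/${workflowName}/workflow.json`; // assumed layout
  return {
    path,
    action: existingPaths.includes(path) ? 'update' : 'create',
  };
}

const existing = ['workflows/Backup to GitLab/workflow.json'];
const plan1 = planCommit('Backup to GitLab', existing); // existing file → update
const plan2 = planCommit('New Workflow', existing);     // missing file → create
```

The returned `action` maps directly onto the GitLab node's create-file vs update-file operations.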
by Avkash Kakdiya
## How it works
This workflow starts when a user triggers a custom slash command in Slack. The workflow checks if a valid message (email address or HubSpot contact ID) was provided. Based on the input, it searches HubSpot for the contact either by email or by ID. Once the contact is found, the workflow formats the details into a clean, Slack-friendly message card and posts it back into the Slack channel.

## Step-by-step
1. **Start with Slack Slash Command**: The workflow is triggered whenever someone uses a custom slash command in Slack. It checks if the user actually entered something (email or ID). If nothing is entered, the workflow stops with an error.
2. **Parse Search Input**: The workflow cleans up the user's input and determines whether it's an email address or a HubSpot contact ID. This ensures the correct HubSpot search method is used.
3. **Search in HubSpot**: If the input is an email, the workflow searches HubSpot by email. If the input is an ID, it retrieves the contact directly using the HubSpot contact ID.
4. **Format Contact Info**: The retrieved HubSpot contact details (name, email, phone, company, deal stage, etc.) are formatted into a Slack-friendly message card.
5. **Send Contact Info to Slack**: Finally, the formatted contact information is posted back into the Slack channel, making it instantly visible to the user and team.

## Why use this?
- Quickly look up HubSpot contacts directly from Slack without switching tools.
- Works with both email addresses and HubSpot IDs.
- Provides a clean, structured contact card in Slack with key details.
- Saves time for sales and support teams by keeping workflows inside Slack.
- Runs automatically once set up — no extra clicks or manual searches.
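The Parse Search Input step amounts to classifying the slash-command text. A minimal sketch, assuming HubSpot contact IDs are numeric and using a deliberately simple email pattern (both are assumptions, not the template's exact rules):

```javascript
// Classify the slash-command text as an email, a HubSpot contact ID, or invalid.
function parseSearchInput(raw) {
  const input = (raw || '').trim();
  if (!input) return { type: 'error', value: null };                 // stop with an error
  if (/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input)) return { type: 'email', value: input };
  if (/^\d+$/.test(input)) return { type: 'id', value: input };      // numeric → contact ID
  return { type: 'error', value: input };
}
```

The `type` field then drives the branch: `email` goes to HubSpot's search-by-email, `id` to a direct contact lookup, and `error` stops the run.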
by Daniel
Spark your creativity instantly in any chat—turn a simple prompt like "heartbreak ballad" into original, full-length lyrics and a professional AI-generated music track, all without leaving your conversation.

## 📋 What This Template Does
This chat-triggered workflow harnesses AI to generate detailed, genre-matched song lyrics (at least 600 characters) from user messages, then queues them for music synthesis via Fal.ai's minimax-music model. It polls asynchronously until the track is ready, delivering lyrics and audio URL back in chat.
- Crafts original, structured lyrics with verses, choruses, and bridges using OpenAI
- Submits to Fal.ai for melody, instrumentation, and vocals aligned to the style
- Handles long-running generations with smart looping and status checks
- Returns the complete song package (lyrics + audio link) for seamless sharing

## 🔧 Prerequisites
- n8n account (self-hosted or cloud with chat integration enabled)
- OpenAI account with API access for GPT models
- Fal.ai account for AI music generation

## 🔑 Required Credentials
### OpenAI API Setup
1. Go to platform.openai.com → API keys (sidebar)
2. Click "Create new secret key" → Name it (e.g., "n8n Songwriter")
3. Copy the key and add it to n8n as the "OpenAI API" credential type
4. Test by sending a simple chat completion request

### Fal.ai HTTP Header Auth Setup
1. Sign up at fal.ai → Dashboard → API Keys
2. Generate a new API key → Copy it
3. In n8n, create an "HTTP Header Auth" credential: Name="Fal.ai", Header Name="Authorization", Header Value="Key [Your API Key]"
4. Test with a simple GET to their queue endpoint (e.g., /status)

## ⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance
2. Assign OpenAI API credentials to the "OpenAI Chat Model" node
3. Assign Fal.ai HTTP Header Auth to the "Generate Music Track", "Check Generation Status", and "Fetch Final Result" nodes
4. Activate the workflow—the chat trigger will appear in your n8n chat interface
5. Test by messaging: "Create an upbeat pop song about road trips"

## 🎯 Use Cases
- **Content Creators**: YouTubers generating custom jingles for videos on the fly, streamlining production from idea to audio export
- **Educators**: Music teachers using chat prompts to create era-specific folk tunes for classroom discussions, fostering interactive learning
- **Gift Personalization**: Friends crafting anniversary R&B tracks from shared memories via quick chats, delivering emotional audio surprises
- **Artist Brainstorming**: Songwriters prototyping hip-hop beats in real-time during sessions, accelerating collaboration and iteration

## ⚠️ Troubleshooting
- **Invalid JSON from AI Agent**: Ensure the system prompt stresses valid JSON; test the agent standalone with a sample query
- **Music Generation Fails (401/403)**: Verify the Fal.ai API key has minimax-music access; check usage quotas in the dashboard
- **Status Polling Loops Indefinitely**: Bump the wait time to 45-60s for complex tracks; inspect the fal.ai queue logs for bottlenecks
- **Lyrics Under 600 Characters**: Tweak the agent prompt to enforce fuller structures like [V1][V2][C]; verify output length in executions
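The "smart looping" between Check Generation Status and Fetch Final Result is a routing decision on the queue status. A minimal sketch, assuming fal.ai's queue status values (IN_QUEUE, IN_PROGRESS, COMPLETED); the routing labels are illustrative:

```javascript
// Decide the next step from a Fal.ai queue status response.
function routeStatus(status) {
  switch (status) {
    case 'COMPLETED':   return 'fetch-result';      // go to "Fetch Final Result"
    case 'IN_QUEUE':
    case 'IN_PROGRESS': return 'wait-and-recheck';  // loop back after a Wait node
    default:            return 'fail';              // surface the error in chat
  }
}
```

A Switch or If node implements the same three-way branch; raising the Wait node's delay (45-60s, as suggested above) reduces the number of status calls for long generations.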
by Robert Breen
Pull recent Instagram post media for any username, fetch the image binaries, and run automated visual analysis with OpenAI — all orchestrated inside n8n. This workflow uses a Google Sheet to supply target usernames, calls Apify's Instagram Profile Scraper to fetch recent posts, downloads the images, and passes them to an OpenAI vision-capable model for structured analysis. Results can then be logged, stored, or routed onward depending on your use case.

## 🧑‍💻 Who's it for
- Social media managers analyzing competitor or brand posts
- Marketing teams tracking visual trends and campaign content
- Researchers collecting structured insights from Instagram images

## ⚙️ How it works
1. **Google Sheets** – Supplies Instagram usernames (one per row).
2. **Apify Scraper** – Fetches latest posts (images and metadata).
3. **HTTP Request** – Downloads each image binary.
4. **OpenAI Vision Model** – Analyzes visuals and outputs structured summaries.
5. **Filter & Split Nodes** – Ensure only the right rows and posts are processed.

## 🔑 Setup Instructions
### 1) Connect Google Sheets (OAuth2)
1. Go to n8n → Credentials → New → Google Sheets (OAuth2)
2. Sign in with your Google account and grant access
3. In the Get Google Sheet node, select your spreadsheet + worksheet (must contain a User column with Instagram usernames)

### 2) Connect Apify (HTTP Query Auth)
1. Get your Apify API token at Apify Console → Integrations/API
2. In n8n → Credentials → New → HTTP Query Auth, add a query param token=<YOUR_APIFY_TOKEN>
3. In the Scrape Details node, select that credential and use the provided URL

### 3) Connect OpenAI (API Key)
1. Create an API key at OpenAI Platform
2. In n8n → Credentials → New → OpenAI API, paste your key
3. In the OpenAI Chat Model node, select your credential and choose a vision-capable model (gpt-4o-mini, gpt-4o, or gpt-5 if available)

## 🛠️ How to customize
- Change the Google Sheet schema (e.g., add campaign tags or notes).
- Adjust the OpenAI system prompt to refine what details are extracted (e.g., brand logos, colors, objects).
- Route results to Slack, Notion, or Airtable instead of storing only in Sheets.
- Apply filters (hashtags, captions, or timeframe) directly in the Apify scraper config.

## 📋 Requirements
- n8n (Cloud or self-hosted)
- Google Sheets account
- Apify account + API token
- OpenAI API key with a funded account

## 📬 Contact
Need help customizing this (e.g., filtering by campaign, sending reports by email, or formatting your PDF)?
📧 rbreen@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
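The Split step between the scraper and the image download turns one item per profile into one item per image post. A sketch of that flattening in a Code node; the field names (`latestPosts`, `displayUrl`, `type`) follow Apify's Instagram Profile Scraper output but should be treated as assumptions and checked against a sample run:

```javascript
// Flatten scraper output (one item per profile, each with a latestPosts
// array) into one n8n item per image post, skipping videos.
const profiles = [{
  username: 'acme_brand',
  latestPosts: [
    { type: 'Image', displayUrl: 'https://example.com/a.jpg', caption: 'Launch day' },
    { type: 'Video', displayUrl: 'https://example.com/b.mp4', caption: 'Teaser' },
  ],
}];

const imageItems = profiles.flatMap(p =>
  (p.latestPosts || [])
    .filter(post => post.type === 'Image') // keep only image posts
    .map(post => ({ json: { user: p.username, url: post.displayUrl, caption: post.caption } }))
);
```

Each resulting item's `url` feeds the HTTP Request node that downloads the binary for the vision model.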
by Snehasish Konger
## Target audience
Solo creators, PMs, and content teams who queue LinkedIn ideas in Google Sheets and want them posted on a fixed schedule with AI-generated copy.

## How it works
The workflow runs on a schedule (Mon/Wed/Fri at 09:30). It pulls the first Google Sheet row with Status = Pending, generates a LinkedIn-ready post from Post title using an OpenAI prompt, publishes to your LinkedIn profile, then updates the same row to Done and writes the final post back to the sheet.

## Prerequisites (use your own credentials)
- **Google Sheets (OAuth2)** with access to the target sheet
- **LinkedIn OAuth2** tied to the account that should post — set the **person** field to your profile's URN in the LinkedIn node
- **OpenAI API key** for the Chat Model node

Store secrets in n8n Credentials. Never hard-code keys in nodes.

## Google Sheet structure (exact columns)
Minimum required columns:
- id — unique integer/string used to update the same row later
- Status — allowed values: Pending or Done
- Post title — short prompt/topic for the AI model

Recommended columns:
- Output post — where the workflow writes the final text (use this header, or keep your existing Column 5)
- Hashtags (optional) — comma-separated list (the prompt can append these)
- Image URL (optional) — public URL; add an extra LinkedIn "Create Post" input if you post with media later
- Notes (optional) — extra hints for tone, audience, or CTA

Example header row:
id | Status | Post title | Hashtags | Image URL | Output post | Notes

Example rows (inputs → outputs):
1 | Pending | Why I moved from Zapier to n8n | #automation,#nocode | | | Focus on cost + flexibility
2 | Done | 5 lessons from building a rules engine | #product,#backend | | This is the final posted text... |

Resulting Output post (for row 1 after publish):
I switched from Zapier to n8n for three reasons: control, flexibility, and cost. Here's what changed in my stack and what I'd repeat if I had to do it again.
#automation #nocode

> If your sheet already has a column named Column 5, either rename it to Output post and update the mapping in the final Google Sheets Update node, or keep Column 5 as is and leave the node mapping untouched.

## Step-by-step
1. **Schedule Trigger**: Runs on Mon/Wed/Fri at 09:30.
2. **Fetch pending rows (Google Sheets → Get Rows)**: Reads the sheet and filters rows where Status = Pending.
3. **Limit**: Keeps only the first pending row so one post goes out per run.
4. **Writing the post (Agent + OpenAI Chat Model + Structured Output Parser)**: Uses Post title (and optional Notes/Hashtags) as input. The agent returns JSON with a post field. Model set to gpt-4o-mini by default.
5. **Create a post (LinkedIn)**: Publishes {{$json.output.post}} to the configured person (your profile URN).
6. **Update the sheet (Google Sheets → Update)**: Matches by id, sets Status = Done, and writes the generated text into Output post (or your existing output column).

## Customization
- **Schedule** — change days/time in the Schedule node. Consider your n8n server timezone.
- **Posts per run** — remove or raise the **Limit** to post more than one item.
- **Style and tone** — edit the Agent's system prompt. Add rules for line breaks, hashtags, or a closing CTA.
- **Hashtags handling** — parse the Hashtags column in the prompt so the model appends them cleanly.
- **Media posts** — add a branch that attaches Image URL (requires LinkedIn media upload endpoints).
- **Company Page** — switch the **person** field to an **organization** URN tied to your LinkedIn app scope.

## Troubleshooting
- **No post created**
  - Check the If/Limit path: is there any row with Status = Pending?
  - Confirm the sheet ID and tab name in the Google Sheets nodes.
- **Sheet not updating**
  - The Update node must receive the original id. If you changed field names, remap them.
  - Make sure id values are unique.
- **LinkedIn errors (403/401/404)**
  - Refresh LinkedIn OAuth2 in Credentials.
  - The person/organization URN is wrong or missing. Copy the exact URN from the LinkedIn node helper.
  - The app lacks the required permissions for posting.
- **Rate limit (429) or model errors**
  - Add a short Wait before retries.
  - Switch to a lighter model or simplify the prompt.
- **Post too long or broken formatting**
  - LinkedIn's hard limit is ~3,000 characters. Add a truncation step in Code or instruct the prompt to cap length.
  - Replace double line breaks in the LinkedIn node if you see odd spacing.
- **Timezone mismatch**
  - The Schedule node uses the n8n instance timezone. Adjust it, or move to a Cron with an explicit TZ if needed.

Need to post at a different cadence, or push two posts per day? Tweak the Schedule and Limit nodes and you're set.
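The suggested truncation step can be a few lines in a Code node. A minimal sketch, assuming you want to cut at a word boundary rather than mid-word (the 3,000-character ceiling is LinkedIn's; the helper itself is an illustrative addition):

```javascript
// Truncate a generated post to LinkedIn's ~3,000-character limit,
// cutting at the last word boundary and appending an ellipsis.
const LINKEDIN_MAX = 3000;

function truncatePost(text, max = LINKEDIN_MAX) {
  if (text.length <= max) return text;
  const cut = text.slice(0, max - 1);          // leave room for the ellipsis
  const lastSpace = cut.lastIndexOf(' ');
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + '…';
}
```

Place it between the agent and the LinkedIn node so `{{$json.output.post}}` is always within the limit.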
by Ehsan
## How it works
This template creates a fully automated "hands-off" pipeline for processing financial documents. It's perfect for small businesses, freelancers, or operations teams who want to stop manually entering invoice and receipt data.

When you drop a new image file (or several) into a specific Google Drive folder, this workflow automatically:
1. **Triggers** and downloads the new file.
2. **Performs OCR** on the file using a local AI model (Nanonets-OCR-s) to extract all the raw text.
3. **Cleans & Structures** the raw text using a second local AI model (command-r7b). This step turns messy text into a clean, predictable JSON object.
4. **Saves the structured data** (like InvoiceNumber, TotalAmount, IssueDate, etc.) to a new record in your Airtable base.
5. **Moves the original file** to a "Done" or "Failed" folder to keep your inbox clean and organized.

## Requirements
- **Google Drive account**: For triggering the workflow and storing files.
- **Airtable account**: To store the final, structured data.
- **Ollama (local AI)**: This template is designed to run locally for free. You must have Ollama running and pull two models from your terminal:
  - ollama pull benhaotang/Nanonets-OCR-s:F16
  - ollama pull command-r7b:7b-12-2024-q8_0

## How to set up
Setup should take about 10-15 minutes. The workflow contains 7 sticky notes that will guide you step-by-step.
1. **Airtable**: Use the link in the main sticky note ([1]) to duplicate the Airtable base to your own account.
2. **Ollama**: Make sure you have pulled the two required models listed above.
3. **Credentials**: You will need to add three credentials in n8n:
   - Your Google Drive (OAuth2) credentials.
   - Your Airtable (Personal Access Token) credentials.
   - Your Ollama credentials. (To do this, create an "OpenAI API" credential, set the Base URL to your local server, e.g., http://localhost:11434/v1, and use ollama as the API Key.)
4. **Follow the notes**: Click through the workflow and follow the numbered sticky notes ([1] to [6]) to connect your credentials and select your folders for each node.
## How to customize the workflow
- **Use cloud AI**: This template is flexible! You can easily swap the local Ollama models for a cloud provider (like OpenAI's GPT-4o or Anthropic's Claude 3). Just change the credentials and model name in the two AI nodes (OpenAI Chat Model and Data Cleaner).
- **Add more fields**: To extract more data (e.g., SupplierVATNumber), simply add the new field to the prompt in the Data Cleaner node and map it in the AirTable - Create a record1 node.
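Because the "Done"/"Failed" routing depends on the Data Cleaner producing valid structured output, a small validation step between the cleaner and Airtable keeps bad records out of the base. A sketch assuming the field names mentioned above; the guard itself is an illustrative addition, not part of the template:

```javascript
// Parse the Data Cleaner's JSON output and confirm required fields exist.
// Invalid output routes the original file to the "Failed" folder.
function parseInvoice(raw) {
  let data;
  try {
    data = JSON.parse(raw);
  } catch {
    return { ok: false, error: 'Model returned invalid JSON' };
  }
  const required = ['InvoiceNumber', 'TotalAmount', 'IssueDate'];
  const missing = required.filter(f => !(f in data));
  return missing.length
    ? { ok: false, error: `Missing fields: ${missing.join(', ')}` }
    : { ok: true, data };
}
```

An If node on `ok` then chooses between the Airtable create-record branch and the "Failed" move.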
by Roshan Ramani
# Replace BillyBot: Free Slack Employee Birthday & Anniversary Automation

## Who's it for
HR teams, team leaders, and operations managers looking to automate employee celebrations without expensive third-party tools like BillyBot. Perfect for startups to enterprise teams wanting to save $600-2,400+ annually while maintaining personalized, engaging employee recognition.

## What it does
This workflow automatically monitors your employee database daily and posts AI-generated, unique celebration messages to Slack for birthdays and work anniversaries. Unlike generic bots, it creates personalized messages that never repeat, rotating through 12 different styles and tones to keep celebrations fresh and authentic.

## How it works
1. **Daily check**: Runs every morning at 9 AM to scan your employee Google Sheet
2. **Smart filtering**: Matches today's date against employee birthdays and joining dates
3. **Data aggregation**: Collects all celebrating employees into a single payload
4. **AI generation**: Google Gemini creates unique, heartfelt messages with proper Slack formatting
5. **Auto-post**: Sends personalized celebrations directly to your chosen Slack channel

The AI ensures no two messages feel templated, calculating years of service for anniversaries and adapting tone based on tenure length.

## Requirements
- **Google Sheets** with employee data (columns: NO, Name, Email, Date of Birth, Joining Date in YYYY-MM-DD format)
- **Slack workspace** with bot permissions to post messages
- **Google Gemini API key** (free tier included)
- **n8n Cloud** ($20/month) or self-hosted n8n (free)

## Cost comparison: Save $600-2,400+ per year
BillyBot pricing ($1 per employee/month):
- 50 employees = $600/year
- 100 employees = $1,200/year
- 200 employees = $2,400/year

This solution ($0-20/month, unlimited employees):
- Google Gemini API: FREE
- Google Sheets API: FREE
- Slack API: FREE
- n8n: $20/month (Cloud) or $0 (self-hosted)

Your savings: 95-100% cost reduction regardless of team size.
## Setup instructions
1. **Create Google Sheet**: Add the columns NO, Name, Email, Date of Birth, Joining Date (ensure dates are in YYYY-MM-DD format)
2. **Connect Google Sheets**: Authenticate your Google account in the "Get row(s) in sheet" node
3. **Set up Slack**: Create a Slack bot with chat:write permission and add it to your celebration channel
4. **Configure Gemini**: Add your Google Gemini API key to the "Google Gemini Chat Model" node
5. **Adjust schedule**: Change the trigger time in the "Schedule Trigger" node (default: 9 AM daily)
6. **Select channel**: Update the Slack channel in the "Send a message" node to your desired celebration channel
7. **Test**: Run the workflow manually to verify messages post correctly

## Customization options
- **Change celebration time**: Modify the Schedule Trigger to any hour (e.g., 8 AM for morning celebrations)
- **Adjust message tone**: Edit the AI Agent system prompt to match your company culture (formal, casual, playful)
- **Multi-channel posting**: Duplicate the Slack node to post to multiple channels (e.g., company-wide + team-specific)
- **Add upcoming reminders**: Modify the IF node to check for celebrations within 7 days
- **Include photos**: Extend the workflow to pull employee photos from your HR system
- **Custom emoji styles**: Update the AI prompt to use your organization's custom Slack emojis

## Key features
- 12 rotating message styles prevent repetition
- Automatic tenure calculation for work anniversaries
- Culturally inclusive and professional tone
- Mobile-optimized message length (1-3 lines)
- Slack markdown formatting for visual appeal
- Scales infinitely without additional cost

Note: Ensure your Google Sheet date formats are consistent (YYYY-MM-DD) for accurate date matching. The workflow processes dates in MM-DD format to match across years automatically.
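The MM-DD matching and tenure calculation described above can be sketched as a single Code-node helper. Column names mirror the sheet layout; the function itself is illustrative:

```javascript
// Match today's MM-DD against YYYY-MM-DD sheet dates and compute tenure.
function findCelebrations(rows, today = new Date()) {
  const mmdd = today.toISOString().slice(5, 10); // e.g. "06-15"
  const year = today.getUTCFullYear();
  return rows.flatMap(r => {
    const events = [];
    if ((r['Date of Birth'] || '').slice(5) === mmdd) {
      events.push({ name: r.Name, type: 'birthday' });
    }
    if ((r['Joining Date'] || '').slice(5) === mmdd) {
      const years = year - Number(r['Joining Date'].slice(0, 4));
      events.push({ name: r.Name, type: 'anniversary', years });
    }
    return events;
  });
}
```

The resulting array is the "single payload" handed to Gemini, with `years` letting the prompt adapt tone to tenure. Note this compares strings, so it only works if the sheet dates really are YYYY-MM-DD, which is why the consistency note above matters.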
by Yar Malik (Asfandyar)
## Who's it for
This template is for anyone who wants to manage tasks, deadlines, and updates directly from WhatsApp. It's especially useful for teams, freelancers, and small businesses that track their work in Google Sheets and want quick AI-powered assistance without opening spreadsheets.

## How it works / What it does
This workflow turns WhatsApp into your personal task manager. When a user sends a message, the AI agent (powered by OpenAI) interprets the request, retrieves or updates task information from Google Sheets, and sends a concise response back via WhatsApp. The workflow can highlight overdue tasks and upcoming deadlines, and provide actionable suggestions.

## How to set up
1. Connect your WhatsApp API account in n8n.
2. Add your OpenAI credentials.
3. Link your Google Sheets document where tasks are stored.
4. Deploy the workflow and test by sending a message to your WhatsApp number.

## Requirements
- WhatsApp Business API account connected to n8n
- OpenAI account for AI responses
- Google Sheets with task data

## How to customize the workflow
- Adjust the AI prompt to change tone or instructions.
- Modify the Google Sheets fields (Task, Status, Due Date, Notes) to match your structure.
- Add conditions or filters to customize which tasks get highlighted.
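The overdue/upcoming highlighting can be done deterministically before the AI step, so the agent only has to phrase the result. A sketch using the field names listed above; the 3-day "upcoming" window is an assumed threshold, not part of the template:

```javascript
// Bucket sheet rows into overdue / upcoming / later, skipping done tasks.
function classifyTasks(rows, today = new Date()) {
  const soonMs = 3 * 24 * 60 * 60 * 1000; // "upcoming" = due within 3 days
  return rows
    .filter(r => r.Status !== 'Done' && r['Due Date'])
    .map(r => {
      const delta = new Date(r['Due Date']) - today;
      const bucket = delta < 0 ? 'overdue' : delta <= soonMs ? 'upcoming' : 'later';
      return { task: r.Task, bucket };
    });
}
```

Feeding the pre-bucketed list into the prompt keeps date arithmetic out of the model, which is less error-prone than asking the LLM to compare dates.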
by Guillaume Duvernay
Create a Telegram bot that answers questions using Retrieval-Augmented Generation (RAG) powered by Lookio and an LLM agent (GPT-4.1). This template handles both text and voice messages (voice transcribed via a Mistral model by default), routes queries through an agent that can call a Lookio tool to fetch knowledge from your uploaded documents, and returns concise, Telegram-friendly replies. A security switch lets you restrict use to a single Telegram username for private testing, or remove the filter to make the bot public.

## Who is this for?
- **Internal teams & knowledge workers**: Turn your internal docs into an interactive Telegram assistant for quick knowledge lookups.
- **Support & ops**: Provide on-demand answers from your internal knowledge base without exposing full documentation.
- **Developers & automation engineers**: Use this as a reference for integrating agents, transcription, and RAG inside n8n.
- **No-code builders**: Quickly deploy a chat interface that uses Lookio for accurate, source-backed answers.

## What it does / What problem does this solve?
- **Provides accurate, source-backed answers**: Routes queries to Lookio so replies are grounded in your documents instead of generic web knowledge.
- **Handles voice & text transparently**: Accepts Telegram voice messages, transcribes them (via the Mistral API node by default), and treats transcripts the same as typed text.
- **Simple agent + tool architecture**: Uses a LangChain AI Agent with a Query knowledge base tool to separate reasoning from retrieval.
- **Privacy control**: Includes a "Myself?" filter to restrict access to a specific Telegram username for safe testing.

## How it works
1. **Trigger**: Telegram Trigger receives incoming messages (text or voice).
2. **Route**: Message Router detects voice vs text. Voice files are fetched with Get Audio File.
3. **Transcribe**: Mistral transcribe receives the audio file and returns a transcript; the transcript or text is normalized into preset_user_message and consolidated in Consolidate user message.
4. **Agent**: AI Agent (GPT-4.1-mini configured) runs with a system prompt that instructs it to call the Query knowledge base tool when domain knowledge is required.
5. **Respond**: The agent output is sent back to the user via Telegram answer.

## How to set up
1. **Create a Lookio assistant**: Sign up at https://www.lookio.app/, upload documents, and create an assistant.
2. **Add credentials in n8n**: Configure Telegram API, OpenAI (or your LLM provider), and Mistral Cloud credentials in n8n.
3. **Configure the Lookio tool**: In the Query knowledge base node, replace the <your-lookio-api-key> and <your-assistant-id> placeholders with your Lookio API Key and Assistant ID.
4. **Set Telegram privacy (optional)**: Edit the "Myself?" If node and replace <Replace with your Telegram username> with your username to restrict access. Remove the node to allow public use.
5. **Adjust transcription (optional)**: Swap the Mistral transcribe HTTP node for another provider (OpenAI, Whisper, etc.) and update its prompt to include your jargon list.
6. **Connect the LLM**: In the OpenAI Chat Model node, add your OpenAI API key (or configure another LLM node) and ensure the AI Agent node references this model.
7. **Activate the workflow**: Activate the workflow and test by messaging your bot in Telegram.

## Requirements
- An n8n instance (cloud or self-hosted)
- A Telegram Bot token added in n8n credentials
- A Lookio account, API Key, and Assistant ID
- An LLM provider account (OpenAI or equivalent) for the OpenAI Chat Model node
- A Mistral API key (or other transcription provider) for voice transcription

## How to take it further
- **Add provenance & sources**: Parse Lookio responses and include short citations or source links in the agent replies.
- **Rich replies**: Use Telegram media (images, files) or inline keyboards to create follow-up actions (open docs, request feedback, escalate to humans).
- **Multi-user access control**: Replace the single-username filter with a list or role-based access system (Airtable or Google Sheets lookup) to allow multiple trusted users.
- **Logging & analytics**: Save queries and agent responses to Airtable or Google Sheets for monitoring, quality checks, and prompt improvement.
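The Message Router's voice-vs-text decision maps directly onto the Telegram Bot API payload shape (`message.voice` vs `message.text`). A minimal sketch of that routing; the branch labels and the `preset_user_message` normalization mirror the step names above, while the helper itself is illustrative:

```javascript
// Route a Telegram update to the voice or text branch.
function routeMessage(update) {
  const msg = update.message || {};
  if (msg.voice) return { branch: 'voice', fileId: msg.voice.file_id }; // → Get Audio File
  if (msg.text) return { branch: 'text', preset_user_message: msg.text };
  return { branch: 'unsupported' }; // stickers, photos, etc.
}
```

On the voice branch, `fileId` is what Get Audio File downloads for transcription; both branches converge on `preset_user_message` before the agent runs.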
by Zakwan
Creating high-quality, SEO-friendly blog posts consistently can be time-consuming. This template helps content creators, bloggers, SEO specialists, and agencies fully automate their blogging workflow. By combining AI content generation (GPT), Google Sheets for keyword management, and WordPress for direct publishing, this workflow saves hours of manual work and ensures professional results.

## ⚡ Use Cases
- Automate content creation for niche blogs.
- Generate SEO-optimized articles from keyword lists.
- Keep a consistent publishing schedule without manual effort.
- Scale content production for agencies or affiliate sites.

## ✅ Pre-requirements
Before using this template, you will need:
- Google Sheets API credentials (for managing topics & keywords).
- AI API key (e.g., OpenAI, LM Studio, Ollama, or any connected model).
- WordPress credentials with API access.
- Basic understanding of the n8n workflow editor.

## 🔧 Step-by-Step Setup
1. **Connect Google Sheets**: Add your API credentials. Use the sheet to store keywords, titles, and categories.
2. **Integrate AI model (GPT or others)**: Insert your API key into the AI node. Customize the SEO writing prompt for a Rank Math 90+ score.
3. **Content processing**: The workflow fetches one keyword at a time. The AI generates a 1200–1500+ word SEO blog post, and the output is cleaned into proper HTML.
4. **Publish to WordPress**: Configure the WordPress node with your site credentials. Automatically post as Draft or Published.
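The "cleaned into proper HTML" step usually means stripping the markdown code fences that chat models often wrap around HTML output before it reaches the WordPress node. A minimal illustrative sketch of that cleanup (the template's actual cleanup logic may differ):

```javascript
// Strip leading/trailing markdown fences so WordPress receives clean HTML.
const FENCE = '`'.repeat(3); // the three-backtick fence marker

function cleanHtml(raw) {
  return raw
    .replace(new RegExp('^' + FENCE + '(?:html)?\\s*', 'i'), '') // leading fence
    .replace(new RegExp('\\s*' + FENCE + '\\s*$'), '')           // trailing fence
    .trim();
}
```

Run it in a Code node between the AI node and the WordPress node so drafts never publish with stray fence characters.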