by Parag Javale
The AI Blog Creator with Gemini, Replicate Image, Supabase Publishing & Slack is a fully automated content generation and publishing workflow designed for modern marketing and SaaS teams. It automatically fetches the latest industry trends, generates SEO-optimized blogs using AI, creates a relevant featured image, publishes the post to your CMS (e.g., Supabase or a custom API), and notifies your team via Slack — all on a daily schedule. The workflow connects multiple services — NewsAPI, Google Gemini, Replicate, Supabase, and Slack — into one intelligent content pipeline that runs hands-free once set up.

✨ Features
- 📰 Fetch Trending Topics — pulls the latest news or updates from your selected industry (via NewsAPI).
- 🤖 AI Topic Generation — Gemini suggests trending blog topics relevant to AI, SaaS, and automation.
- 📝 AI Blog Authoring — Gemini then writes a full 1200–1500 word SEO-optimized article in Markdown.
- 🧹 Smart JSON Cleaner — a resilient Code node parses Gemini's output and ensures clean, structured data.
- 🖼️ Auto-Generated Image — Replicate's Ideogram model creates a blog cover image based on the content prompt.
- 🌐 Automatic Publishing — posts are automatically published to your Supabase or custom backend.
- 💬 Slack Notification — notifies your team with blog details and a live URL.
- ⏰ Fully Scheduled — runs automatically every day at your preferred time (default 10 AM IST).
⚙️ Workflow Structure

| Step | Node | Purpose |
| ---- | ---- | ------- |
| 1 | Schedule Trigger | Runs daily at 10 AM |
| 2 | Fetch Industry Trends (NewsAPI) | Retrieves trending articles |
| 3 | Message a model (Gemini) | Generates trending topic ideas |
| 4 | Message a model1 (Gemini) | Writes full SEO blog content |
| 5 | Code in JavaScript | Cleans, validates, and normalizes Gemini output |
| 6 | HTTP Request (Replicate) | Generates an image using Ideogram |
| 7 | HTTP Request1 | Retrieves the generated image URL |
| 8 | Wait + If | Polls until image generation succeeds |
| 9 | Edit Fields | Assembles blog fields into the final JSON |
| 10 | Publish to Supabase | Posts to your CMS |
| 11 | Slack Notification | Sends a message to your Slack channel |

🔧 Setup Instructions
1. Import the workflow into n8n and enable it.
2. Create the following credentials:
   - NewsAPI (Query Auth) — from https://newsapi.org
   - Google Gemini (PaLM API) — use your Gemini API key
   - Replicate (Bearer Auth) — API key from https://replicate.com/account
   - Supabase (Header Auth) — endpoint to your /functions/v1/blog-api (set your key in the header)
   - Slack API — create a Slack app token with the chat:write permission
3. Edit the NewsAPI URL query parameter to match your industry (e.g., q=AI automation SaaS).
4. Update the Supabase publish URL to your project endpoint if needed.
5. Adjust the Slack channel name under "Slack Notification".
6. (Optional) Change the Schedule Trigger time to match your timezone.

💡 Notes & Tips
- The Code in JavaScript node is robust against malformed or extra text in Gemini output — it sanitizes Markdown and reconstructs clean JSON safely.
- You can replace Supabase with any CMS or webhook endpoint by editing the "Publish to Supabase" node.
- The Replicate model used is ideogram-ai/ideogram-v3-turbo — you can swap it for Stable Diffusion or another model for different aesthetics.
- Use the slug field in your blog URLs for SEO-friendly links.
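A resilient Gemini-output cleaner of the kind described in the tips above might look like the sketch below. The field names (title, slug, content) are assumptions for illustration — the actual node in the template may normalize different keys.

```javascript
// Sketch of a "Smart JSON Cleaner" Code node. Gemini often wraps JSON in
// Markdown code fences or surrounds it with commentary; this strips the
// noise, extracts the outermost JSON object, and normalizes a few fields.
function cleanGeminiJson(raw) {
  // Remove triple-backtick fences (with or without a "json" language tag)
  let text = raw.replace(/`{3}(?:json)?/gi, '').trim();
  // Keep only the outermost JSON object if extra prose surrounds it
  const start = text.indexOf('{');
  const end = text.lastIndexOf('}');
  if (start === -1 || end === -1) throw new Error('No JSON object found in model output');
  const data = JSON.parse(text.slice(start, end + 1));
  return {
    title: (data.title || 'Untitled').trim(),
    // Build an SEO-friendly slug from the title when the model omits one
    slug: (data.slug || data.title || 'untitled')
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, '-')
      .replace(/^-|-$/g, ''),
    content: data.content || '',
  };
}

// Example: fenced output with chatter around it still parses cleanly
const fence = '`'.repeat(3);
const out = cleanGeminiJson(
  `Here you go!\n${fence}json\n{"title":"AI Trends 2025","content":"# Intro"}\n${fence}`
);
```

In n8n this would run inside the Code node, returning `out` as the item JSON for the downstream Replicate and Supabase nodes.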
- Test with one manual execution before activating scheduled runs.
- If the Slack notification fails, verify the token scopes and channel permissions.

🧩 Tags
#AI #Automation #ContentMarketing #BlogGenerator #n8n #Supabase #Gemini #Replicate #Slack #WorkflowAutomation
by Jaruphat J.
⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package, pdfseparate from poppler-utils, and custom command execution. Make sure to install all required dependencies locally.

Who is this for?
This template is designed for developers, back-office teams, and automation builders (especially in Thailand or Thai-speaking environments) who need to process multi-file, multi-page Thai PDFs and automatically export structured results to Google Sheets. It is ideal for:
- Government and enterprise document processing
- Thai-language invoices, memos, and official letters
- AI-powered automation pipelines that require Thai OCR

What problem does this solve?
Typhoon OCR is one of the most accurate OCR tools for Thai text, but integrating it into an end-to-end workflow usually requires manual scripting and handling of multi-page PDFs. This template solves that by:
- Splitting PDFs into individual pages
- Running Typhoon OCR on each page
- Aggregating text back into a single file
- Using AI to extract structured fields
- Automatically saving structured data into Google Sheets

What this workflow does
- **Trigger:** Manual execution or any n8n trigger node
- **Load Files:** Read PDFs from a local doc/multipage folder
- **Split PDF Pages:** Use pdfinfo and pdfseparate to break PDFs into pages
- **Typhoon OCR:** Run OCR on each page via Execute Command
- **Aggregate:** Combine per-page OCR text
- **LLM Extraction:** Use AI (e.g., GPT-4, OpenRouter) to extract fields into JSON
- **Parse JSON:** Convert structured JSON into a tabular format
- **Google Sheets:** Append one row per file into a Google Sheet
- **Cleanup:** Delete temp split pages and move processed PDFs into a Completed folder

Setup
1. Install requirements:
   - Python 3.10+
   - typhoon-ocr: pip install typhoon-ocr
   - poppler-utils: provides pdfinfo and pdfseparate
   - qpdf: backup page counting
2. Create folders:
   - /doc/multipage for incoming files
   - /doc/tmp for split pages
   - /doc/multipage/Completed for processed files
3. Google Sheet: create a Google Sheet with column headers like:
   book_id | date | subject | to | attach | detail | signed_by | signed_by2 | contact_phone | contact_email | contact_fax | download_url
4. API keys: export your TYPHOON_OCR_API_KEY and OPENAI_API_KEY (or use credentials in n8n)

How to customize this workflow
- Replace the LLM provider in the "Structure Text to JSON with LLM" node (supports OpenRouter, OpenAI, etc.)
- Adjust the JSON schema and parsing logic to match your documents
- Update the Google Sheets mapping to fit your desired fields
- Add trigger nodes (Dropbox, Google Drive, Webhook) to automate file ingestion

About Typhoon OCR
Typhoon is a multilingual LLM and NLP toolkit optimized for Thai. It includes typhoon-ocr, a Python OCR package designed for Thai-centric documents. It is open-source, highly accurate, and works well in automation pipelines — perfect for government paperwork, PDF reports, and multi-language documents in Southeast Asia.

Deployment option
You can also deploy this workflow easily using the Docker image provided in my GitHub repository: https://github.com/Jaruphat/n8n-ffmpeg-typhoon-ollama
This Docker setup bundles n8n, ffmpeg, Typhoon OCR, and Ollama, so you can run the whole environment without installing each dependency manually.
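The "Parse JSON" step — flattening the LLM's structured output into one sheet row — could be sketched in an n8n Code node as below. The header names follow the sample columns listed above; the sample values are purely illustrative.

```javascript
// Flatten the LLM's structured JSON into one row matching the sheet headers.
const HEADERS = [
  'book_id', 'date', 'subject', 'to', 'attach', 'detail',
  'signed_by', 'signed_by2', 'contact_phone', 'contact_email',
  'contact_fax', 'download_url',
];

function toSheetRow(parsed) {
  // Missing fields become empty cells so the row always aligns with headers
  return HEADERS.map(h => (parsed[h] ?? '').toString().trim());
}

// Hypothetical parsed output for a Thai official letter
const row = toSheetRow({
  book_id: 'MOI 0123/2567',
  subject: 'Budget request',
  to: 'Provincial Office',
});
```

In the workflow, the Google Sheets node would then append this row for each processed file.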
by AFK Crypto
Try It Out!
The AI Investment Research Assistant (Discord Summary Bot) transforms your Discord server into a professional-grade, AI-driven crypto intelligence center. Running automatically every morning, it gathers real-time news, sentiment, and market data from multiple trusted sources — including NewsAPI, CryptoCompare, and CoinGecko — covering the most influential digital assets, such as BTC, ETH, SOL, BNB, and ADA. An AI Research Analyst agent then processes this data using advanced reasoning and summarization to deliver a structured Market Intelligence Briefing. Each report distills key market events, sentiment shifts, price movements, and analyst-grade insights into a visually clean, actionable message posted directly to your Discord channel. Whether you're a fund manager, community owner, or analyst, this workflow keeps you informed about market drivers — without manually browsing dozens of news sites or data dashboards.

Detailed Use Cases
- **Crypto research teams:** automate daily market briefings across key assets.
- **Investment communities:** provide daily insights and sentiment overviews directly on Discord.
- **Trading desks:** quickly review summarized market shifts and performance leaders.
- **DAOs or fund analysts:** centralize institutional-style crypto intelligence in your server.

How It Works
1. Daily Trigger (Schedule node) – activates each morning to begin data collection.
2. News Aggregation Layer – uses NewsAPI (and optionally CryptoPanic or GDELT) to fetch the latest crypto headlines and event coverage.
3. Market & Sentiment Fetch – collects market metrics via CoinGecko or CryptoCompare, including:
   - 24-hour price change
   - Market cap trend
   - Social sentiment or Fear & Greed index
4. AI Research Analyst (LLM agent) – processes and synthesizes all data into a cohesive insight report containing:
   - 🧠 Executive Summary
   - 📊 Top Gainers & Losers
   - 💬 Sentiment Overview
   - 🔍 Analyst Take / Actionable Insight
5. Formatting Layer (Code node) – converts the analysis into a Discord-ready structure.
6. Discord Posting node – publishes the final Market Intelligence Briefing to a specified Discord channel.

Setup and Customization
1. Import this workflow into your n8n workspace.
2. Configure credentials:
   - NewsAPI key – for crypto and blockchain news.
   - CoinGecko / CryptoCompare API key – for real-time asset data.
   - LLM credential – OpenAI, Gemini, or Anthropic.
   - Discord webhook URL or bot token – to post updates.
3. Customize the tracked assets in the News and Market nodes (BTC, ETH, SOL, BNB, ADA, etc.).
4. Set your local timezone for report delivery.
5. Deploy and activate — your server will receive automated morning briefings.

Output Format
Each daily report includes:

📰 AI Market Intelligence Briefing
📅 Date: October 16, 2025
💰 Top Movers: BTC +2.3%, SOL +1.9%, ETH -0.8%
💬 Sentiment: Moderately Bullish
🔍 Analyst Take: Accumulation signals forming in mid-cap layer-1s.
📈 Outlook: Positive bias, with ETH showing strong support near $2,400.

Compact yet rich in insight, this format ensures quick readability and fast decision-making for traders and investors.

(Optional) Extend This Workflow
- **Portfolio-specific insights:** fetch your wallet holdings from AFK Crypto or Zapper APIs for personalized reports.
- **Interactive commands:** add /compare or /analyze commands for Discord users.
- **Multi-language summaries:** auto-translate for international communities.
- **Historical data logging:** store briefings in Notion or Google Sheets.
- **Weekly recaps:** summarize all daily reports into a long-form analysis.
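The Formatting Layer (Code node) described above could be sketched like this. The input shape (date, movers, sentiment, take) is an assumption based on the report sections listed, not the template's actual field names.

```javascript
// Turn the AI analyst's output into a Discord-ready briefing message.
function formatBriefing(report) {
  const movers = report.movers
    .map(m => `${m.symbol} ${m.change >= 0 ? '+' : ''}${m.change.toFixed(1)}%`)
    .join(', ');
  return [
    '📰 **AI Market Intelligence Briefing**',
    `📅 Date: ${report.date}`,
    `💰 Top Movers: ${movers}`,
    `💬 Sentiment: ${report.sentiment}`,
    `🔍 Analyst Take: ${report.take}`,
  ].join('\n');
}

const msg = formatBriefing({
  date: '2025-10-16',
  movers: [{ symbol: 'BTC', change: 2.3 }, { symbol: 'ETH', change: -0.8 }],
  sentiment: 'Moderately Bullish',
  take: 'Accumulation signals forming in mid-cap layer-1s.',
});
```

The resulting string can be posted as-is via a Discord webhook `content` field, or split into embed fields if you prefer richer formatting.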
Requirements
- **n8n instance** (with HTTP Request, AI Agent, and Discord nodes enabled)
- **NewsAPI key**
- **CoinGecko / CryptoCompare API key**
- **LLM credential** (OpenAI / Gemini / Anthropic)
- **Discord bot token or webhook URL**

APIs Used
- GET https://newsapi.org/v2/everything?q=crypto OR bitcoin OR ethereum OR defi OR nft&language=en&sortBy=publishedAt&pageSize=10
- GET https://api.coingecko.com/api/v3/simple/price?ids=bitcoin,ethereum,solana&vs_currencies=usd&include_market_cap=true&include_24hr_change=true
- (Optional) GET https://cryptopanic.com/api/v1/posts/?auth_token=YOUR_TOKEN&kind=news
- (Optional) GET https://api.gdeltproject.org/api/v2/doc/doc?query=crypto&format=json

Summary
The AI Investment Research Assistant (Discord Summary Bot) is your personal AI research analyst — delivering concise, data-backed crypto briefings directly to Discord. It intelligently combines news aggregation, sentiment analysis, and AI reasoning to create actionable market intelligence each morning. Ideal for crypto traders, funds, or educational communities seeking a reliable daily edge, this workflow replaces hours of manual research with one automated, professional-grade summary.

Our website: https://afkcrypto.com/
Check our blog: https://www.afkcrypto.com/blog
by Nasser
Who's it for?
- Content creators
- E-commerce stores
- Marketing teams

Description:
Generate unique UGC images for your products. Simply upload a product image into a Google Drive folder, and the workflow will instantly generate 50 unique, high-quality AI UGC images using Nano Banana via Fal.ai. All results are automatically saved back into the same folder, ready to use across social media, e-commerce stores, and marketing campaigns.

How it works
📺 YouTube video tutorial available.
1. Trigger: upload a new product image (with a white background) to a folder in your Google Drive
2. Generate 50 different image prompts for your product
3. Loop over each generated prompt
4. Generate UGC content via Fal.ai (Nano Banana)
5. Upload the UGC content to the initial Google Drive folder

Cost: $0.039 / image

How to set up?
1. Accounts & APIs
   - In the "Setup" Edit Fields node, replace every [YOUR_API_TOKEN] placeholder with your API token: Fal.ai (gemini-25-flash-image/edit): https://fal.ai/models/fal-ai/gemini-25-flash-image/edit/api
   - In Credentials on your n8n dashboard, connect the following account using a Client ID / Secret: Google Drive: https://docs.n8n.io/integrations/builtin/credentials/google/
2. Requirements
   - The base image of your product should preferably have a white background
   - Your Google Drive folder and every file it contains should be publicly available
3. Customizations
   - Change the total number of UGC images generated: in Generate Prompts → Message → "Your task is to generate 50"
   - Modify the instructions used to generate the UGC prompts: in Generate Prompts → Message
   - Change the number of base images: in Generate Image → Body Parameters → JSON → image_urls
   - Change the number of UGC images generated per prompt: in Generate Image → Body Parameters → JSON → num_images
   - Change the folder where the generated UGC is stored: in Upload File → Parent Folder
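The Generate Image request could be sketched as a payload builder like the one below. The parameter names (image_urls, num_images) come from the node settings above, but the queue URL is an assumption — verify the exact endpoint in the fal.ai docs for gemini-25-flash-image/edit before relying on it.

```javascript
// Hedged sketch of the Fal.ai request behind the "Generate Image" node.
// The URL below follows fal.ai's usual queue pattern but is an assumption.
function buildFalRequest(prompt, productImageUrl, apiToken) {
  return {
    url: 'https://queue.fal.run/fal-ai/gemini-25-flash-image/edit', // assumed endpoint
    headers: {
      Authorization: `Key ${apiToken}`, // fal.ai uses "Key <token>" auth
      'Content-Type': 'application/json',
    },
    body: {
      prompt,                          // one of the 50 generated UGC prompts
      image_urls: [productImageUrl],   // base product image(s)
      num_images: 1,                   // UGC images generated per prompt
    },
  };
}

// Hypothetical values for illustration only
const req = buildFalRequest(
  'UGC-style photo of the product on a kitchen counter, natural light',
  'https://drive.google.com/uc?id=FILE_ID',
  'YOUR_API_TOKEN'
);
```

In n8n, the same fields map onto the HTTP Request node's Body Parameters rather than being built in code.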
by Gegenfeld
AI Background Removal Workflow
This workflow automatically removes backgrounds from images stored in Airtable using the APImage API 🡥, then downloads and saves the processed images to Google Drive. It is perfect for batch-processing product photos, portraits, or any images that need clean, transparent backgrounds. The source (Airtable) and the storage (Google Drive) can be swapped for any service or database you use.

🧩 Nodes Overview

1. Remove Background (Manual Trigger)
This manual trigger starts the background removal process when clicked.
Customization options:
- Replace with a Schedule Trigger for automatic daily/weekly processing
- Replace with a Webhook Trigger to start via API calls
- Replace with a File Trigger to process when new files are added

2. Get a Record (Airtable)
Retrieves media files from your Airtable "Creatives Library" database.
- Connects to the "Media Files" table in your Airtable base
- Fetches records containing image thumbnails for processing
- Returns all matching records with their thumbnail URLs and metadata
Required Airtable structure:
- A table with an image/attachment field (the workflow expects a "Thumbnail" field)
- Optional fields: File Name, Media Type, Upload Date, File Size
Customization options:
- Replace with Google Sheets, Notion, or any database node
- Add filters to process only specific records
- Point to different tables with image URLs

3. Code (JavaScript Processing)
Processes Airtable records and prepares thumbnail data for background removal.
- Extracts thumbnail URLs from each record
- Chooses the best-quality thumbnail (large > full > original)
- Creates clean filenames by removing special characters
- Adds processing metadata and timestamps

Key features:

```javascript
// Selects the best thumbnail quality
if (thumbnail.thumbnails?.large?.url) {
  thumbnailUrl = thumbnail.thumbnails.large.url;
}

// Creates a clean filename by replacing special characters
cleanFileName: (record.fields['File Name'] || 'unknown')
  .replace(/[^a-z0-9.-]/gi, '_')
  .toLowerCase()
```

Easy customization for different databases:
- **Product database:** change field mappings to 'Product Name', 'SKU', 'Category'
- **Portfolio database:** use 'Project Name', 'Client', 'Tags'
- **Employee database:** use 'Full Name', 'Department', 'Position'

4. Split Out
Converts the array of thumbnails into individual items for parallel processing.
- Enables processing multiple images simultaneously
- Each item carries all thumbnail metadata for downstream nodes

5. APImage API (HTTP Request)
Calls the APImage service to remove backgrounds from images.
API endpoint: POST https://apimage.org/api/ai-remove-background
Request configuration:
- **Header:** Authorization: Bearer YOUR_API_KEY
- **Body:** image_url: {{ $json.originalThumbnailUrl }}
✅ Setup required: replace YOUR_API_KEY with your actual API key (get yours from the APImage Dashboard 🡥).

6. Download (HTTP Request)
Downloads the processed image from APImage's servers using the returned URL.
- Fetches the background-removed image file
- Prepares the image data for upload to storage

7. Upload File (Google Drive)
Saves processed images to a "bg_removal" folder in your Google Drive.
Customization options:
- Replace with Dropbox, OneDrive, AWS S3, or FTP upload
- Create date-based folder structures
- Use dynamic filenames with metadata
- Upload to multiple destinations simultaneously

✨ How To Get Started
1. Set up the APImage API:
   - Double-click the APImage API node
   - Replace YOUR_API_KEY with your actual API key
   - Keep the Bearer prefix
2. Configure Airtable:
   - Ensure your Airtable has a table with image attachments
   - Update field names in the Code node if they differ from the defaults
3. Test the workflow:
   - Click the Remove Background trigger node
   - Verify images are processed and uploaded successfully

🔗 Get your API Key 🡥

🔧 How to Customize

Input customization (left section)
Replace the Airtable integration with any data source containing image URLs:
- **Google Sheets** with product catalogs
- **Notion** databases with image galleries
- **Webhooks** from external systems
- **File system** monitoring for new uploads
- **Database** queries for image records

Output customization (right section)
Modify where processed images are stored:
- **Multiple storage:** upload to Google Drive + Dropbox simultaneously
- **Database updates:** update original records with processed image URLs
- **Email/Slack:** send processed images via communication tools
- **Website integration:** upload directly to WordPress, Shopify, etc.
Processing customization
- **Batch processing:** limit concurrent API calls
- **Quality control:** add image validation before/after processing
- **Format conversion:** use a Sharp node for resizing or format changes
- **Metadata preservation:** extract and maintain EXIF data

📋 Workflow Connections
Remove Background → Get a Record → Code → Split Out → APImage API → Download → Upload File

🎯 Perfect For
- **E-commerce:** batch-process product photos for clean, professional listings
- **Marketing teams:** remove backgrounds from brand assets and imagery
- **Photographers:** automate background removal for portrait sessions
- **Content creators:** prepare images for presentations and social media
- **Design agencies:** streamline asset preparation workflows

📚 Resources
- APImage API Documentation 🡥
- Airtable API Reference 🡥
- n8n Documentation 🡥

⚡ Processing speed: handles multiple images in parallel for fast batch processing
🔒 Secure: API keys stored safely in n8n credentials
🔄 Reliable: built-in error handling and retry mechanisms
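Outside n8n, the APImage call made by node 5 could be sketched as below. The endpoint and Bearer header come straight from the node description; the shape of the JSON response (a field carrying the processed image URL) is an assumption — check the APImage docs for the exact key.

```javascript
// Standalone sketch of the background-removal request (nodes 5–6).
async function removeBackground(imageUrl, apiKey) {
  const res = await fetch('https://apimage.org/api/ai-remove-background', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`, // keep the "Bearer " prefix
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ image_url: imageUrl }),
  });
  if (!res.ok) throw new Error(`APImage request failed: ${res.status}`);
  // Assumed to contain the URL of the processed image, which node 6 downloads
  return res.json();
}
```

The returned URL would then be fetched as binary data and handed to the Google Drive upload step.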
by Neeraj Chouhan
Good to know:
This workflow creates a WhatsApp chatbot that answers questions using your own PDFs through RAG (Retrieval-Augmented Generation). Every time you upload a document to Google Drive, it is processed into embeddings and stored in Pinecone — allowing the bot to respond with accurate, context-aware answers directly on WhatsApp.

Who is this for?
Anyone building a custom WhatsApp chatbot:
- Businesses wanting a private, knowledge-based assistant
- Teams that want their documents to be searchable via chat
- Creators and coaches who want automated Q&A from their PDFs
- Developers who want a no-code RAG pipeline using n8n

What this workflow does:
✅ Monitors a Google Drive folder for new PDFs
✅ Extracts and splits text into chunks
✅ Generates embeddings using OpenAI/Gemini
✅ Stores embeddings in a Pinecone vector index
✅ Receives user questions via WhatsApp
✅ Retrieves the most relevant information using vector search
✅ Generates a natural response using an AI Agent
✅ Sends the answer back to the user on WhatsApp

How it works:
1️⃣ A Google Drive Trigger detects a new or updated PDF
2️⃣ The file is downloaded and its text is split into chunks
3️⃣ Embeddings are generated and stored in Pinecone
4️⃣ A WhatsApp Trigger receives a user's question
5️⃣ The question is embedded and matched against Pinecone
6️⃣ An AI Agent uses the retrieved context to generate a response
7️⃣ The message is delivered back to the user on WhatsApp

How to use:
1. Connect your Google Drive account
2. Add your Pinecone API key and index name
3. Add your OpenAI/Gemini API key
4. Connect your WhatsApp trigger + sender nodes
5. Upload a sample PDF to your Drive folder
6. Send a test WhatsApp message to see the bot reply

Requirements:
✅ n8n cloud or self-hosted
✅ Google Drive account
✅ Pinecone vector database
✅ OpenAI or Gemini API key
✅ WhatsApp integration (Cloud API or provider)

Customizing this workflow:
🟢 Change the Drive folder or add file-type filters
🟢 Adjust the chunk size or embedding model
🟢 Modify the AI prompt for tone, style, or restrictions
🟢 Add memory, logging, or analytics
🟢 Add multiple documents or delete old vector entries
🟢 Swap the AI model (OpenAI ↔ Gemini ↔ Groq, etc.)
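Step 2 of the pipeline — splitting extracted PDF text into overlapping chunks before embedding — can be sketched as a plain function. The chunk size and overlap here are illustrative defaults, not the template's actual settings; n8n's built-in text splitter node does the equivalent.

```javascript
// Fixed-size chunking with overlap, the core idea behind RAG text splitters.
// Overlap keeps context that would otherwise be cut at a chunk boundary.
function splitText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

// A 1200-character document with size 500 / overlap 50 yields 3 chunks:
// [0..500), [450..950), [900..1200)
const chunks = splitText('a'.repeat(1200), 500, 50);
```

Each chunk would then be embedded and upserted into the Pinecone index alongside metadata such as the source file name.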
by Toshiki Hirao
Managing invoices manually can be time-consuming and error-prone. This workflow automates the process by extracting key invoice details from PDFs shared in Slack, structuring the information with AI, saving it to Google Sheets, and sending a confirmation back to Slack. It's a seamless way to keep your financial records organized without manual data entry.

How it works
1. Receive invoice in Slack – When a PDF invoice is uploaded to a designated Slack channel, the workflow is triggered.
2. Fetch the PDF – The file is downloaded automatically for processing.
3. Extract data from the PDF – Basic text extraction captures the invoice content.
4. AI-powered invoice parsing – An AI model interprets the extracted text and structures essential fields such as company name, invoice number, total amount, invoice date, and due date.
5. Save to Google Sheets – The structured invoice data is appended as a new row in a Google Sheet for easy tracking and reporting.
6. Slack confirmation – A summary of the saved invoice details is sent back to Slack to notify the team.

How to use
1. Import the workflow into your n8n instance.
2. Connect Slack – Authenticate your Slack account and set up the trigger channel where invoices will be uploaded.
3. Connect Google Sheets – Authenticate with Google Sheets and specify the target spreadsheet and sheet name.
4. Configure the AI extraction – Adjust the parsing prompt or output structure to fit your preferred data fields (e.g., vendor name, invoice ID, amount, dates).
5. Test the workflow – Upload a sample invoice PDF in Slack and verify that the data is correctly extracted and saved to Google Sheets.

Requirements
- An n8n instance (cloud)
- A Slack account with permission to read uploaded files and post messages
- A Google account with access to the spreadsheet you want to update
- An AI integration (e.g., OpenAI GPT or another LLM with PDF parsing capabilities)
- A designated Slack channel for receiving invoice PDFs
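The hand-off between step 4 (AI parsing) and step 5 (Google Sheets) benefits from a small validation layer. The sketch below assumes snake_case field names for the five fields listed in step 4 — adjust them to whatever your parsing prompt returns.

```javascript
// Validate the AI's structured output before appending a row to the sheet.
function validateInvoice(parsed) {
  const required = ['company_name', 'invoice_number', 'total_amount', 'invoice_date', 'due_date'];
  const missing = required.filter(f => parsed[f] === undefined || parsed[f] === '');
  if (missing.length) {
    // Surfacing this in n8n lets the Slack confirmation report a parse failure
    throw new Error(`Missing invoice fields: ${missing.join(', ')}`);
  }
  return required.map(f => parsed[f]); // cell order for the appended row
}

// Hypothetical parsed invoice for illustration
const row = validateInvoice({
  company_name: 'Acme Co.',
  invoice_number: 'INV-042',
  total_amount: 1280.5,
  invoice_date: '2025-01-10',
  due_date: '2025-02-10',
});
```

Rejecting incomplete parses here keeps half-filled rows out of the spreadsheet.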
by n8n Automation Expert | Template Creator | 2+ Years Experience
Description

🎯 Overview
An advanced automated trading bot that implements ICT (Inner Circle Trader) methodology and Smart Money Concepts for cryptocurrency trading. This workflow combines AI-powered market analysis with automated trade execution through the Coinbase Advanced Trading API.

⚡ Key Features

📊 ICT Trading Strategy Implementation
- **Kill zone detection:** automatically identifies optimal trading sessions (Asian, London, and New York kill zones)
- **Smart Money Concepts:** analyzes market structure breaks, liquidity grabs, fair value gaps, and order blocks
- **Session validation:** real-time GMT time tracking with session strength calculations
- **Structure analysis:** detects BOS (Break of Structure) and CHOCH (Change of Character) patterns

🤖 AI-Powered Analysis
- **GPT-4 integration:** advanced market analysis using OpenAI's latest model
- **Confidence scoring:** the AI generates a confidence score (0–100) for each trading signal
- **Risk assessment:** automated risk level evaluation (LOW/MEDIUM/HIGH)
- **ICT-specific prompts:** custom prompts designed for Inner Circle Trader methodology

🔄 Automated Trading Flow
1. Signal reception: receives trading signals via a Telegram webhook
2. Data extraction: parses symbol, action, price, and technical indicators
3. Session validation: verifies the current kill zone and trading session strength
4. Market data: fetches real-time data from the Coinbase Advanced Trading API
5. AI analysis: processes signals through GPT-4 with ICT-specific analysis
6. Quality filter: multi-condition filtering based on confidence, session, and structure
7. Trade execution: automated order placement through the Coinbase API
8. Documentation: records all trades and rejections in Notion databases

📱 Multi-Platform Integration
- **Telegram bot:** receives signals and sends formatted notifications
- **Coinbase Advanced:** real-time market data and trade execution
- **Notion database:** comprehensive trade logging and analysis tracking
- **Webhook support:** external system integration capabilities

🛠️ Setup Requirements

API credentials needed:
- **Coinbase Advanced Trading API** (API key, secret, passphrase)
- **OpenAI API key** (GPT-4 access)
- **Telegram bot token** and chat ID
- **Notion integration** (database IDs for trade records)

Environment variables:
TELEGRAM_CHAT_ID=your_chat_id
NOTION_TRADING_DB_ID=your_trading_database_id
NOTION_REJECTED_DB_ID=your_rejected_signals_database_id
WEBHOOK_URL=your_external_webhook_url

📈 Trading Logic

Kill zone priority system:
- **London & New York sessions:** HIGH priority (0.9 strength)
- **Asian & London Close:** MEDIUM priority (0.6 strength)
- **Off hours:** LOW priority (0.1 strength)

Signal validation criteria:
- Signal quality must not be "LOW"
- Confidence score ≥ 60%
- An active kill zone session is required
- ICT structure alignment confirmed

🎛️ Workflow Components
1. Extract ICT Signal Data: parses incoming Telegram messages for trading signals
2. ICT Session Validator: determines the current kill zone and session strength
3. Get Coinbase Market Data: fetches real-time cryptocurrency data
4. ICT AI Analysis: GPT-4-powered analysis with ICT methodology
5. Parse ICT AI Analysis: processes the AI response with fallback mechanisms
6. ICT Quality & Session Filter: multi-condition signal validation
7. Execute ICT Trade: automated trade execution via the Coinbase API
8. Create ICT Trading Record: logs successful trades to Notion
9. Generate ICT Notification: creates formatted Telegram alerts
10. Log ICT Rejected Signal: records filtered signals for analysis

🚀 Use Cases
- Automated ICT-based cryptocurrency trading
- Smart Money Concepts implementation
- Kill zone session trading
- AI-enhanced market structure analysis
- Professional trading documentation and tracking

⚠️ Risk Management
- Built-in session validation prevents off-hours trading
- AI confidence scoring filters low-quality signals
- Comprehensive logging for performance analysis
- Automated stop-loss and take-profit calculations

This workflow is perfect for traders familiar with ICT methodology who want to automate their Smart Money Concepts trading strategy with AI-enhanced decision making.
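The ICT Session Validator logic — mapping the current GMT hour to a kill zone and the strength values described above (0.9 / 0.6 / 0.1) — can be sketched as below. The exact hour boundaries are assumptions; tune them to your own ICT session definitions.

```javascript
// Map a GMT hour to a kill zone and its session strength.
// Hour ranges are illustrative, not the template's exact values.
function killZone(gmtHour) {
  if (gmtHour >= 7 && gmtHour < 10) return { session: 'London', strength: 0.9 };
  if (gmtHour >= 12 && gmtHour < 15) return { session: 'New York', strength: 0.9 };
  if (gmtHour >= 0 && gmtHour < 4) return { session: 'Asian', strength: 0.6 };
  if (gmtHour >= 15 && gmtHour < 17) return { session: 'London Close', strength: 0.6 };
  return { session: 'Off Hours', strength: 0.1 }; // trades filtered out here
}

const zone = killZone(new Date().getUTCHours());
```

A downstream If node would then reject any signal whose zone strength falls below the configured threshold.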
by Yaron Been
Comprehensive SEO Strategy with O3 Director & GPT-4 Specialist Team

Trigger
When a chat message is received → the user submits an SEO request (e.g., "Help me rank for project management software"). The message goes straight to the SEO Director agent.

SEO Director Agent (O3)
- Acts as the head of SEO strategy.
- Uses the Think node to plan and decide which specialists to call.
- Delegates tasks to the relevant agents.

Specialist Agents (GPT-4.1-mini)
Each agent has its own OpenAI model connection for lightweight, cost-efficient execution. Tasks include:
- Keyword Research Specialist → keyword discovery, clustering, competitor analysis
- SEO Content Writer → generates optimized blog posts, landing pages, etc.
- Technical SEO Specialist → site audits, schema markup, crawling fixes
- Link Building Strategist → backlink strategies, outreach campaign ideas
- Local SEO Specialist → local citations, GMB optimization, geo-content
- Analytics Specialist → reports, performance insights, ranking metrics

Feedback Loop
Each agent sends its results back to the SEO Director, who compiles the insights into a comprehensive SEO campaign plan.

✅ Why This Setup Works Well
- **O3 model for the director** → handles reasoning-heavy orchestration (strategy, delegation).
- **GPT-4.1-mini for specialists** → cheap, fast, task-specific execution.
- **Parallel execution** → all specialists can run at the same time.
- **Scalable & modular** → add or remove agents depending on campaign needs.
- **Sticky notes** → the workflow is already documented (great for onboarding and sharing).
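The director's delegation step can be illustrated with a toy routing function. In the real workflow the O3 model reasons about which agents to call; the keyword rules below merely stand in for that reasoning, and the specialist names are shorthand for the agents listed above.

```javascript
// Toy stand-in for the O3 director's "which specialists do I need?" decision.
const SPECIALISTS = {
  keywords: /keyword|rank|search volume/i,
  content: /blog|landing page|copy|article/i,
  technical: /schema|crawl|audit|site speed/i,
  links: /backlink|outreach|link building/i,
  local: /local|gmb|citation/i,
  analytics: /report|metric|traffic analysis/i,
};

function delegate(request) {
  const tasks = Object.entries(SPECIALISTS)
    .filter(([, pattern]) => pattern.test(request))
    .map(([agent]) => agent);
  // Default starting point when nothing matches explicitly
  return tasks.length ? tasks : ['keywords'];
}

const plan = delegate('Help me rank for project management software');
```

In the n8n version, each selected specialist runs as its own GPT-4.1-mini agent in parallel, and the director merges the responses.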
by Anirudh Aeran
This workflow is a complete, AI-powered content engine designed to help automation experts build their personal brand on LinkedIn. It transforms a technical n8n workflow (in JSON format) into a polished, engaging LinkedIn post, complete with a custom-generated AI image and a strategic call to action. This system acts as your personal content co-pilot, handling the creative heavy lifting so you can focus on building, not just writing.

Who's it for?
This template is for n8n developers, automation consultants, and tech content creators who want to consistently showcase their work on LinkedIn but lack the time or desire to write marketing copy and design visuals from scratch. If you want to turn your projects into high-quality content with minimal effort, this is your solution.

How it works
This workflow is divided into two main parts that work together through Telegram:

1. Content generation & image creation
- You send an n8n workflow's JSON file to your first Telegram bot.
- The workflow sends the JSON to Google Gemini with a sophisticated prompt, instructing it to analyze the workflow and write a compelling LinkedIn post in one of two high-engagement styles ("Builder" or "Strategist").
- Gemini also generates a detailed prompt for an AI image model, including a specific headline to be embedded in the visual.
- This image prompt is sent to the Cloudflare Workers AI model to generate a unique, high-quality image for your post.
- The final image and the AI-generated text are sent back to you via Telegram for review.

2. Posting to LinkedIn
- You use a second Telegram bot for publishing. Simply reply to the image you received from the first bot with the final, polished post text.
- The workflow triggers on your reply, grabs the image and the text, and automatically publishes them as a new post on your LinkedIn profile.

Why two different workflows?
The first workflow sends you the image and the post content. You can edit the content or the image, then send the image to Bot 2 and reply to it with the final post text. Both the image and the content are then published to LinkedIn as a single post.

How to set up
1. Create two Telegram bots: use BotFather on Telegram to create them and get their API tokens.
   - Bot 1 (Generator): for submitting JSON and receiving the generated content/image.
   - Bot 2 (Publisher): for replying to the image to post on LinkedIn (after human verification).
2. Set up accounts & credentials: add credentials for Google Gemini, Cloudflare (with an API token), Google Sheets, and LinkedIn. For Cloudflare, you will also need your Account ID.
3. Google Sheet for tracking: create a Google Sheet with the columns Keyword, Image Prompt, and Style Used to keep a log of your generated content.
4. Configure nodes:
   - In all Telegram nodes, select the correct credential for each bot.
   - In the Google Gemini node, ensure your API credential is selected.
   - In the Cloudflare nodes ("Get accounts" and "Get Flux Schnell image"), select your Cloudflare credential and replace the placeholder in the URL with your Account ID.
   - In the LinkedIn node, select your credential and choose the author (your profile).
   - In the Google Sheets node, enter your Sheet ID.
5. Activate: activate both Telegram Triggers in the workflow.

Requirements
- An n8n instance
- Credentials for Google Gemini, Cloudflare, LinkedIn, and Google Sheets
- Two Telegram bots with their API tokens
- A Cloudflare Account ID
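The "Get Flux Schnell image" call could be sketched as a request builder like the one below. The URL follows the documented Workers AI REST pattern (accounts/{account_id}/ai/run/{model}), but the model slug is an assumption — verify it against your Cloudflare dashboard before relying on it.

```javascript
// Hedged sketch of the Cloudflare Workers AI image-generation request.
function buildFluxRequest(accountId, apiToken, imagePrompt) {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
         `/ai/run/@cf/black-forest-labs/flux-1-schnell`, // assumed model slug
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
    },
    body: { prompt: imagePrompt }, // the Gemini-generated image prompt
  };
}

// Placeholder values for illustration only
const req = buildFluxRequest(
  'ACCOUNT_ID',
  'API_TOKEN',
  'Bold headline over an isometric n8n canvas, clean tech aesthetic'
);
```

In the workflow, the same URL lives in the HTTP Request node, which is where you substitute your real Account ID.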
by Takuya Ojima
Who's it for
Teams that monitor multiple news sources and want an automated, tagged, and prioritized briefing — PMM, PR/comms, sales/CS, founders, and research ops.

What it does / How it works
Each morning the workflow reads your RSS feeds, summarizes articles with an LLM, assigns tags from a maintained dictionary, saves structured records to Notion, and posts a concise Slack digest of the top items.

Core steps: Daily Morning Trigger → Workflow Configuration (Set) → Read RSS Feeds → Get Tag Dictionary → AI Summarizer and Tagger → Parse AI Output → Write to Notion Database → Sort by Priority → Top 3 Headlines → Format Slack Message → Post to Slack.

How to set up
1. Open Workflow Configuration (Set) and edit: rssFeeds (array of URLs), notionDatabaseId, slackChannel.
2. Connect your own credentials in n8n for Notion, Slack, Google Sheets (if used for the tag dictionary), and your LLM provider.
3. Adjust the trigger time in Daily Morning Trigger (e.g., weekdays at 09:00).

Requirements
- n8n (Cloud or self-hosted)
- A Slack app with chat:write access to the target channel
- A Notion database with properties: summary (rich_text), tags (multi_select), priority (number), url (url), publishedDate (date)
- Optional: a Google Sheet for the tag dictionary (or replace it with another source)

How to customize the workflow
- Scoring & selection: change the priority rules, increase the "Top N" items, or sort by recency.
- Taxonomy: extend the tag dictionary; refine the AI prompt for stricter tagging.
- Outputs: post per-tag Slack threads, send DMs, or create Notion relations to initiatives.
- Sources: add more feeds or mix in APIs/newsletters.
- Security: do not hardcode API keys in HTTP nodes; keep credentials in n8n.
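The Sort by Priority → Top 3 Headlines steps can be sketched as a small function, using the numeric priority property stored in Notion (as listed in the requirements above):

```javascript
// Order tagged records by AI-assigned priority, then keep the top N.
function topHeadlines(items, n = 3) {
  return [...items] // copy so the original item order is untouched
    .sort((a, b) => (b.priority ?? 0) - (a.priority ?? 0))
    .slice(0, n);
}

const top = topHeadlines([
  { title: 'A', priority: 2 },
  { title: 'B', priority: 9 },
  { title: 'C', priority: 5 },
  { title: 'D', priority: 7 },
]);
// → B, D, C
```

To customize "Top N" as suggested above, only the second argument changes; sorting by recency instead would compare publishedDate values in the same comparator.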
by Avkash Kakdiya
How it works
This workflow starts whenever a new domain is added to a Google Sheet. It cleans the domain, fetches traffic insights from SimilarWeb, extracts the most relevant metrics, and updates the sheet with the enriched data. Optionally, it can also send this information to Airtable for further tracking or analysis.

Step by step
1. Trigger on new domain
   - The workflow starts when a new row is added to the Google Sheet.
   - Captures the raw URL/domain entered by the user.
2. Clean domain URL
   - Strips unnecessary parts such as http://, https://, www., and trailing slashes.
   - Stores a clean domain format (e.g., example.com) along with the row number.
3. Fetch website analysis
   - Uses the SimilarWeb API to pull traffic and engagement insights for the domain.
   - Data includes global rank, country rank, category rank, total visits, bounce rate, and more.
4. Extract key metrics
   - Processes the raw SimilarWeb data into a simplified structure. Extracted insights include:
   - Ranks: global, country, and category
   - Traffic overview: total visits, bounce rate, pages per visit, average visit duration
   - Top traffic sources: direct, search, social
   - Top countries (top 3): with traffic share percentages
   - Device split: mobile vs. desktop
5. Update Google Sheet
   - Writes the cleaned and enriched domain data back into the same (or another) Google Sheet.
   - Ensures each row is updated with the new traffic insights.
6. Export to Airtable (optional)
   - Creates a new record in Airtable with the enriched traffic metrics.
   - Useful if you want to manage or visualize company/domain data outside of Google Sheets.

Why use this?
- Automatically enriches domain lists with live traffic data from SimilarWeb.
- Cleans messy URLs into a standard format.
- Saves hours of manual research on company traffic insights.
- Provides structured, comparable metrics for better decision-making.
- Flexible: update sheets, export to Airtable, or both.
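The Clean Domain URL step (step 2) can be sketched as a one-liner-sized function — it strips the protocol, www prefix, and any path or trailing slash, exactly as described above:

```javascript
// Normalize a raw user-entered URL into a bare domain (e.g., example.com).
function cleanDomain(raw) {
  return raw.trim().toLowerCase()
    .replace(/^https?:\/\//, '')  // drop http:// or https://
    .replace(/^www\./, '')        // drop a leading www.
    .split('/')[0];               // drop any path or trailing slash
}

const domain = cleanDomain('https://www.Example.com/about/');
// → 'example.com'
```

The cleaned value is what gets passed to the SimilarWeb request and written back alongside the original row number.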