by Rodrigo
How it works
This workflow creates a complete AI-powered restaurant ordering system through WhatsApp. It receives customer messages, processes multimedia content (text, voice, images, PDFs, location), uses GPT-4 to understand customer intent and manage conversations, handles the complete ordering flow from menu selection to payment verification, and sends formatted orders to restaurant staff. The system maintains conversation memory, verifies payment receipts using OCR, and provides automated responses in multiple languages.

Who's it for
Restaurant owners, food delivery services, and hospitality businesses looking to automate customer service and order management through WhatsApp without hiring additional staff.

Requirements
- WhatsApp Business API account
- OpenAI API key (GPT-4/GPT-4o access recommended)
- Supabase account (for conversation memory and vector storage)
- Google Drive account (for menu images and QR codes)
- Google Maps API key (for location services)
- Gemini API key (for PDF processing)

How to set up
1. **Configure credentials** - Add your WhatsApp Business API, OpenAI, Supabase, Google Drive, and Gemini API credentials to n8n
2. **Update phone numbers** - Replace [PHONE_NUMBER] placeholders with your actual restaurant and staff phone numbers
3. **Customize restaurant details** - Replace [RESTAURANT_NAME], [RESTAURANT_OWNER_NAME], and [BANK_ACCOUNT_NUMBER] with your information
4. **Upload menu images** - Add your menu images to Google Drive and update the file IDs
5. **Set up Supabase** - Create tables for chat memory and upload your menu/restaurant information to the vector database
6. **Configure AI prompts** - Update the restaurant information in the AI agent system messages
7. **Test the workflow** - Send test messages to verify all integrations work

How to customize the workflow
- **Menu management**: Update Google Drive file IDs to display your current menu images
- **Payment verification**: Modify the receipt analysis logic to match your bank's receipt format
- **Order formatting**: Customize the order confirmation template sent to kitchen staff
- **AI personality**: Adjust the restaurant agent's tone and responses in the system prompts
- **Languages**: The AI supports multiple languages - customize welcome messages for your target market
- **Business hours**: Add time-based logic to handle orders outside operating hours
- **Delivery zones**: Integrate with your delivery area logic using the location processing features
by Don Jayamaha Jr
Coinbase AI Agent instantly fetches real-time market data directly in Telegram! This workflow integrates the Coinbase REST API with Telegram (plus optional AI-powered formatting) to deliver the latest crypto price, order book, candles, and trade stats in seconds. Perfect for crypto traders, analysts, and investors who want actionable market data at their fingertips—without API keys.

How It Works
1. A Telegram bot listens for user requests (e.g., BTC-USD).
2. The workflow calls Coinbase public endpoints (no key required) to fetch real-time data:
   - Latest price (ticker)
   - 24h stats (open, high, low, close, volume)
   - Order book snapshots (best bid/ask + depth)
   - Candlestick data (OHLCV for multiple intervals)
   - Recent trades (executed orders)
3. A Calculator node derives useful values like mid-price and spread.
4. An AI or "Think" node reshapes JSON into clear, human-readable messages.
5. A splitter ensures long messages are broken into safe Telegram chunks.
6. The final market insights are sent instantly back to Telegram.

What You Can Do with This Agent
This Telegram bot gives you:
✅ Get instant price & 24h stats for any Coinbase spot pair.
✅ Monitor live order books with top bids/asks.
✅ Analyze candle data (e.g., 15m, 1h, 4h, 1d).
✅ Track recent trades to see market activity.
✅ Receive clean, structured reports—optionally AI-enhanced.

Set Up Steps
1. Create a Telegram Bot: use @BotFather on Telegram to create your bot and get an API token.
2. Configure in n8n: import the provided workflow JSON, add your Telegram credentials (bot token + your Telegram ID for authentication), and, optionally, add an OpenAI key if you want AI-enhanced formatting.
3. Deploy and Test: send a query like BTC-USD to your bot and instantly receive Coinbase spot data in Telegram! 🚀

Unlock powerful, real-time Coinbase market insights directly in Telegram—no Coinbase API key required!
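The Calculator step's mid-price and spread derivation can be sketched as a small function; the `best_bid`/`best_ask` field names assume the shape of Coinbase Exchange's public ticker response.

```javascript
// Derive mid-price and spread from a ticker snapshot.
// best_bid / best_ask field names are an assumption based on
// Coinbase Exchange's public /products/{pair}/ticker response.
function deriveQuote(ticker) {
  const bid = parseFloat(ticker.best_bid);
  const ask = parseFloat(ticker.best_ask);
  const mid = (bid + ask) / 2;
  return {
    mid,                               // midpoint between best quotes
    spread: ask - bid,                 // absolute spread
    spreadPct: ((ask - bid) / mid) * 100, // spread as a % of mid-price
  };
}

const q = deriveQuote({ best_bid: "60000.00", best_ask: "60010.00" });
console.log(q.mid, q.spread); // 60005 10
```

The same arithmetic applies to any spot pair the bot is asked about; only the field mapping depends on the endpoint used.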
📺 Setup Video Tutorial Watch the full setup guide on YouTube: 🧾 Licensing & Attribution © 2025 Treasurium Capital Limited Company Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted. 🔗 For support: LinkedIn – Don Jayamaha
by Robert Breen
This workflow introduces beginners to one of the most fundamental concepts in n8n: looping over items. Using a simple use case—generating LinkedIn captions for content ideas—it demonstrates how to split a dataset into individual items, process them with AI, and collect the output for review or export.

✅ Key Features
- **🧪 Create Dummy Data**: Simulate a small dataset of content ideas.
- **🔁 Loop Over Items**: Process each row independently using the SplitInBatches node.
- **🧠 AI Caption Creation**: Automatically generate LinkedIn captions using OpenAI.
- **🧰 Tool Integration**: Enhance AI output with creativity-injection tools.
- **🧾 Final Output Set**: Collect the original idea and generated caption.

🧰 What You'll Need
- ✅ An OpenAI API key
- ✅ The LangChain nodes enabled in your n8n instance
- ✅ Basic knowledge of how to trigger and run workflows in n8n

🔧 Step-by-Step Setup

1️⃣ Run Workflow
- **Node**: Manual Trigger (Run Workflow)
- **Purpose**: Manually start the workflow for testing or learning.

2️⃣ Create Random Data
- **Node**: Create Random Data (Code)
- **What it does**: Simulates incoming data with multiple content ideas.
- **Code**:

```javascript
return [
  { json: { row_number: 2, id: 1, Date: '2025-07-30', idea: 'n8n rises to the top', caption: '', complete: '' } },
  { json: { row_number: 3, id: 2, Date: '2025-07-31', idea: 'n8n nodes', caption: '', complete: '' } },
  { json: { row_number: 4, id: 3, Date: '2025-08-01', idea: 'n8n use cases for marketing', caption: '', complete: '' } }
];
```

3️⃣ Loop Over Items
- **Node**: Loop Over Items (SplitInBatches)
- **Purpose**: Sends one record at a time to the next node.
- **Why It Matters**: Loops in n8n are created using this node when you want to iterate over multiple items.

4️⃣ Create Captions with AI
- **Node**: Create Captions (LangChain Agent)
- **Prompt**: idea: {{ $json.idea }}
- **System Message**: You are a helpful assistant creating captions for a LinkedIn post. Please create a LinkedIn caption for the idea.
- **Model**: GPT-4o Mini or GPT-3.5
- **Credentials Required**: OpenAI credential. Go to: OpenAI API Keys, create a key, and add it in n8n under credentials as "OpenAi account".

5️⃣ Inject Creativity (Optional)
- **Node**: Tool: Inject Creativity (LangChain Tool)
- **Purpose**: Demonstrates optional LangChain tools that can enhance or manipulate input/output.
- **Why It's Cool**: A great way to show chaining tools to AI agents.

6️⃣ Output Table
- **Node**: Output Table (Set)
- **Purpose**: Combines original ideas and generated captions into the final structure.
- **Fields**:
  - idea: ={{ $('Create Random Data').item.json.idea }}
  - output: ={{ $json.output }}

💡 Educational Value
This workflow demonstrates:
- Creating dynamic inputs with the Code node
- Using SplitInBatches to simulate looping
- Sending dynamic prompts to an AI model
- Using Set to structure the output data

Beginners will understand how item-level processing works in n8n and how powerful looping combined with AI can be.

📬 Need Help or Want to Customize This?
Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn

🏷️ Tags
n8n loops, OpenAI, LangChain, workflow training, beginner, LinkedIn automation, caption generator
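Conceptually, SplitInBatches with a batch size of 1 behaves like processing each item in turn and collecting the results. A minimal sketch, with a placeholder `generateCaption` standing in for the AI agent call:

```javascript
// Conceptual equivalent of Loop Over Items (SplitInBatches, batch size 1):
// each item flows through the processing step on its own, and the results
// are collected afterwards. generateCaption is a hypothetical stand-in
// for the LangChain agent call.
const items = [
  { idea: 'n8n rises to the top' },
  { idea: 'n8n nodes' },
  { idea: 'n8n use cases for marketing' },
];

function generateCaption(idea) {
  // Placeholder: the real workflow sends the idea to OpenAI here.
  return `LinkedIn caption for: ${idea}`;
}

const output = items.map((item) => ({
  idea: item.idea,
  output: generateCaption(item.idea),
}));

console.log(output.length); // one result per input item → 3
```

The Output Table (Set) node plays the role of the final `map` shape here: pairing each original idea with its generated caption.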
by Cheng Siong Chin
How It Works
The workflow runs on a monthly trigger to collect both current-year and multi-year historical HDB data. Once fetched, all datasets are merged with aligned fields to produce a unified table. The system then applies cleaning and normalization rules to ensure consistent scales and comparable values. After preprocessing, it performs pattern mining, anomaly checks, and time-series analysis to extract trends and forecast signals. An AI agent, integrating OpenAI GPT-4, statistical tools, and calculator nodes, synthesizes these results into coherent insights. The final predictions are formatted and automatically written to Google Sheets for reporting and downstream use.

Setup Steps
1. Configure fetch nodes to pull current-year HDB data and three years of historical records.
2. Align and map column names across all datasets.
3. Set normalization and standardization parameters in the cleaning node.
4. Add your OpenAI API key (GPT-4) and link the model, forecasting tool, and calculator nodes.
5. Authorize Google Sheets and configure sheet and cell mappings for automated export.

Prerequisites
- Historical data source with API access (3+ years of records)
- OpenAI API key for GPT-4 model
- Google Sheets account with API credentials
- Basic understanding of time series data

Use Cases
- Real Estate: Forecast property prices using multi-year historical HDB/market data with confidence intervals
- Finance: Predict market trends by aggregating years of transaction or pricing records

Customization
- Data Source: Replace HDB/fetch nodes with stock prices, sensor data, sales records, or any historical dataset
- Analysis Window: Adjust years fetched (2-5 years) based on data availability and prediction horizon

Benefits
- Automation: Monthly scheduling eliminates manual data gathering and analysis
- Consolidation: Merges fragmented year-by-year data into a unified historical view
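As an illustration of the normalization step, min-max scaling rescales each series to [0, 1] so multi-year series with different price levels become comparable. This is a sketch of one common choice; the actual cleaning rules are whatever you configure in the cleaning node.

```javascript
// Min-max normalization: rescale a numeric series to [0, 1] so datasets
// with different scales (e.g. prices across years) become comparable.
function normalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  if (max === min) return values.map(() => 0); // guard: constant series
  return values.map((v) => (v - min) / (max - min));
}

console.log(normalize([400000, 500000, 600000])); // → [0, 0.5, 1]
```

Z-score standardization (subtract the mean, divide by the standard deviation) is the usual alternative when you care about deviations rather than a bounded range.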
by Avinash Raju
How it works
When a meeting ends in Fireflies, the transcript is automatically retrieved and sent to OpenAI for analysis. The AI evaluates objection handling and call effectiveness, and extracts key objections raised during the conversation. It then generates specific objection handlers for future calls. The analysis is formatted into a structured report and sent to both Slack for immediate visibility and Google Drive for centralized storage.

Set up steps
Prerequisites:
- Fireflies account with API access
- OpenAI API key
- Slack workspace
- Google Drive connected to n8n

Configuration:
- Connect the Fireflies webhook to trigger on meeting completion
- Add your OpenAI API key in the AI analysis nodes
- Configure the Slack channel destination for feedback delivery
- Set the Google Drive folder path for report storage
- Adjust the AI prompts in the sticky notes to match your objection categories and sales methodology
by Jimmy Gay
🤖 Automated SEO Audit with a Team of AI Specialists

This workflow performs a comprehensive, automated monthly SEO and performance audit for any website. It uses a "team" of specialized AI agents to analyze data from multiple sources, aggregates their findings, and generates a final strategic report. Every month, it automatically fetches data from Google Analytics, Google Search Console, and Google PageSpeed Insights, and also performs a live crawl of the target website's homepage.

Key Features
- **Fully Automated**: Runs on a schedule to deliver monthly reports without manual intervention.
- **Multi-Source Analysis**: Gathers data from four key marketing sources for a 360° view.
- **AI Agent Team**: Uses a sophisticated multi-agent system where each AI specializes in one area (Analytics, Performance, Technical SEO).
- **Master Analyst**: A final AI agent synthesizes all specialist reports into a single, actionable strategic plan.
- **Automated Storage**: All individual and final reports are automatically saved to a designated Google Sheet.

⚙️ Setup Instructions

To use this template, you must configure your credentials and set your target website.

1. Set Your Target Domain (Crucial!): Find the Set Target Website node at the beginning of the workflow. In the "Value" field, replace https://www.your-website.com with the URL of the website you want to audit. This will update the URL across the entire workflow automatically.
2. Configure the Schedule Trigger: Click on the Schedule Trigger node to set when you want the monthly report to run.
3. Connect Your Google Credentials:
   - **Google Analytics**: Select your credential in the Get a report node.
   - **Google Search Console**: Select your credential in the Search Console (HTTP Request) node.
   - **Google Sheets**: Select your credential in **all** Google Sheets nodes.
   - **Google PageSpeed API Key**: Go to the "Credentials" tab in n8n and create a new "Generic Credential" with the type "API Key - Query Param". Name it Google API Key. The "Parameter Name" must be key. Paste your PageSpeed API key into the "API Key" field. Go back to the PageSpeed Insight node, select "API Key - Query Param" for Authentication, and choose your new credential.
4. Connect OpenAI Credentials: This template uses multiple OpenAI Chat Model nodes. Configure each one with your OpenAI API key.
5. Set Your Google Sheet: In each of the Google Sheets nodes, replace the hardcoded "Document ID" with the ID of your own Google Sheet where you want to store the reports.

🔬 Workflow Explained
- Phase 1: Data Collection: The Schedule Trigger starts the workflow. Four parallel branches collect data from Google Analytics, PageSpeed Insights, Search Console, and a direct website crawl.
- Phase 2: Data Processing & Specialist Analysis: Each data source is processed by a dedicated Code node to format the data. The formatted data is then sent to a specialized AI agent (ANALYTICS SPECIALIST, PERFORMANCE SPECIALIST, etc.) for in-depth analysis.
- Phase 3: Report Aggregation: A Merge node waits for all four specialist reports to be completed. A DATA AGGREGATOR node then combines them into a single, comprehensive package.
- Phase 4: Master Synthesis & Storage: The final MASTER ANALYST agent receives the aggregated data and produces a high-level strategic summary with actionable recommendations. This final report is then saved to Google Sheets.
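The "API Key - Query Param" credential simply appends the key as a `key=` query parameter. A sketch of the URL it produces for Google's PageSpeed Insights v5 endpoint (the endpoint path is Google's documented `runPagespeed` route; `MY_KEY` is a placeholder):

```javascript
// Build the PageSpeed Insights v5 request URL, with the API key passed
// as a query parameter named `key` (matching the credential setup above).
function buildPageSpeedUrl(targetUrl, apiKey) {
  const endpoint = new URL('https://www.googleapis.com/pagespeedonline/v5/runPagespeed');
  endpoint.searchParams.set('url', targetUrl); // the page to audit
  endpoint.searchParams.set('key', apiKey);    // "Parameter Name" must be `key`
  return endpoint.toString();
}

console.log(buildPageSpeedUrl('https://example.com', 'MY_KEY'));
```

The HTTP Request node does the equivalent of this internally once the credential is attached; the sketch just makes visible what travels over the wire.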
by Jose Luis Segura
Revolut Extracts Analyzer

This n8n template processes Revolut statements, normalizes transactions, and uses AI to categorize expenses automatically. Use cases include detecting subscriptions, separating internal transfers, and building dashboards to track spending.

How it works
- **Get Categories from Supabase**
- **Download & Transform**
- **Loop Over Items**
- **LLM Categorizer**
- **Insert into Supabase**

How to use
- Start with the manual trigger node or replace it with a schedule/webhook.
- Connect Google Drive to provide Revolut CSV files.
- Ensure Supabase has tables for transactions and categories.
- Extend with notifications, reports, or BI tools.

Requirements
- Google Drive for CSV files
- Supabase tables for categories & transactions
- LLM provider (OpenAI/Gemini)
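A sketch of the "Download & Transform" normalization for one statement row. The column names (`Type`, `Started Date`, `Description`, `Amount`) are assumptions based on Revolut's standard CSV export and may need adjusting to your statements:

```javascript
// Normalize one Revolut CSV row before categorization.
// Column names are assumptions about Revolut's statement export format.
function normalizeRow(row) {
  return {
    date: row['Started Date'],
    description: (row['Description'] || '').trim(),
    amount: parseFloat(row['Amount']),
    // Flag internal moves so they can be excluded from spending dashboards.
    isInternalTransfer: row['Type'] === 'TRANSFER',
  };
}

const tx = normalizeRow({
  Type: 'TRANSFER',
  'Started Date': '2025-01-02',
  Description: ' Savings ',
  Amount: '-100.00',
});
console.log(tx.isInternalTransfer, tx.amount); // true -100
```

Rows flagged as internal transfers can then be routed around the LLM Categorizer entirely, which also saves tokens.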
by Don Jayamaha Jr
Access live KuCoin Spot Market data instantly in Telegram! This workflow integrates the KuCoin REST API with Telegram and an optional GPT-4.1-mini formatter, delivering real-time insights like latest prices, 24h stats, order book depth, trades, and candlesticks — all structured into clean Telegram messages.

🔎 How It Works
1. A Telegram Trigger listens for user commands.
2. User Authentication validates the Telegram ID against an allowlist.
3. A SessionId is generated from the chat ID to support memory across turns.
4. The KuCoin AI Agent orchestrates API requests:
   - 24h Stats → /api/v1/market/stats?symbol=BTC-USDT
   - Order Book Depth → /api/v1/market/orderbook/level2_100?symbol=BTC-USDT
   - Latest Price → /api/v1/market/orderbook/level1?symbol=BTC-USDT
   - Best Bid/Ask → /api/v1/market/orderbook/level1?symbol=BTC-USDT
   - Klines (Candles) → /api/v1/market/candles?symbol=BTC-USDT&type=15min&limit=20
   - Recent Trades → /api/v1/market/histories?symbol=BTC-USDT
   - Average Price (via Ticker) → /api/v1/market/orderbook/level1?symbol=BTC-USDT
5. Utility Tools process results:
   - Calculator → spreads, % changes, averages.
   - Think → reshapes JSON, selects fields, formats outputs.
6. Message Splitter breaks outputs >4000 chars (Telegram limit).
7. The final report is sent back via Telegram SendMessage in human-readable format.

✅ What You Can Do with This Agent
- Get 24h rolling statistics (open, high, low, close, last, volume).
- Retrieve full order book depth (20, 100 levels) or best bid/ask.
- Monitor real-time latest prices with spreads.
- Analyze candlestick data (OHLCV) across supported intervals.
- View recent public trades with price, size, side, and time.
- Use average price proxies from bid/ask + last trade.
- Receive structured Telegram reports — not raw JSON.

🛠️ Setup Steps
1. Create a Telegram Bot: use @BotFather to create a bot and copy its token.
2. Configure in n8n:
   - Import KuCoin AI Agent v1.02.json.
   - Update the User Authentication node with your Telegram ID.
   - Add Telegram API credentials (bot token).
   - Add your OpenAI API key.
   - (Optional) Add a KuCoin API key.
3. Deploy & Test:
   - Activate the workflow in n8n.
   - Send a query like BTC-USDT to your bot.
   - Instantly receive structured KuCoin Spot Market insights in Telegram.

📤 Output Rules
- Responses grouped into Price, 24h Stats, Order Book, Klines, Trades.
- No raw JSON (only human-readable summaries).
- No financial advice or predictions.
- Always fetch directly from KuCoin's official API.

📺 Setup Video Tutorial
Watch the full setup guide on YouTube:

⚡ Unlock KuCoin Spot Market insights in Telegram — fast, reliable, and API-key free.

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha – LinkedIn
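The Message Splitter step can be sketched as fixed-size chunking at the workflow's 4000-character threshold (Telegram's hard cap is 4096; the margin leaves headroom for formatting):

```javascript
// Split a long report into Telegram-safe chunks.
// 4000 is the workflow's threshold; Telegram's hard limit is 4096 chars.
function splitMessage(text, limit = 4000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks;
}

const parts = splitMessage('x'.repeat(9000));
console.log(parts.length); // 3 chunks: 4000 + 4000 + 1000
```

A production splitter would prefer to break at the last newline before the limit so table rows and bullet lines are never cut mid-way.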
by Feras Dabour
Video Generation with Telegram Bot and Gemini API – Auto-Post to TikTok, Instagram & Facebook

This n8n workflow turns your Telegram messenger into a full video content pipeline: you send a text or voice idea to a Telegram bot, collaborate with an AI on the script and caption, let Gemini generate the video, and then automatically publish it to TikTok, Instagram and Facebook – all with status tracking and Telegram confirmations.

What You Need to Get Started

This workflow connects several external services. You'll need:

- **Telegram Bot API Key** – Create a bot via Telegram's BotFather and copy the bot token. This is used by the Listen for incoming events and other Telegram nodes.
- **OpenAI API Key** – Required for Speech to Text (OpenAI Whisper), to transcribe voice notes, and for the OpenAI Chat Model used inside the AI Agent.
- **Google Gemini / Vertex (via n8n Gemini Node)** – Used by the Generate a video node (model: veo-3.0-fast-generate-001) to create the video.
- **Google Drive** – The generated video is temporarily stored in a Google Drive folder via Upload video and later downloaded again for uploading to Blotato.
- **Blotato API Key** – Social media publishing layer: uploads the video as media, creates posts for TikTok, Instagram and Facebook, and exposes a status endpoint the workflow uses for polling.
- **Google Sheets Access** (optional but included) – Used by Save Prompt & Post-Text to log your prompts and captions.

Once these credentials are configured in n8n, the workflow runs end-to-end from Telegram idea to multi-platform publishing.

How the Workflow Operates – Step by Step

1. Input & Initial Processing (Telegram + Voice Handling)

This phase listens to your messages and normalizes them into a single text input for the AI.
| Node Name | Role in Workflow |
| --- | --- |
| Listen for incoming events | Telegram Trigger node that starts the workflow whenever your bot receives a message (text or voice). |
| Voice or Text | Set node that structures the incoming payload and prepares a unified text field for downstream nodes. |
| A Voice? | IF node that checks whether the incoming message is a voice note or plain text. |
| Get Voice File | If voice is detected, this Telegram node downloads the audio file from Telegram. |
| Speech to Text | Uses OpenAI Whisper to convert the voice note into a text transcript. |

- **If you send text**: The workflow skips the voice download/transcription and goes directly to the AI agent with your original text.
- **If you send a voice note**: The workflow downloads the file, runs it through Whisper in Speech to Text, and passes the resulting transcript onward.

The output of this stage is always a clean text string representing your idea.

2. AI Core & Approval Loop (Script + Caption via AI)

Here the AI designs the video prompt (for Gemini) and the social media caption (for all platforms), then iterates with you until you approve.

| Node Name | Role in Workflow |
| --- | --- |
| AI Agent | Central logic agent. Takes your idea text and applies the system prompt to create a video prompt and social media caption. Handles revisions based on your feedback. |
| OpenAI Chat Model | The LLM backing the agent (OpenAI Chat model). |
| Window Buffer Memory | Memory buffer that stores recent turns, so the agent can keep context across your "make it shorter / more fun / more technical" requests. |
| Send questions or proposal to user | Telegram node that sends the AI's suggested prompt + caption back to you for review. |
| Approved from user? | IF node that checks whether the agent's output is the "approved" JSON (meaning you said "ok" / "approved") or just a normal suggestion. |

Video Prompt Assistant System Prompt (Internal Instructions)

The AI Agent is configured with a system message like:

> You are a video prompt assistant for Telegram.
> 1. Analyze the user's message and create a video prompt.
> 2. Create relevant social media text with hashtags.
> 3. Present both to the user and ask for feedback.
> 4. If the user requests changes, refine and present again.
> 5. Only when the user says "approved" or "ok", output a final JSON:
> { "videoPrompt": "...", "socialMediaText": "..." }

Key behavior:
- **Before approval**: The agent always responds with human-readable suggestions (prompt + caption) and a follow-up question asking what to change.
- **After approval**: The agent returns only JSON (no Markdown, no explanation) with:
  - videoPrompt – to be sent to Gemini.
  - socialMediaText – to be used as a caption on all platforms.

JSON Parsing

| Node Name | Role in Workflow |
| --- | --- |
| Parse AI Output | Code node that extracts videoPrompt and socialMediaText from the agent's output. It is tolerant of different formats (e.g. JSON wrapped in markdown) and throws errors if required fields are missing. |

If the AI output is approved JSON, this node returns two clean fields: videoPrompt and socialMediaText. These are used in all subsequent steps.

3. User Feedback & Logging

Right after parsing the final JSON, the workflow informs you and logs the content.
| Node Name | Role in Workflow |
| --- | --- |
| Inform user about processing | Telegram node that tells you: "Okay. Your video is being prepared now. I'll let you know as soon as it's online." |
| Save Prompt & Post-Text | Google Sheets node that appends a new row containing the videoPrompt and socialMediaText to a sheet (e.g., for tracking which prompts/captions you used). |

This gives you both visibility (confirmation message) and historical tracking of your content ideas.

4. Video Generation with Gemini

Now the actual video is created using Google Gemini's video model.

| Node Name | Role in Workflow |
| --- | --- |
| Generate a video | Gemini node (model models/veo-3.0-fast-generate-001) that uses videoPrompt to generate a vertical video (aspect ratio 9:16). |

The videoPrompt from the AI agent is passed directly into this node. The resulting binary output is a generated short-form video suitable for TikTok, Reels and Stories.

5. Staging the Video in Google Drive

To make the video accessible for Blotato, the workflow uses Google Drive as a simple file staging area.

| Node Name | Role in Workflow |
| --- | --- |
| Upload video | Uploads the generated video binary to a specific folder in Google Drive (n8n_videos). The file name is taken from the binary metadata or defaults to video.mp4. |
| Download Video from Drive1 | Immediately downloads the uploaded video by file ID, this time as binary data suitable for passing into Blotato. |

Using Drive ensures that the file is properly hosted and addressable when uploading to Blotato's media API.

6. Uploading to Blotato and Creating Posts

First, the video is sent to Blotato as a media asset. Then three separate posts are created for TikTok, Instagram, and Facebook.

Media Upload

| Node Name | Role in Workflow |
| --- | --- |
| Upload media1 | Blotato media node. Uploads the binary video as a media asset and returns a public url used by all post creation nodes. |

Platform-Specific Post Creation

From the uploaded media, the workflow fans out into three branches:

| Node Name | Platform | Role in Workflow |
| --- | --- | --- |
| Create TikTok post | TikTok | Creates a TikTok post using the media URL and socialMediaText as the caption. Also sets the flag that the video is AI-generated. |
| Create post1 | Instagram | Creates an Instagram (or related) post linked to the Blotato account (e.g., aimaginate_xx), using the same socialMediaText and media URL. |
| Create Facebook post | Facebook Page | Creates a Facebook post on your specified page, again using the shared caption and media URL. |

At this point, all three platforms have an initial "post submission" created via Blotato. Next, the workflow tracks their publishing status.

7. Status Monitoring & Retry Loops

Each platform has its own mini-loop that polls Blotato until the post is either published or failed.
TikTok Status Loop

| Node Name | Role in Workflow |
| --- | --- |
| Wait | Initial 5-second (or configured) pause after creating the TikTok post. |
| Check Post Status | Blotato "get" operation that fetches the current status (published, in-progress, etc.) of the TikTok post by postSubmissionId. |
| Published to TikTok? | IF node that checks if status === "published". |
| Confirm publishing to Tiktok | Telegram node that notifies you when the TikTok post is live (often including a link or at least a confirmation text). |
| In Progress? | IF node that checks if status === "in-progress". |
| Give Blotat other 5s :) | Wait node that sleeps a bit before checking again. Feeds back into Published to TikTok? to create a polling loop. |
| Send TikTok Error Message | Telegram node that informs you if the status is neither published nor in progress (i.e., a failure). |

Instagram Status Loop

| Node Name | Role in Workflow |
| --- | --- |
| Wait1 | Wait node after Create post1, giving Blotato time to process the Instagram post. |
| Get post1 | Blotato "get" operation to read the status of the Instagram post. |
| Published to Instagram? | IF node checking if status === "published". |
| Confirm publishing to Instagram | Telegram message with confirmation that your Instagram post is live. |
| In Progress?1 | IF node checking if status === "in-progress". |
| Give Blotat more time | Wait node that loops back to Published to Instagram? for another status check. |
| Send Instagram Error Message | Telegram notification if the Instagram post fails. |

Facebook Status Loop

| Node Name | Role in Workflow |
| --- | --- |
| Wait2 | Wait node after Create Facebook post (longer pause, e.g. 30 seconds). |
| Get Facebook post | Blotato "get" operation to fetch Facebook post status. |
| Published to Facebook? | IF node testing for status === "published". |
| Confirm publishing to Facebook | Telegram notification that the Facebook post is online. |
| In Progress?2 | IF node checking if the Facebook post is still in progress. |
| Give Blotat other 5s :)2 | Wait node that loops back into Published to Facebook? for repeated checks. |
| Send Facebook Error Message | Telegram notification if the Facebook post fails or ends in an unexpected state. |

This structure ensures that each platform is monitored independently, with clear success or error messages delivered right into your Telegram chat.

🛠️ Personalizing Your Video Content Bot

The real power of this workflow is how easy it is to adapt it to your own style, platforms, and preferences.

1. Tweak the AI Prompt & Behavior
- **Where:** Inside the AI Agent node, in the System Message (systemMessage) of the options.
- **What you can change:**
  - The tone (funny, educational, corporate, storytelling, etc.).
  - The level of detail in the video prompt (camera moves, style, characters, environment).
  - The caption structure (hook, value, CTA, hashtag strategy).
  - Whether the agent should produce multiple variants or just one.
- You can also extend it, e.g.:
  - Ask for multi-slide carousel prompts instead of a single video.
  - Force a language (e.g., always English, always German).
  - Add platform-specific hints (e.g., stronger hooks for TikTok).

2. Change Video Model or Aspect Ratio
- **Where:** Generate a video node.
- **Options:**
  - Swap the model (within the Gemini node) if you want higher quality or different behavior.
  - Adjust the aspectRatio from 9:16 to 16:9 or 1:1 depending on your target platform.

3. Modify Which Platforms You Post To
- **Where:** Blotato nodes: Create TikTok post, Create post1 (Instagram), Create Facebook post.
- You can:
  - Disable or delete branches for platforms you don't use.
  - Add new accounts or platforms supported by Blotato.
  - Use different captions per platform (e.g. shorter for TikTok, more detailed for Facebook) by adding extra AI formatting steps.

4. Adjust Wait Times and Retry Logic
- **Where:** Wait and IF nodes: Wait, Wait1, Wait2, Give Blotat other 5s :), Give Blotat more time, Give Blotat other 5s :)2.
- **What:**
  - Increase or decrease retry intervals.
  - Limit the number of loops (e.g. by adding a counter) if needed.
  - Customize error messages in the Telegram nodes.

5. Extend Logging & Analytics
- **Where:** Save Prompt & Post-Text (Google Sheets).
- **Ideas:**
  - Add more columns (timestamp, platform flags, target audience, campaign name).
  - Write back publish status and final URLs after success.
  - Use this sheet as a content inventory or analytics base.

In short, this workflow gives you a full AI-powered video pipeline:
1. Idea (text/voice) via Telegram
2. Drafting & approval of video prompt + caption via the OpenAI agent
3. Video generation with Gemini (veo)
4. Staging in Google Drive
5. Auto-posting and status monitoring across TikTok, Instagram & Facebook via Blotato
6. All communication and confirmations returned directly to Telegram

And everything is fully editable so you can adapt it precisely to your personal brand and content strategy.
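The per-platform status loops described above can be sketched as a single polling routine; `getPostStatus` is a hypothetical stand-in for the Blotato "get" operation, and the status strings mirror those checked by the IF nodes:

```javascript
// Sketch of one platform's status-polling loop (assumptions: getPostStatus
// is a stand-in for the Blotato "get" operation; it resolves to
// "published", "in-progress", or some failure status).
async function waitUntilPublished(postId, getPostStatus, { delayMs = 5000, maxTries = 20 } = {}) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const status = await getPostStatus(postId);
    if (status === 'published') return 'published';       // → confirmation message
    if (status !== 'in-progress') return 'failed';        // → error branch
    await new Promise((r) => setTimeout(r, delayMs));     // "give Blotato another 5s"
  }
  return 'timed-out'; // capped retries, as suggested in "Adjust Wait Times and Retry Logic"
}
```

The `maxTries` cap is the loop counter the customization section recommends adding so a stuck post can't poll forever.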
by Rully Saputra
Sign up for Decodo — get better pricing here

Overview
This workflow automatically collects the latest AI research papers using Decodo, extracts and summarizes PDFs with AI, stores insights in Google Sheets, and notifies users via Telegram. It turns complex academic research into structured, readable knowledge with zero manual effort.

Who's this for
This template is ideal for:
- AI researchers and ML engineers tracking new papers
- Founders and product teams monitoring AI trends
- Content writers and analysts creating research-based content
- Educators, students, and newsletter creators
- Anyone who wants automated, summarized research without reading full papers

How it works / What it does
1. A schedule trigger starts the workflow automatically
2. Decodo fetches the latest AI research listings from arXiv reliably and at scale
3. Article titles and PDF links are extracted and structured
4. Each paper PDF is downloaded and converted to text
5. An AI summarization chain generates concise, human-readable summaries
6. Results are saved to Google Sheets as a research database
7. A Telegram message notifies users when new summaries are available

How to set up
1. Add your Decodo API credentials (required)
2. Connect your OpenAI / ChatGPT-compatible model for summarization
3. Connect Google Sheets and choose your target spreadsheet
4. Add your Telegram bot credentials and chat ID
5. Adjust the schedule trigger if needed, then activate the workflow

Requirements
- n8n (self-hosted required due to community node usage)
- **Decodo community node** (web extraction)
- OpenAI or compatible AI model credentials
- Google Sheets account
- Telegram bot access

⚠️ Disclaimer: This workflow uses a community node and is supported on self-hosted n8n only.
How to customize the workflow

- Change the arXiv category to track different research domains
- Modify the AI prompt to adjust summary length or tone
- Replace Google Sheets with another database or knowledge base
- Disable Telegram notifications if not needed
- Extend the workflow for SEO blogs, newsletters, or RAG pipelines
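Changing the arXiv category usually means pointing the Decodo extraction at a different listing URL. A minimal sketch, assuming the standard arXiv "recent" listing pattern (the exact URL your Decodo node targets may differ):

```javascript
// Sketch: build the arXiv "recent" listing URL for a given category.
// Category codes such as cs.AI, cs.LG, and stat.ML are standard arXiv identifiers.
function arxivListingUrl(category) {
  return `https://arxiv.org/list/${category}/recent`;
}
```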
by Khairul Muhtadin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Who is this for?

Automation enthusiasts, content creators, or social media managers who post article-based threads to Bluesky and want to automate the process end-to-end.

What problem is this solving?

Manual content repackaging and posting is repetitive and time-consuming. This workflow automates the process from capturing article URLs (via Telegram or RSS) to scraping content, transforming it into a styled thread, and posting it on Bluesky.

What this workflow does

- Listens on Telegram or fetches from RSS feeds (AI Trends, Machine Learning Mastery, Technology Review).
- Extracts content from URLs using JinaAI.
- Converts the article into a neat, scroll-stopping thread via LangChain + Gemini / OpenAI ChatGPT.
- Splits the thread into multiple posts. The first post is published with "Create a Post", while subsequent posts are replies.
- Adds short delays between posts to avoid rate limits.

Setup

1. Add credentials for Telegram Bot API, JinaAI, Google Gemini, and Bluesky App Password.
2. Add or customize RSS feeds if needed.
3. Test with a sample URL to validate the posting sequence.

How to customize

- Swap out RSS feeds or trigger sources.
- Modify prompt templates or thread-formatting rules in the LangChain/Gemini node.
- Adjust wait times or content-parsing logic.
- Replace Bluesky with another posting target if desired.

Made by: Khaisa Studio
Need custom workflows? Contact Me!
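The split-into-posts step can be sketched as follows. This is a hypothetical illustration assuming Bluesky's 300-character post limit; the template's LangChain/Gemini node may split the thread differently:

```javascript
// Sketch: split thread text into Bluesky-sized posts on word boundaries.
// Assumes no single word exceeds the limit (300 chars for Bluesky).
function splitThread(text, limit = 300) {
  const words = text.split(/\s+/);
  const posts = [];
  let current = "";
  for (const word of words) {
    const candidate = current ? `${current} ${word}` : word;
    if (candidate.length > limit) {
      posts.push(current); // current chunk is full; start a new post
      current = word;
    } else {
      current = candidate;
    }
  }
  if (current) posts.push(current);
  return posts;
}
```

The first element would go to the "Create a Post" node and the rest to the reply nodes, with a short Wait between each to stay under rate limits.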
by Gilbert Onyebuchi
This workflow leverages n8n to automate LinkedIn content creation from start to finish. Upload an image and quote through a web form, and get a professionally designed post with AI-generated captions, ready to publish in seconds.

Features

- Randomly selects from 6 professional design templates for visual variety
- Converts HTML designs to high-quality images (90-95% JPEG quality)
- Generates engaging captions using OpenAI's GPT models
- Built-in caption editor for customization before posting
- Direct publishing to LinkedIn profiles or company pages
- Auto-compresses images for optimal LinkedIn upload

Prerequisites

- N8N Instance: A running n8n instance (cloud or self-hosted)
- OpenAI API: Active account with API access for caption generation
- LinkedIn Account: Profile or company page with API access
- Image Conversion API: HTML CSS to Image account
- Web Hosting: Platform to host the web form (Netlify, Vercel, or custom server)

Setup Instructions

1. Deploy Web Form
- Download the provided web form template
- Host it on your preferred platform
- Copy both webhook URLs from your n8n workflow
- Update the form's webhook endpoints with your n8n URLs

2. Configure Image Conversion
- Sign up at htmlcsstoimage.com
- Get your API credentials (User ID + API Key)
- Add them to the HTTP Request node as Basic Auth credentials

3. Connect OpenAI API
- Create an API key at OpenAI Platform
- In the ChatGPT HTTP Request node, add a Header parameter:
  - Key: Authorization
  - Value: Bearer YOUR_API_KEY
- Recommended model: gpt-4 or gpt-3.5-turbo

4. Authenticate LinkedIn
- Create a LinkedIn OAuth2 credential in n8n
- Follow the authentication flow and grant the required permissions
- Select the credential in the "Create a post" LinkedIn node
- Choose the post destination (personal profile or company page)

5. Test the Workflow
- Submit test data through the web form
- Monitor the n8n execution panel for successful completion
- Verify image generation, caption quality, and LinkedIn posting
- Adjust settings as needed based on results

Notes

- Processing time averages 10-20 seconds from upload to preview
- All 6 design templates are fully responsive and LinkedIn-optimized
- The caption editor allows full customization before publishing to LinkedIn
- For questions or issues, please contact me for consulting and support: LinkedIn
- 🔗 Test with sample data first

Access Web Form Template
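For step 2 above, HTTP Basic Auth simply encodes the User ID and API Key into an `Authorization` header. n8n's Basic Auth credential builds this for you; the sketch below only illustrates what is sent, under the assumption that the service follows standard Basic Auth:

```javascript
// Sketch: the Authorization header for an HTTP Basic Auth request
// (User ID as username, API Key as password). n8n handles this automatically
// when you attach a Basic Auth credential to the HTTP Request node.
function basicAuthHeader(userId, apiKey) {
  const token = Buffer.from(`${userId}:${apiKey}`).toString("base64");
  return { Authorization: `Basic ${token}` };
}
```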