by Jonas Frewert
**Blog Post Content Creation (Multi-Topic with Brand Research, Google Drive, and WordPress)**

**Description**

This workflow automates the full lifecycle of generating and publishing SEO-optimized blog posts from a list of topics. It reads topics (and optionally brands) from Google Sheets, performs brand research, generates a structured HTML article via AI, converts it into an HTML file for Google Drive, publishes a draft post on WordPress, and repeats this for every row in the sheet. When the final topic has been processed, a single Slack message is sent to confirm completion and share links.

**How It Works**

1. **Input from Google Sheets.** A Google Sheets node reads rows containing at least a Brand (optional, can be defaulted) and a Blog Title or Topic. A Split In Batches node iterates through the rows one by one so each topic is processed independently.
2. **Configuration.** The Configuration node maps each row's values into Brand and Blog Title. These values are used consistently across brand research, content creation, file naming, and WordPress publishing.
3. **Brand Research.** A Language Model Chain node calls an OpenRouter model to gather background information about the brand and its services. The brand context is used as input for better, on-brand content generation.
4. **Content Creation.** A second Language Model Chain node uses the brand research and the blog title or topic to generate a full-length, SEO-friendly blog article. Output is clean HTML with:
   - Exactly one `<h1>` at the top
   - Structured `<h2>` and `<h3>` headings
   - Semantic tags only
   - No inline CSS
   - No `<html>` or `<body>` wrappers
   - No external resources
5. **HTML Processing.** A Code node in JavaScript:
   - Strips any markdown-style code fences around the HTML
   - Normalizes paragraph breaks
   - Builds a safe file name from the blog title
   - Encodes the HTML as a binary file payload
6. **Upload to Google Drive.** A Google Drive node uploads the generated HTML file to a specified folder. Each topic creates its own HTML file, named after the blog title.
7. **Publish to WordPress.** An HTTP Request node calls the WordPress REST API to create a post. The post content is the generated HTML, and the title comes from the Configuration node. By default, the post is created with status `draft` (this can be changed to `publish` if desired).
8. **Loop Control and Slack Notification.** After each topic is processed (Drive upload and WordPress draft), the workflow loops back to Split In Batches to process the next row. When there are no rows left, an IF node detects that the loop has finished. Only then is a single Slack message sent to:
   - Confirm that all posts have been processed
   - Share links to the last generated Google Drive file and WordPress post

**Integrations Used**

- OpenRouter: AI models for brand research and SEO content generation
- Google Sheets: source of topics and (optionally) brands
- Google Drive: storage for generated HTML files
- WordPress REST API: blog post creation (drafts or published posts)
- Slack: final summary notification when the entire batch is complete

**Ideal Use Case**

- Content teams and agencies managing a queue of blog topics in a spreadsheet
- Marketers who want a hands-off pipeline from topic list to WordPress drafts
- Teams who need generated HTML files stored in Drive for backup, review, or reuse
- Any workflow where automation should handle the heavy lifting and humans only review the final drafts

**Setup Instructions**

- **Google Sheets:** Create a sheet with columns like Brand and Blog Title or Topic. In the Get Blog Topics node, set the sheet ID and range to match your sheet. Add your Google Sheets credentials in n8n.
- **OpenRouter (LLM):** Add your OpenRouter API key as credentials. In the OpenRouter Chat Model nodes, select your preferred models and options if you want to customize behavior.
- **Google Drive:** Add Google Drive credentials. Update the folder ID in the Upload file node to your target directory.
- **WordPress:** In the Publish to WordPress node, replace the example URL with your site's REST API endpoint.
Configure authentication (for example, Application Passwords or Basic Auth), and adjust the `status` field (`draft` or `publish`) to match your desired workflow.
- **Slack:** Add Slack OAuth credentials and set the channel ID in the Slack node where the final summary message should be posted.
- **Run the Workflow:** Click Execute Workflow. The workflow loops through every row in the sheet, generating content, saving HTML files to Drive, and creating WordPress posts. When all rows have been processed, a single Slack notification confirms completion.
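The HTML-processing and file-naming logic described in step 5 can be sketched in plain JavaScript, the language n8n Code nodes run. The helper names and the exact regexes below are illustrative assumptions, not the node's actual contents:

```javascript
// Sketch of the HTML-processing step: strip markdown code fences,
// normalize paragraph breaks, and build a safe file name from the title.
function cleanHtml(raw) {
  return raw
    .replace(/^```(?:html)?\s*/i, '')   // leading code fence, if the model added one
    .replace(/\s*```\s*$/, '')          // trailing code fence
    .replace(/\n{3,}/g, '\n\n')         // collapse runs of blank lines
    .trim();
}

function safeFileName(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // replace runs of non-alphanumerics with hyphens
    .replace(/^-+|-+$/g, '')      // trim stray leading/trailing hyphens
    + '.html';
}

const html = cleanHtml('```html\n<h1>My Post</h1>\n\n\n<p>Body</p>\n```');
const name = safeFileName('10 SEO Tips: 2025 Edition!');
// In a real Code node, the HTML would then be encoded as binary data,
// e.g. Buffer.from(html, 'utf8'), for the Google Drive upload node.
```

In the actual workflow the resulting buffer is attached as a binary payload so the Google Drive node can upload it as a file.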
by Nguyen Thieu Toan
🤖 **Build a customer service AI chatbot for Facebook Messenger with Google Gemini**

**📌 Overview**

A streamlined Facebook Messenger chatbot powered by AI with conversation memory. This is a simplified version designed for quick deployment, learning, and testing; it is not suitable for production environments.

Base workflows:
- Smart message batching AI-powered Facebook Messenger chatbot (uses a Data Table)
- Smart human takeover & auto-pause AI-powered Facebook Messenger chatbot

**🎯 What This Workflow Does**

✅ Core Features:
- Receives messages from Facebook Messenger via webhook
- Processes user messages with Google Gemini AI
- Maintains conversation context using the Simple Memory node
- Automatically responds with AI-generated replies
- Handles webhook verification for Facebook setup
- Sends images or videos to the customer through Facebook Messenger

🔹 Simplified Approach:
- **Memory**: Simple Memory node (10-message window)
- **Format**: cleans text, strips markdown, truncates replies over 1900 characters
- **Response**: single message delivery

⚠️ Limitations & Trade-offs:
- No smart batching → fragmented user messages cause spam-like replies
- No human takeover detection → the bot continues even when an admin joins
- Basic memory management → no persistence, not reliable in production
- Basic text formatting → strips markdown, truncates bluntly, no smart splitting

**🚀 When to Upgrade**

Upgrade to the full workflows when you need:
- Production deployment with reliability & persistence
- Analytics & tracking (query history, reports)
- Professional formatting (bold, italic, lists, code blocks)
- Handling of long messages (>2000 chars)
- Smart batching for fragmented inputs
- Human handoff detection
- Full conversation persistence

Key upgrades available:
- Smart message batching workflow
- Smart human takeover workflow

**⚙️ Setup Requirements**

Facebook Setup:
1. Create a Facebook App at developers.facebook.com
2. Add the Messenger product
3. Configure the webhook — URL: https://your-domain.com/webhook/your-path; Verify token: a secure string; Subscribe to: messages, messaging_postbacks
4. Generate a Page Access Token
5. Copy the token to the "Set Context" node

n8n Setup:
1. Import the workflow
2. Edit the "Set Context" node → update page_access_token
3. Configure the "Gemini Flash" node credentials
4. Deploy the workflow (it must be publicly accessible)

**🔄 How It Works**

User Message → Facebook Webhook → Validation
↓
Set Context (extract user_id, message, token)
↓
Mark Seen → Show Typing
↓
AI Agent (Gemini + 10-message memory)
↓
Format Output (remove markdown, truncate)
↓
Send Response via Facebook API

**🏗️ Architecture Overview**

Section 1: Webhook & Initial Processing
- Facebook Webhook: handles GET (verification) & POST (messages)
- Confirm Webhook: returns the challenge / acknowledges receipt
- Filters text messages only
- Blocks echo messages from the bot itself

Section 2: AI Processing with Memory
- Set Context: extracts user_id, message, token
- Seen & Typing: user feedback
- Conversation Memory: 10-message window, per-user isolation
- Process Merged Message: AI Agent with the Jenix persona
- Gemini Flash: Google's AI model for response generation

Section 3: Format & Delivery
- Cuts replies over 2000 chars, strips markdown
- Sends text via the Facebook Graph API

**🎨 Customisation Guide**

- **Bot Personality**: edit the system prompt in the "Process Merged Message" node
- **Memory**: adjust contextWindowLength (default 10); change sessionKey if needed
- **AI Model**: replace Gemini Flash with OpenAI, Anthropic Claude, or other LLMs

**📌 Important Notes**

- ⚠️ Production warning: for testing only; memory is lost on n8n restart and is unreliable in queue mode
- 📊 No analytics: no history storage, no reporting
- 🔧 Format limitations: responses ≤1800 chars, markdown stripped, no complex formatting

**🛠️ Troubleshooting**

- **Bot not responding** → check the token, webhook accessibility, and event subscriptions
- **Memory not working** → verify the session key, ensure n8n is not in queue mode, restart the workflow
- **Messages truncated** → adjust the system prompt for conciseness, reduce response length

**📜 License & Credits**

- Created by: Nguyễn Thiệu Toàn (Jay Nguyen)
- Email: me@nguyenthieutoan.com
- Website: nguyenthieutoan.com
- n8n Creator: n8n.io/creators/nguyenthieutoan
- Company: GenStaff
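The "Format Output" step described above can be sketched in JavaScript, the language n8n Code nodes run. The function name and the exact markdown patterns stripped are assumptions; the 1900-character cutoff leaves headroom under Messenger's 2000-character limit:

```javascript
// Sketch of the Format Output step: strip common markdown syntax and
// hard-truncate long replies before sending them to Messenger.
function formatForMessenger(text, limit = 1900) {
  let out = text
    .replace(/\*\*(.+?)\*\*/g, '$1')        // bold
    .replace(/\*(.+?)\*/g, '$1')            // italic
    .replace(/`{1,3}([^`]*)`{1,3}/g, '$1')  // inline or fenced code
    .replace(/^#{1,6}\s*/gm, '')            // heading markers
    .trim();
  if (out.length > limit) {
    out = out.slice(0, limit - 1).trimEnd() + '…'; // blunt truncation, as noted above
  }
  return out;
}

const reply = formatForMessenger('## Hello\n**Bold** and *italic* and `code`');
const long = formatForMessenger('a'.repeat(3000));
```

The full workflows replace this blunt truncation with smart message splitting.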
by Davide
This workflow implements an AI-powered design and prototyping assistant that integrates Telegram, Google Gemini, and Google Stitch (MCP) to enable conversational UI generation and project management.

Supported actions include:
- Creating new design projects
- Retrieving existing projects
- Listing projects and screens
- Fetching individual screens
- Generating new UI screens directly from text descriptions

**Key Advantages**

1. ✅ **Conversational Design Workflow.** Design and UI prototyping can be driven entirely through natural language. Users can create screens, explore layouts, or manage projects simply by chatting, without opening design tools.
2. ✅ **Tight Integration with Google Stitch.** By leveraging the Stitch MCP API, the workflow provides direct access to structured design capabilities such as screen generation, project management, and UI exploration, avoiding manual API calls or custom scripting.
3. ✅ **Intelligent Tool Selection.** The AI agent does not blindly call APIs. It first analyzes the user request, determines the required level of fidelity and intent, and then selects the most appropriate Stitch function or combination of functions.
4. ✅ **Multi-Channel Support.** The workflow supports both generic chat triggers and Telegram, making it flexible for internal tools, demos, or production chatbots.
5. ✅ **Security and Access Control.** Telegram access is restricted to a specific user ID, and execution only happens when a dedicated command is used. This prevents accidental or unauthorized usage.
6. ✅ **Context Awareness with Memory.** Conversational memory allows the agent to maintain context across interactions, enabling iterative design discussions rather than isolated commands.
7. ✅ **Production-Ready Output Formatting.** Responses are automatically converted into Telegram-compatible HTML, ensuring clean, readable, and well-formatted messages without manual post-processing.
8. ✅ **Extensible and Modular Architecture.** The workflow is highly modular: additional Stitch tools, AI models, or communication channels can be added with minimal changes, making it future-proof and easy to extend.

**How It Works**

This workflow functions as a Telegram-powered AI agent that leverages Google Stitch's MCP (Model Context Protocol) tools for design, UI generation, and product prototyping. It combines conversational AI, tool-based actions, and web search capabilities.

- **Trigger & Authorization:** The workflow is activated by an incoming message from a configured Telegram bot. A Code node first checks the sender's Telegram user ID against a hardcoded value (xxx) to restrict access. Only authorized users can proceed.
- **Command Parsing:** An IF node filters messages, allowing the agent to proceed only if the message text starts with the command /stitch. This ensures the agent is only invoked intentionally.
- **Query Preparation:** The /stitch prefix is stripped from the message text, and the cleaned query, along with the user's ID (used as a session identifier), is passed to the main agent.
- **AI Agent Execution:** The core "Google Stitch Agent" node is an LLM-powered agent (using Google Gemini) equipped with:
  - Tools: access to several Google Stitch MCP functions (create_project, get_project, list_projects, list_screens, get_screen, generate_screen_from_text) and a Perplexity web search tool.
  - Memory: a conversation buffer window to maintain context within a session.
  - System Prompt: instructs the agent to intelligently select and use the appropriate Stitch tools based on the user's design-related request (e.g., generating screens from text, managing projects). It is directed to use web search when necessary for additional context.
- **Response Processing & Delivery:** The agent's text output (in Markdown) is passed through another LLM chain ("From MD to HTML") that converts it to Telegram-friendly HTML. Finally, the formatted response is sent back to the user via the Telegram bot.
**Set Up Steps**

To make this workflow operational, you need to configure credentials and update specific nodes:

- **Telegram Bot Configuration:** In the "Code" node (id: 08bfae9e...), replace xxx in the condition $input.first().json.message.from.id !== xxx with your actual Telegram user ID. This ensures only you can trigger the agent. Ensure the "Telegram Trigger" and "Send a text message" nodes have valid Telegram Bot credentials configured.
- **Google Stitch API Setup:** Obtain an API key from Google Stitch, then configure the HTTP Header Auth credential named "Google Stitch" (referenced by all MCP tool nodes: Create Project, Get Project, etc.). Set the Header Auth with Name: X-Goog-Api-Key and Value: your actual Google Stitch API key (YOUR-API-KEY).
- **AI Model & Tool Credentials:** Verify that the credentials for the Google Gemini Chat Model nodes are correctly set up for API access, and that the credentials for the Perplexity API node ("Search on web") are configured if web search functionality is required.
- **Activation:** Once all credentials are configured, set the workflow to Active. The Telegram webhook will be registered, and the workflow will listen for authorized messages containing the /stitch command.

👉 Subscribe to my new YouTube channel, where I share videos and Shorts with practical tutorials and free templates for n8n.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
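The authorization check and /stitch command parsing described above can be sketched as a single JavaScript function. In the actual workflow this logic is split across a Code node and an IF node; the function and field names here are illustrative, and AUTHORIZED_USER_ID stands in for your real Telegram user ID:

```javascript
// Sketch of the guard + command-parsing steps from the workflow.
const AUTHORIZED_USER_ID = 123456789; // placeholder: your Telegram user ID

function parseStitchCommand(message) {
  if (message.from.id !== AUTHORIZED_USER_ID) {
    return { allowed: false };               // unauthorized sender: stop here
  }
  const text = message.text || '';
  if (!text.startsWith('/stitch')) {
    return { allowed: false };               // not an intentional invocation
  }
  return {
    allowed: true,
    query: text.slice('/stitch'.length).trim(), // strip the command prefix
    sessionId: String(message.from.id),         // per-user session identifier
  };
}

const ok = parseStitchCommand({
  from: { id: 123456789 },
  text: '/stitch create a login screen for a fitness app',
});
const denied = parseStitchCommand({ from: { id: 1 }, text: '/stitch hi' });
```

The sessionId doubles as the memory key, which is how the agent keeps separate conversation buffers per user.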
by Incrementors
**Description**

Connect Fireflies to this workflow once, and every meeting you record becomes a LinkedIn post draft automatically. The moment Fireflies finishes transcribing a call, it fires a signal to the workflow — which fetches the full transcript, extracts real insights, and uses GPT-4o-mini to write a 180–280 word scroll-stopping post with a hook, key learnings, and hashtags. The finished draft is saved to Google Drive and previewed in Slack so you can review and publish when ready.

Built for founders, consultants, and sales leaders who want a consistent LinkedIn presence without spending time writing from scratch after every call.

**What This Workflow Does**

- **Triggers automatically when a call ends** — Fireflies sends a signal the moment transcription completes, so no manual input is ever needed
- **Validates every incoming signal** — checks that the signal contains a valid meeting ID and silently discards invalid or test pings
- **Extracts real meeting insights** — pulls speaker dialogue, Fireflies-detected pricing and question sentences, keywords, overview, and sentiment from the full transcript
- **Writes a structured LinkedIn post** — GPT-4o-mini produces a hook, a specific insight paragraph, 3–5 emoji learnings, a closing question, and hashtags, all grounded in your actual meeting content
- **Saves a complete Google Doc** — stores the post alongside meeting reference details, participants, keywords, action items, and a link back to the Fireflies transcript
- **Previews the post in Slack** — posts the first 350 characters of the draft to your Slack channel so your team can review before the post goes live
- **Exits cleanly for incomplete transcripts** — if Fireflies hasn't finished processing yet, the workflow stops silently without errors

**Setup Requirements**

Tools Needed:
- n8n instance (self-hosted or cloud)
- Fireflies.ai account with webhook access
- OpenAI account with GPT-4o-mini API access
- Google Drive (one folder where posts will be saved)
- Slack workspace with OAuth2 app configured
Credentials Required:
- Fireflies API key (pasted directly into 5. Set — Config Values)
- OpenAI API key
- Google Drive OAuth2
- Slack OAuth2

Estimated Setup Time: 15–20 minutes

**Step-by-Step Setup**

1. **Import the workflow** — Open n8n → Workflows → Import from JSON → paste the workflow JSON → click Import
2. **Activate the workflow and copy the webhook URL** — Toggle the workflow to Active → click node 1. Webhook — Fireflies Transcript Done → copy the webhook URL shown
3. **Register the webhook in Fireflies** — Log in to fireflies.ai → go to Settings → Developer Settings → Webhooks → paste the webhook URL → save
4. **Get your Fireflies API key** — In Fireflies, go to Settings → Integrations → copy your API key
5. **Fill in Config Values** — Open node 5. Set — Config Values → replace all placeholders:

| Field | What to enter |
|---|---|
| YOUR_FIREFLIES_API_KEY | Your Fireflies API key from step 4 |
| YOUR_GOOGLE_DRIVE_FOLDER_ID | The folder ID from your Google Drive URL (the string after /folders/ when you open the folder) |
| #content-team | Your Slack channel name, including the # |
| YOUR FULL NAME | The author's full name (used in the post sign-off) |
| YOUR JOB TITLE | The author's job title (e.g. CEO, SEO Consultant) |
| YOUR COMPANY NAME | Your company name (used in the AI prompt) |

6. **Connect OpenAI** — Open node 11. OpenAI — GPT-4o-mini Model → click the credential dropdown → add your OpenAI API key → test the connection
7. **Connect Google Drive** — Open node 13. Google Drive — Save LinkedIn Post → click the credential dropdown → add Google Drive OAuth2 → sign in with your Google account → authorize access
8. **Connect Slack** — Open node 14. Slack — Send Post Preview → click the credential dropdown → connect your Slack workspace via OAuth2 → invite the n8n bot to your channel in Slack (/invite @n8n)

> ⚠️ The workflow must be Active before registering the webhook in Fireflies. An inactive workflow will not receive signals from Fireflies. Activate first, then paste the URL.
**How It Works (Step by Step)**

**Step 1 — Webhook: Fireflies Transcript Done.** This step listens for a signal from Fireflies. Every time Fireflies finishes transcribing a meeting, it sends a POST request to this webhook URL containing the meeting ID. No manual trigger is needed — it fires automatically after every recorded call.

**Step 2 — Code: Extract Meeting ID.** The meeting ID is extracted from the incoming signal. Fireflies can send the payload in several different formats, so this step checks all possible locations and pulls the ID safely. If no meeting ID is found at all, a flag is set to mark the signal as invalid.

**Step 3 — IF: Valid Meeting ID?** This is the first gate check. If a valid meeting ID was found (YES path), the workflow continues to fetch the transcript. If the signal was invalid or contained no meeting ID (NO path), the workflow routes to 4. Set — Invalid Webhook Skip and stops cleanly.

**Step 4 — Set: Invalid Webhook Skip.** This step handles the invalid-signal case. It sets a brief message confirming the webhook was skipped, and the workflow ends here for that trigger.

**Step 5 — Set: Config Values.** Your Fireflies API key, Google Drive folder ID, Slack channel, author name, author title, and company name are stored here. The validated meeting ID from step 2 is also carried forward so the transcript fetch can use it directly.

**Step 6 — HTTP: Fetch Transcript.** A request is sent to the Fireflies API using your API key and the meeting ID. It retrieves the complete transcript, including all sentences with speaker labels, AI-detected pricing and task sentences, keyword summary, overview, gist, bullet points, and sentiment percentages.

**Step 7 — Code: Process Transcript Data.** The raw transcript is processed into clean, usable fields. All sentences are combined into a readable text block (limited to 5,000 characters for GPT efficiency). Fireflies-flagged pricing sentences, question sentences, and task sentences are extracted separately. Sentiment percentages, keywords, action items, and the overview are all pulled out. A formatted document title is generated automatically from the meeting name and date. If the transcript is empty or not yet available, a flag is set for the next gate check.

**Step 8 — IF: Transcript Ready?** This is the second gate check. If transcript data is available (YES path), the workflow moves on to AI post writing. If Fireflies hasn't finished processing the transcript yet (NO path), the workflow routes to 9. Set — Transcript Not Ready Skip and stops cleanly without errors.

**Step 9 — Set: Transcript Not Ready Skip.** This step handles the not-ready case. It logs the meeting ID and a message confirming the transcript was skipped. The workflow ends here for that run.

**Step 10 — AI Agent: Write LinkedIn Post.** GPT-4o-mini receives the author details, meeting context, Fireflies summary, bullet points, keywords, action items, questions raised in the call, and the transcript excerpt. It writes a 180–280 word LinkedIn post following a fixed structure: a scroll-stopping hook (not starting with "I" or "We"), a specific insight paragraph in first person, 3–5 emoji key learnings pulled from real transcript content, a closing question or call to action, and 4–5 hashtags. A sign-off with the author's name and title is added at the end.

**Step 11 — OpenAI: GPT-4o-mini Model.** This is the language model powering the writing step. It runs at temperature 0.8 for creative, varied output and is capped at 700 tokens to keep the post within the target word count.

**Step 12 — Code: Build Doc and Slack Message.** The AI-generated post is assembled into a complete Google Doc with the post text at the top, followed by meeting reference details: title, date, duration, participants, Fireflies transcript link, keywords, and action items. A Slack preview is also built here: the first 350 characters of the post plus meeting details and a link back to the Fireflies transcript.
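A minimal sketch of the transcript-processing and preview logic from steps 7 and 12 follows, in JavaScript as an n8n Code node would run it. The field names (sentences, speaker_name) and the date format are assumptions, not the workflow's exact code:

```javascript
// Sketch of step 7 (combine sentences, cap length, build doc title)
// and the step-12 Slack preview cut.
function processTranscript(meeting) {
  const text = meeting.sentences
    .map(s => `${s.speaker_name}: ${s.text}`)
    .join('\n')
    .slice(0, 5000);                          // keep the GPT prompt compact
  const date = new Date(meeting.date).toISOString().slice(0, 10);
  const docTitle = `LinkedIn Post — ${meeting.title} — ${date}`;
  return { text, docTitle, ready: text.length > 0 };
}

function slackPreview(post) {
  return post.length > 350 ? post.slice(0, 350) + '…' : post; // first 350 chars
}

const result = processTranscript({
  title: 'Client Strategy Call',
  date: '2025-04-14T10:00:00Z',
  sentences: [{ speaker_name: 'Ana', text: 'Pricing came up twice.' }],
});
const preview = slackPreview('x'.repeat(400));
```

The `ready` flag plays the role of the step-8 gate check: an empty transcript routes to the clean-exit path instead of the AI writer.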
**Step 13 — Google Drive: Save LinkedIn Post.** The complete document is saved to your specified Google Drive folder. The file is named automatically using the meeting title and date (e.g. "LinkedIn Post — Client Strategy Call — 14 Apr 2025").

**Step 14 — Slack: Send Post Preview.** The preview message is posted to your Slack channel at the same time the Google Doc is saved. Your team sees the post hook and first paragraph instantly, with the full document link available in Drive for review before publishing.

**Key Features**

- ✅ **Fully automatic — zero manual trigger** — Fireflies fires the workflow the moment any call transcript is ready; no human action needed
- ✅ **Two validation gates** — invalid webhook signals and unready transcripts both exit cleanly without causing errors or empty posts
- ✅ **Grounded in real content** — the AI prompt feeds actual transcript sentences, keywords, bullet points, and action items, so posts are specific, not generic
- ✅ **Fixed post structure every time** — hook, insight paragraph, emoji learnings, closing CTA, hashtags, and sign-off are enforced on every run
- ✅ **Auto-named Google Docs** — files are named by meeting title and date automatically, so your Drive folder stays organized without manual renaming
- ✅ **Slack preview before publishing** — your team sees the draft before it goes live: one review step, no surprises
- ✅ **Handles all known Fireflies payload formats** — the extraction step checks every known payload structure so the webhook does not silently fail on a format change
- ✅ **Temperature tuned for creative writing** — GPT runs at 0.8 so each post has a natural, human tone rather than a repetitive AI pattern

**Customisation Options**

- **Change the post length target** — In node 10. AI Agent — Write LinkedIn Post, edit the instruction from "180 to 280 words" to a different range. Also adjust maxTokens in node 11. OpenAI — GPT-4o-mini Model accordingly (e.g. set it to 900 for longer posts).
- **Add a second post format** — After node 10. AI Agent — Write LinkedIn Post, add a second AI Agent step with a different prompt structure (e.g. a short three-sentence insight post or a carousel-style numbered list) to generate two post options per call instead of one.
- **Route posts by meeting type** — In node 5. Set — Config Values, add a postCategory field. Then add an IF check after step 7 that reads the meeting title: if it contains "demo" or "sales", use a sales-focused prompt; if it contains "team" or "internal", use a thought-leadership prompt.
- **Save to a dated subfolder in Drive** — In node 12. Code — Build Doc and Slack Message, generate a folder path string from the meeting date (e.g. 2025/April) and have the Google Drive step create or find that subfolder before saving, keeping your Drive organized by month automatically.
- **Add a Notion database entry** — After node 13. Google Drive — Save LinkedIn Post, add a Notion API HTTP request that creates a new row in a content-calendar database with the post title, meeting date, status (Draft), and Google Drive link for content-planning visibility.

**Troubleshooting**

Workflow not triggering when a call ends:
- Confirm the workflow is Active before expecting Fireflies to fire it — inactive workflows do not receive webhooks
- Log in to Fireflies → Settings → Developer Settings → Webhooks → confirm the webhook URL is saved correctly and matches the URL from node 1. Webhook — Fireflies Transcript Done
- Check that your Fireflies plan includes webhook support — some plans restrict this feature

Fireflies API key error or empty transcript:
- Confirm YOUR_FIREFLIES_API_KEY in node 5. Set — Config Values is replaced with your actual key, not the placeholder text
- Get your key from fireflies.ai → Settings → Integrations → API Key
- If the transcript returns empty, the call may not have been processed by Fireflies yet — the workflow exits cleanly via 9. Set — Transcript Not Ready Skip in this case

OpenAI not generating the post:
- Confirm the API key is connected in node 11. OpenAI — GPT-4o-mini Model and your account has available credits
- Check the execution log of node 10. AI Agent — Write LinkedIn Post for the raw error message
- If the post is under 50 characters, node 12. Code — Build Doc and Slack Message catches this and outputs a failure message instead of a broken doc

Google Drive not saving the file:
- Confirm the Google Drive OAuth2 credential in node 13. Google Drive — Save LinkedIn Post is connected and not expired — re-authorize if needed
- Check that YOUR_GOOGLE_DRIVE_FOLDER_ID in node 5. Set — Config Values is the folder ID from your Drive URL, not the full URL — copy only the string after /folders/
- Make sure the Google account you authorized has write access to the target folder

Slack preview not arriving:
- Confirm the Slack OAuth2 credential in node 14. Slack — Send Post Preview is connected and authorized
- Check that the channel name in node 5. Set — Config Values includes the # prefix and matches your Slack channel exactly
- Type /invite @n8n in the target Slack channel to ensure the bot has permission to post

**Support**

Need help setting this up, or want a custom version built for your team or agency?
📧 Email: info@incrementors.com
🌐 Website: https://www.incrementors.com/
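For illustration, the multi-format meeting-ID extraction from step 2 could look like the following JavaScript. The candidate payload paths are hypothetical examples of the idea, not a guaranteed list of Fireflies payload shapes:

```javascript
// Sketch of step 2: check several candidate locations for the meeting ID
// and flag the signal as invalid if none of them is present.
function extractMeetingId(body) {
  const candidates = [
    body?.meetingId,          // hypothetical flat shape
    body?.meeting_id,         // hypothetical snake_case shape
    body?.data?.meetingId,    // hypothetical nested shape
    body?.transcript?.id,     // hypothetical transcript-object shape
  ];
  const id = candidates.find(v => typeof v === 'string' && v.length > 0);
  return id ? { valid: true, meetingId: id } : { valid: false };
}

const ok = extractMeetingId({ data: { meetingId: 'abc123' } });
const bad = extractMeetingId({ ping: true }); // e.g. a test ping
```

The `valid` flag is what the step-3 IF node would branch on: true continues to the transcript fetch, false routes to the clean-exit path.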
by Akshay
**Overview**

This project is an AI-powered WhatsApp virtual receptionist built using n8n, designed to handle both text and voice customer messages automatically. The workflow integrates Google Gemini, Pinecone, and the WhatsApp Business API to provide intelligent, context-aware responses that feel natural and professional.

**How It Works**

- **Message Detection.** The workflow begins when a message arrives on WhatsApp. It identifies whether the message is text or voice and routes it accordingly.
- **Voice Message Handling.** Audio messages are securely downloaded from WhatsApp, converted to Base64, and sent to the Gemini API for transcription. The transcribed text is then passed to the AI Agent for further processing.
- **AI Agent Processing.** The LangChain AI Agent acts as the brain of the system. It uses:
  - **Google Gemini Chat Model** for natural language understanding and response generation.
  - **Pinecone Vector Store** to retrieve company-specific information and product data.
  - **Memory Buffer** to remember the last 20 user messages, ensuring context-aware responses.

  The agent also follows a set of custom communication rules: replying only in approved languages, skipping greetings, and focusing on direct, helpful, and professional responses (e.g., product recommendations, support, or guidance).
- **Knowledge Retrieval.** The AI Agent connects to a Pinecone database containing detailed company data, such as product catalogs or service FAQs. Using Gemini-generated embeddings, it retrieves the most relevant information for each user query.
- **Response Delivery.** Once the AI Agent prepares the response, it is instantly sent back to the user via WhatsApp, completing the conversational loop.

**Who It's For**

This system is ideal for businesses seeking to automate customer communication through WhatsApp. It's especially valuable for:
- **Product-based companies** with frequent customer inquiries.
- **Service providers** offering 24/7 customer assistance or quote requests.
- **SMBs** looking to scale their communication without hiring additional staff.

**Tech Stack & Requirements**

- **n8n**: workflow automation and orchestration.
- **WhatsApp Cloud API**: for sending and receiving messages.
- **Google Gemini (PaLM)**: for LLM-based transcription and response generation.
- **Pinecone**: vector database for product and service knowledge retrieval.
- **LangChain Integration**: for connecting memory, vector store, and reasoning tools.
- **Custom Business Rules**: configurable within the AI Agent node to manage tone, style, and workflow behavior.

**Key Features**

- Handles both text and voice messages seamlessly.
- Responds in multiple languages, including English.
- Maintains conversation memory per user session.
- Retrieves accurate company-specific information using vector search.
- Fully automated, with customizable behavior for different industries or use cases.

**Setup Instructions**

1. **Prerequisites.** Before importing the workflow, ensure you have:
   - An active n8n instance (self-hosted or n8n Cloud).
   - **WhatsApp Cloud API credentials** from Meta.
   - A **Google Gemini API key** with model access (for chat and transcription).
   - A **Pinecone API key** with a preconfigured vector index containing your company data.
2. **Environment Setup.** Install all required credentials under Settings → Credentials in n8n. Add environment variables (if applicable) for keys such as:
   - GOOGLE_API_KEY=your_google_gemini_key
   - PINECONE_API_KEY=your_pinecone_key
   - WHATSAPP_ACCESS_TOKEN=your_whatsapp_token
3. **Pinecone Configuration.** Create a Pinecone index named, for example, products-index. Upload company documents or product details as vector embeddings using Gemini or LangChain utilities. Adjust the retrieval limit in the Pinecone node settings for broader or narrower search responses.
4. **WhatsApp API Configuration.** Set up a WhatsApp Business Account via the Meta Developer Dashboard. Create a webhook endpoint URL (n8n's public URL) to receive WhatsApp messages. Use the WhatsApp Trigger node to capture messages in real time.
5. **AI Agent Customization.** You can personalize how the AI behaves by editing the system prompt inside the AI Agent node:
   - Modify tone, response length, or product focus.
   - Add new "rules" for language preferences or conversation flow.
   - Include links or custom text output (e.g., quotation formats, product catalog messages).
6. **Handling Voice Messages.** Ensure your WhatsApp Business Account has media message permissions enabled. Verify that the HTTP Request node connecting to the Gemini API for transcription is correctly authenticated. You can adjust the transcription model or prompt if you prefer shorter, keyword-based outputs.
7. **Testing.** Send both text and voice messages from a test WhatsApp number. Check response time and message formatting. Use n8n's execution logs to debug errors (especially for media downloads or API credentials).

**Customization Options**

🧩 **AI Behavior**
- Modify the AI Agent's system message to adapt tone and personality (e.g., sales-oriented, support-driven).
- Update the memory length (default: last 20 messages) for longer or shorter conversations.

🌍 **Multi-language Support**
- Add or remove allowed languages in the rules section of the AI Agent node.
- For multilingual businesses, duplicate the AI Agent path and route messages by language detection.

📦 **Industry Adaptation**
- Swap the Pinecone dataset to suit different industries: retail, hospitality, logistics, etc.
- Replace product data with FAQs, customer records, or support documentation.
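The Base64 conversion for voice messages described in "Voice Message Handling" can be sketched in JavaScript, as an n8n Code node would run it. The inline_data payload shape follows the general pattern of Gemini-style REST APIs but should be verified against the current Gemini documentation before use:

```javascript
// Sketch of the voice-message step: encode the downloaded audio buffer as
// Base64 for the transcription request. In n8n the buffer would come from
// the previous node's binary data; a small stand-in buffer is used here.
function toBase64Payload(audioBuffer, mimeType) {
  return {
    inline_data: {                 // assumed payload shape; check the API docs
      mime_type: mimeType,
      data: audioBuffer.toString('base64'),
    },
  };
}

const payload = toBase64Payload(Buffer.from('fake-audio-bytes'), 'audio/ogg');
```

The resulting object would be embedded in the HTTP Request node's JSON body alongside a transcription prompt.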
by Cojocaru David
This n8n template demonstrates how to automatically generate and publish blog posts using trending keywords, AI-generated content, and watermarked stock images. Use cases include maintaining an active blog with fresh SEO content, scaling content marketing without manual writing, and automating the full publishing pipeline from keyword research to WordPress posting. Good to know At time of writing, each AI content generation step will incur costs depending on your OpenAI pricing plan. Image search is powered by Pexels, which provides free-to-use stock images. The workflow also applies a watermark for branding. Google Trends data may vary by region, and results depend on availability in your selected location. How it works The workflow begins with a scheduled trigger that fetches trending keywords from Google Trends. The XML feed is converted to JSON and filtered for relevant terms, which are logged into a Google Sheet for tracking. One random keyword is selected, and OpenAI is used to generate blog content around it. A structured output parser ensures the text is clean and well-formatted. The system then searches Pexels for a matching image, uploads it, adds metadata for SEO, and applies a watermark. Finally, the complete article (text and image) is published directly to WordPress. How to use The schedule trigger is provided as an example, but you can replace it with other triggers such as webhooks or manual inputs. You can also customize the AI prompt to match your niche, tone, or industry focus. For higher volumes, consider adjusting the keyword filtering and batching logic. Requirements OpenAI account for content generation Pexels API key for stock image search Google account with Sheets for keyword tracking WordPress site with API access for publishing Customising this workflow This automation can be adapted for different use cases. Try adjusting the prompts for technical blogs, fashion, finance, or product reviews. 
You can also replace the image source with other providers or integrate your own media library. The watermark feature ensures branding, but it can be modified or removed depending on your needs.
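The keyword filtering and random-selection step can be sketched as a Code-node function. This is an illustrative sketch only; the stop-list and field names are assumptions, so adapt them to your niche and to the actual JSON produced from the Trends feed:

```javascript
// Filter trending keywords and pick one at random, as the workflow's
// Code node might do after converting the Google Trends XML feed to JSON.
// STOP_TERMS is an illustrative relevance filter, not part of the template.
const STOP_TERMS = ['nfl', 'lottery', 'weather'];

function pickKeyword(items) {
  const relevant = items
    .map(i => i.title.trim())
    .filter(t => t.length > 2 && !STOP_TERMS.some(s => t.toLowerCase().includes(s)));
  if (relevant.length === 0) return null; // nothing usable this run
  return relevant[Math.floor(Math.random() * relevant.length)];
}
```

For higher volumes, this is also where you would batch several keywords per run instead of picking a single one.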
by DIGITAL BIZ TECH
Travel Reimbursement - OCR & Expense Extraction Workflow Overview This is a lightweight n8n workflow that accepts chat input and uploaded receipts, runs OCR, stores parsed results in Supabase, and uses an AI agent to extract structured travel expense data and compute totals. Designed for zero-retention operation and fast integration. Workflow Structure **Frontend:** Chat UI trigger that accepts text and file uploads. **Preprocessing:** Binary normalization + per-file OCR request. **Storage:** Store OCR-parsed blocks in the Supabase temp_table. **Core AI:** Travel reimbursement agent that extracts fields, infers missing values, and calculates totals using the Calculator tool. **Output:** Agent responds in the chat with a concise expense summary and breakdowns. Chat Trigger (Frontend) **Trigger node:** When chat message received public: true, allowFileUploads: true, sessionId used to tie uploads to the chat session. Custom CSS + initial messages configured for user experience. Binary Presence Check **Node:** CHECK IF BINARY FILE IS PRESENT OR NOT (IF) Checks whether the incoming payload contains files. If files are present -> route to Split Out -> NORMALIZE binary file -> OCR (ANY OCR API) -> STORE OCR OUTPUT -> Merge. If no files -> route directly to Merge -> Travel reimbursement agent. Binary Normalization **Nodes:** Split Out and NORMALIZE binary file (Code) Split Out extracts binary entries into a data field. NORMALIZE binary file picks the first binary key and rewrites the payload to binary.data for a consistent downstream shape. OCR **Node:** OCR (ANY OCR API) (HTTP Request) Sends multipart/form-data to the OCR endpoint and expects JSONL or JSON with blocks. Body includes mode=single, output_type=jsonl, include_images=false. Store OCR Output **Node:** STORE OCR OUTPUT (Supabase) Upserts into temp_table with session_id, parsed blocks, and file_name. Used by the agent to fetch previously uploaded receipts for the same session.
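The NORMALIZE binary file step described above can be sketched as an n8n Code node. This is a minimal illustration of the "first binary key rewritten to binary.data" behavior; verify field names against your own payloads:

```javascript
// n8n Code-node sketch: pick the first binary key on each incoming item
// and rewrite it to binary.data so every downstream node (OCR, storage)
// sees one consistent shape.
function normalizeBinary(items) {
  return items.map(item => {
    const keys = Object.keys(item.binary ?? {});
    if (keys.length === 0) return item; // no file attached, pass through
    const first = item.binary[keys[0]];
    return { json: item.json, binary: { data: first } };
  });
}
// Inside a real Code node this would end with:
// return normalizeBinary($input.all());
```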
Memory & Tooling **Nodes:** Simple Memory and Simple Memory1 (memoryBufferWindow) Keep the last 10 messages for session context. **Node:** Calculator1 (toolCalculator) Used by the agent to sum multiple charges and handle currency arithmetic and totals. Travel Reimbursement Agent (Core) **Node:** Travel reimbursement agent (LangChain agent) Model: Mistral Cloud Chat Model (mistral-medium-latest) Behavior: Parse OCR blocks and non-file chat input. Extract required fields: vendor_name, category, invoice_date, checkin_date, checkout_date, time, currency, total_amount, notes, estimated. When fields are missing, infer logically and mark estimated: true. Use the Calculator tool to sum totals across multiple receipts. Fetch stored OCR entries from Supabase when the user asks for session summaries. Always attempt extraction; never reply with "unclear" or ask for a reupload unless the user requests audit-grade precision. Final output: Clean expense table and Grand Total formatted for chat. Data Flow Summary User sends a chat message, with or without files. IF file present -> Split Out -> Normalize -> OCR -> Store OCR output -> Merge with chat payload. Travel reimbursement agent consumes the merged item, extracts fields, uses the Calculator tool for sums, and replies with a formatted expense summary. Integrations Used

| Service | Purpose | Credential |
|---------|---------|-----------|
| Mistral Cloud | LLM for agent | Mistral account |
| Supabase | Store parsed OCR blocks and session data | Supabase account |
| OCR API | Text extraction from images/PDFs | Configurable HTTP endpoint |
| n8n Core | Flow control, parsing, editing | Native |

Agent System Prompt Summary > You are a Travel Expense Extraction and Calculation AI. Extract vendor, dates, currency, category, and total amounts from uploaded receipts, invoices, hotel bills, PDFs, and images. Infer values when necessary and mark them as estimated. When asked, fetch session entries from Supabase and compute totals using the Calculator tool.
Respond in a concise, business-professional format with a category-wise breakdown and a Grand Total. Never reply "unclear" or ask for a reupload unless explicitly asked. The required final response format is a clean expense table with a Grand Total. Key Features Zero-retention-friendly design: OCR output is stored only in temp_table per session. Robust extraction with inference when OCR quality is imperfect. Session-aware: the agent retrieves stored receipts for consolidated totals. Calculator integration for accurate numeric sums and currency handling. Configurable OCR endpoint so you can swap providers without changing logic. Setup Checklist Add Mistral Cloud and Supabase credentials. Configure the OCR endpoint to accept multipart uploads and return blocks. Create the temp_table schema with session_id, file, file_name. Test with single receipts, multipage PDFs, and mixed uploads. Validate agent responses and Calculator totals. Summary A practical n8n workflow for travel expense automation: accept receipts, run OCR, store parsed data per session, extract structured fields via an AI agent, compute totals, and return clean expense summaries in chat. Built for reliability and easy integration. Need Help or More Workflows? We can integrate this into your environment, tune the agent prompt, or adapt it for different OCR providers. We can help you set it up for free — from connecting credentials to deploying it live. Contact: shilpa.raju@digitalbiz.tech Website: https://www.digitalbiz.tech LinkedIn: https://www.linkedin.com/company/digital-biz-tech/ You can also DM us on LinkedIn for any help.
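The category-wise breakdown and Grand Total that the agent asks the Calculator tool to produce amounts to something like the following sketch. The field names mirror the extraction schema above; rounding behavior is an assumption:

```javascript
// Sum extracted expenses per category and compute a Grand Total,
// mirroring what the agent delegates to the Calculator tool.
function summarize(expenses) {
  const byCategory = {};
  let grandTotal = 0;
  for (const e of expenses) {
    byCategory[e.category] = (byCategory[e.category] ?? 0) + e.total_amount;
    grandTotal += e.total_amount;
  }
  // round to 2 decimals to avoid floating-point noise in chat output
  return { byCategory, grandTotal: Math.round(grandTotal * 100) / 100 };
}
```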
by Trung Tran
Try It Out, HireMind – AI-Driven Resume Intelligence Pipeline! This n8n template demonstrates how to automate resume screening and evaluation using AI to improve candidate processing and reduce manual HR effort. A smart and reliable resume screening pipeline for modern HR teams. This workflow combines Google Drive (JD & CV storage), OpenAI (GPT-4-based evaluation), Google Sheets (position mapping + result log), and Slack/SendGrid integrations for real-time communication. Automatically extract, evaluate, and track candidate applications with clarity and consistency. How it works A candidate submits their application using a form that includes name, email, CV (PDF), and a selected job role. The CV is uploaded to Google Drive for record-keeping and later reference. The Profile Analyzer Agent reads the uploaded resume, extracts structured candidate information, and transforms it into a standardized JSON format using GPT-4 and a custom output parser. The corresponding job description PDF is automatically retrieved from a Google Sheet based on the selected job role. The HR Expert Agent evaluates the candidate profile against the job description using another GPT-4 model, generating a structured assessment that includes strengths, gaps, and an overall recommendation. The evaluation result is parsed and formatted for output. The evaluation score marks the candidate as qualified or unqualified; based on the result, an email is sent to the applicant or a message is sent to the hiring team for the next step. The final evaluation result is stored in a Google Sheet for long-term tracking and reporting. Google Drive structure

├── jd                      # Google Drive folder storing your JDs (PDF)
│   ├── Backend_Engineer.pdf
│   ├── Azure_DevOps_Lead.pdf
│   └── ...
├── cv                      # Google Drive folder where the workflow uploads candidate resumes
│   ├── John_Doe_DevOps.pdf
│   ├── Jane_Smith_FullStack.pdf
│   └── ...
├── Positions (sample: https://docs.google.com/spreadsheets/d/1pW0muHp1NXwh2GiRvGVwGGRYCkcMR7z8NyS9wvSPYjs/edit?usp=sharing) # 📋 Mapping table: Job Role ↔ Job Description (link)
│   └── Columns:
│       - Job Role
│       - Job Description File URL (PDF in jd/)
└── Evaluation form (Google Sheet)  # ✅ Final AI evaluation results

How to use Set up credentials and integrations: Connect your OpenAI account (GPT-4 API). Enable Google Cloud APIs: Google Sheets API (for reading job roles and saving evaluation results) Google Drive API (for storing CVs and job descriptions) Set up SendGrid (to send email responses to candidates) Connect Slack (to send messages to the hiring team) Prepare your Google Drive structure: Create a root folder, then inside it create: /jd → Store all job descriptions in PDF format /cv → This is where candidate CVs will be uploaded automatically Create a Google Sheet named Positions with the following structure:

| Job Role | Job Description Link |
|------------------------------|----------------------------------------|
| Azure DevOps Engineer | https://drive.google.com/xxx/jd1.pdf |
| Full-Stack Developer (.NET) | https://drive.google.com/xxx/jd2.pdf |

Update your application form: Use the built-in form, or connect your own (e.g., Typeform, Tally, Webflow, etc.)
Ensure the Job Role dropdown matches exactly the roles in the Positions sheet Run the AI workflow: When a candidate submits the form: Their CV is uploaded to the /cv folder The job role is used to match the JD from /jd The Profile Analyzer Agent extracts candidate info from the CV The HR Expert Agent evaluates the candidate against the matched JD using GPT-4 Distribute and store results: Store the evaluation results in the Evaluation form Google Sheet Optionally notify your team: ✉️ Send an email to the candidate using SendGrid 💬 Send a Slack message to the hiring team with a summary and next steps Requirements OpenAI GPT-4 account for both Profile Analyzer and HR Expert Agents Google Drive account (for storing CVs and evaluation sheet) Google Sheets API credentials (for JD source and evaluation results) Need Help? Join the n8n Discord or ask in the n8n Forum! Happy Hiring! 🚀
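The qualified/unqualified branching can be sketched as a simple threshold check. The 70-point cutoff and the field names here are assumptions for illustration; the template leaves the actual scoring rubric to the HR Expert Agent prompt:

```javascript
// Decide the notification branch from the HR Expert Agent's evaluation.
// Threshold and field names are illustrative, not part of the template.
const QUALIFIED_THRESHOLD = 70;

function routeCandidate(evaluation) {
  const qualified = evaluation.score >= QUALIFIED_THRESHOLD;
  return {
    qualified,
    // qualified -> Slack message to the hiring team with next steps;
    // otherwise -> SendGrid email to the applicant
    channel: qualified ? 'slack' : 'sendgrid',
  };
}
```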
by Davide
This workflow creates an AI-powered chatbot that generates custom songs through an interactive conversation, then uploads the results to Google Drive. This workflow transforms n8n into a complete AI music production pipeline by combining: Conversational AI Structured data validation Tool orchestration External music generation API Cloud automation It demonstrates a powerful hybrid architecture: LLM Agent + Tools + API + Storage + Async Control Flow Key Advantages 1. ✅ Fully Automated AI Music Production From idea → to lyrics → to full generated track → to cloud storage All handled automatically. 2. ✅ Conversational UX Users don’t need technical knowledge. The AI collects missing information step-by-step. 3. ✅ Smart Tool Selection The agent dynamically chooses: Songwriter tool (for original lyrics) Search tool (for existing lyrics) This makes the system adaptive and intelligent. 4. ✅ Structured & Error-Safe Design Strict JSON schema enforcement Output parsing and validation Cleanup of malformed LLM responses Reduces failure rate dramatically. 5. ✅ Asynchronous API Handling Uses webhook-based resume Handles long-running AI generation Supports multiple song outputs Scalable and production-ready. 6. ✅ Modular & Extensible The architecture allows: Switching LLM provider Changing music API Adding new tools (e.g., cover art generation) Supporting different vocal styles or languages 7. ✅ Memory-Enabled Conversations Uses buffer memory (last 10 messages) Maintains conversational context and continuity. 8. ✅ Automatic File Management Generated songs are: Automatically downloaded Properly renamed Stored in Google Drive No manual file handling required. How it Works Here's the flow: User Interaction: The workflow starts with a chat trigger that receives user messages. A "Music Producer Agent" powered by Google Gemini engages with the user conversationally to gather all necessary song parameters. 
Data Collection: The agent collects four essential pieces of information: Song title Musical style (genre) Lyrics (prompt) - either generated by calling the "Songwriter" tool or searched online via the "Search songs" tool Negative tags (styles/elements to avoid) Validation & Formatting: The collected data passes through an IF condition checking for valid JSON format, then a Code node parses and cleans the JSON output. A "Fix Json Structure" node ensures proper formatting with strict rules (no line breaks, no double quotes). Song Generation: The formatted data is sent to the Kie.ai API (HTTP Request node) which generates the actual music track. The workflow includes a callback URL for asynchronous processing. Wait & Retrieve: A Wait node pauses execution until the Kie.ai API sends a webhook callback with the generated songs. The "Get songs" node then retrieves the song data. Process Results: The response is split out, and a Loop Over Items node processes each generated song individually. For each song, the workflow: Downloads the audio file via HTTP request Uploads it to a specified Google Drive folder with a timestamped filename Setup steps API Credentials (3 required): Google Gemini (PaLM) API: Configure in the two Gemini Chat Model nodes Gemini Search API: Set up in the "Search songs" tool node Kie AI Bearer Token: Add in the HTTP Request nodes (Create song and Get songs) Google Drive Configuration: Authenticate Google Drive OAuth2 in the "Upload song" node Verify/modify the folder ID if needed Ensure the Drive has proper write permissions Webhook Setup: The Wait node has a webhook ID that needs to be publicly accessible Configure this URL in your Kie.ai API settings as the callback endpoint Optional Customizations: Adjust the AI agent prompts in the "Music Producer Agent" and "Songwriter" nodes Modify song generation parameters in the Kie.ai API call (styleWeight, weirdnessConstraint, etc.) 
Update the Google Drive folder path for song storage Change the vocal gender or other music generation settings in the "Create song" node Testing: Activate the workflow and start a chat session to test song generation with sample requests like "Write a pop song about summer" or "Find lyrics for 'Bohemian Rhapsody' and make it in rock style" 👉 Subscribe to my new YouTube channel. Here I’ll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on Linkedin.
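The "Fix Json Structure" rules described in the validation step (strip code fences, enforce no line breaks and no double quotes) could look like this sketch of the Code node; the exact field handling in the actual workflow may differ:

```javascript
// Clean a raw LLM reply into the strict JSON the Kie.ai call expects:
// strip markdown code fences, parse, then remove line breaks and double
// quotes from string fields, per the template's formatting rules.
function fixJsonStructure(raw) {
  const stripped = raw.replace(/`{3}(?:json)?/g, '').trim();
  const obj = JSON.parse(stripped);
  for (const [k, v] of Object.entries(obj)) {
    if (typeof v === 'string') {
      obj[k] = v.replace(/[\r\n]+/g, ' ').replace(/"/g, '').trim();
    }
  }
  return obj;
}
```

Doing this cleanup before the HTTP Request node is what keeps malformed LLM output from failing the song-generation call.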
by Servify
Takes a product image from Google Sheets, adds a frozen effect with Gemini, generates an ASMR video with Veo3, writes captions with GPT-4o, and posts to 4 platforms automatically. How it works A schedule trigger picks the first unprocessed row from the Google Sheet Downloads the product image and sends it to Gemini for a frozen/ice effect Uploads the frozen image to ImgBB (Veo3 needs a public URL) Veo3 generates a 10-12s ASMR video with ice-cracking sounds GPT-4o writes platform-specific titles and captions Uploads simultaneously to YouTube, TikTok, Instagram, and Pinterest Updates the sheet status and sends a Telegram notification Setup Replace these placeholders in the workflow: YOUR_GOOGLE_AI_API_KEY (Gemini) YOUR_KIE_AI_API_KEY (Veo3) YOUR_IMGBB_API_KEY (free) YOUR_UPLOAD_POST_API_KEY YOUR_GOOGLE_SHEET_ID YOUR_PINTEREST_BOARD_ID YOUR_PINTEREST_USERNAME YOUR_TIKTOK_USERNAME YOUR_INSTAGRAM_USERNAME YOUR_TELEGRAM_CHAT_ID Google Sheet format

| topic | image_url | status |
|-------|-----------|--------|
| Dior Sauvage — Dior | https://example.com/img.jpg | |

Leave status empty. The workflow sets it to processing and then uploaded. Requirements Gemini API key - Google AI Studio Kie.ai account - kie.ai ImgBB API key - api.imgbb.com OpenAI API key upload-post.com account with connected TikTok/IG/Pinterest YouTube channel with OAuth
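The "first unprocessed row" selection can be sketched as follows; the column names match the sheet format above, and the status values are those the workflow writes:

```javascript
// Pick the first row with an empty status and mark it as being processed,
// as the schedule-triggered branch does with the Google Sheet.
function nextRow(rows) {
  const row = rows.find(r => !r.status || r.status.trim() === '');
  if (!row) return null; // nothing left to process this run
  return { ...row, status: 'processing' };
}
```

After a successful upload to all platforms, the same row would be updated again with status `uploaded`.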
by Pinecone
Try it out This n8n workflow template lets you chat with your Google Drive documents (.docx, .json, .md, .txt, .pdf) using OpenAI and Pinecone Assistant. It retrieves relevant context from your files in real time so you can get accurate, context-aware answers about your proprietary data—without the need to train your own LLM. What is Pinecone Assistant? Pinecone Assistant allows you to build production-grade chat and agent-based applications quickly. It abstracts the complexities of implementing retrieval-augmented generation (RAG) systems by managing the chunking, embedding, storage, query planning, vector search, model orchestration, and reranking for you. Prerequisites A Pinecone account and API key A GCP project with the Google Drive API enabled and configured Note: When setting up the OAuth consent screen, skip steps 8-10 if running on localhost An OpenAI account and API key Setup Create a Pinecone Assistant in the Pinecone Console here Name your Assistant n8n-assistant and create it in the United States region If you use a different name or region, update the related nodes to reflect these changes No need to configure a Chat model or Assistant instructions Set up your Google Drive OAuth2 API credential in n8n In the File added node -> Credential to connect with, select Create new credential Set the Client ID and Client Secret from the values generated in the prerequisites Set the OAuth Redirect URL from the n8n credential in the Google Cloud Console (instructions) Name this credential Google Drive account so that other nodes reference it Set up the Pinecone API key credential in n8n In the Upload file to assistant node -> PineconeApi section, select Create new credential Paste your Pinecone API key into the API Key field Set up the Pinecone MCP Bearer auth credential in n8n In the Pinecone Assistant node -> Credential for Bearer Auth section, select Create new credential Set the Bearer Token field to the Pinecone API key used in the previous step Set up the OpenAI credential in n8n In
the OpenAI Chat Model node -> Credential to connect with, select Create new credential Set the API Key field to your OpenAI API key Add your files to a Drive folder named n8n-pinecone-demo in the root of your My Drive If you use a different folder name, you'll need to update the Google Drive triggers to reflect that change Activate the workflow or test it with a manual execution to ingest the documents Chat with your docs! Ideas for customizing this workflow Customize the System Message on the AI Agent node to your use case to indicate what kind of knowledge is stored in Pinecone Assistant Change the top_k value of results returned from Assistant by adding "and should set a top_k of 3" to the System Message to help manage token consumption Configure the Context Window Length in the Conversation Memory node Swap out the Conversation Memory node for one that is more persistent Make the chat node publicly available or create your own chat interface that calls the chat webhook URL. Need help? You can find help by asking in the Pinecone Discord community, asking on the Pinecone Forum, or filing an issue on this repo.
by Don Jayamaha Jr
A fully autonomous HTX Spot Market AI Agent (Huobi AI Agent) built using GPT-4o and Telegram. This workflow is the primary interface, orchestrating all internal reasoning, trading logic, and output formatting. ⚙️ Core Features 🧠 LLM-Powered Intelligence: Built on GPT-4o with advanced reasoning ⏱️ Multi-Timeframe Support: 15m, 1h, 4h, and 1d indicator logic 🧩 Self-Contained Multi-Agent Workflow: No external subflows required 🧮 Real-Time HTX Market Data: Live spot price, volume, 24h stats, and order book 📲 Telegram Bot Integration: Interact via chat or schedule 🔄 Autonomous Runs: Support for webhook, schedule, or Telegram triggers 📥 Input Examples

| User Input | Agent Action |
| --------------- | --------------------------------------------- |
| btc | Returns 15m + 1h analysis for BTC |
| eth 4h | Returns 4-hour swing data for ETH |
| bnbusdt today | Full-day snapshot with technicals + 24h stats |

🖥️ Telegram Output Sample 📊 BTC/USDT Market Summary 💰 Price: $62,400 📉 24h Stats: High $63,020 | Low $60,780 | Volume: 89,000 BTC 📈 1h Indicators: • RSI: 68.1 → Overbought • MACD: Bearish crossover • BB: Tight squeeze forming • ADX: 26.5 → Strengthening trend 📉 Support: $60,200 📈 Resistance: $63,800 🛠️ Setup Instructions Create your Telegram bot using @BotFather Add the bot token to your n8n Telegram credentials Add your GPT-4o or OpenAI-compatible key under HTTP credentials in n8n (Optional) Add your HTX API credentials if expanding to authenticated endpoints Deploy this main workflow using: ✅ Webhook (HTTP Request Trigger) ✅ Telegram messages ✅ Cron / Scheduled automation 🎥 Live Demo 🧠 Internal Architecture

| Component | Role |
| ------------------ | -------------------------------------------------------- |
| 🔄 Telegram Trigger | Entry point for external or manual signal |
| 🧠 GPT-4o | Symbol + timeframe extraction + strategy generation |
| 📊 Data Collector | Internal tools fetch price, indicators, order book, etc. |
| 🧮 Reasoning Layer | Merges everything into a trading signal summary |
| 💬 Telegram Output | Sends formatted HTML report via Telegram |

📌 Use Case Examples

| Scenario | Outcome |
| -------------------------------------- | ------------------------------------------------------- |
| Auto-run every 4 hours | Sends a new HTX signal summary to Telegram |
| Human requests “eth 1h” | Bot replies with a real-time 1h chart-based summary |
| System-wide trigger from another agent | Invokes the webhook and returns a response to the parent workflow |

🧾 Licensing & Attribution © 2025 Treasurium Capital Limited Company Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted. 🔗 For support: Don Jayamaha – LinkedIn
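The symbol + timeframe extraction step (inputs like "btc", "eth 4h", "bnbusdt today") can be sketched as below. The USDT-pair default and the "today" mapping are assumptions inferred from the input examples above; the real workflow delegates this to GPT-4o:

```javascript
// Parse a Telegram message into a symbol + timeframe request.
// Bare symbols like "btc" default to the 15m + 1h combo shown above.
const TIMEFRAMES = ['15m', '1h', '4h', '1d'];

function parseRequest(text) {
  const [sym, tf] = text.trim().toLowerCase().split(/\s+/);
  const symbol = sym.endsWith('usdt') ? sym : `${sym}usdt`;
  if (tf === 'today') return { symbol, timeframes: ['1d'], stats24h: true };
  if (TIMEFRAMES.includes(tf)) return { symbol, timeframes: [tf], stats24h: false };
  return { symbol, timeframes: ['15m', '1h'], stats24h: false };
}
```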