by Sulieman Said
How to use the provided n8n workflow (step by step), what matters, what it's good for, and what each run costs.

## What this workflow does (in simple terms)

1) You write (or speak) your idea in Telegram.
2) The workflow builds two short prompts:
   - **Image prompt** → generates one thumbnail via KIE.ai – Nano Banana (Gemini 2.5 Flash Image).
   - **Video prompt** → starts a Veo‑3 (KIE.ai) video job using the thumbnail as init image.
3) You receive the thumbnail first, then the short video back in Telegram once rendering completes.

Typical output: 1 PNG thumbnail + 1 short MP4 video (e.g., 8–12 s, 9:16).

## Why this is useful

- **Rapid ideation**: Turn a quick text/voice idea into a ready‑to‑post thumbnail + matching short video.
- **Consistent look**: The video uses the thumbnail as **init image**, keeping colors, objects and mood consistent.
- **One chat = full pipeline**: Everything happens directly inside Telegram—no context switches.
- **Agency‑ready**: Collect ideas from client/team chats and deliver outputs quickly.

## What you need before importing

1) **KIE.ai account & API key**
   - Sign up/in at KIE.ai, go to Dashboard → API / Keys.
   - Copy your KIE_API_KEY (keep it private).
2) **Telegram Bot (BotFather)**
   - In Telegram, open @BotFather → command /newbot.
   - Choose a name and a unique username (must end with `bot`).
   - Copy your Bot Token (keep it private).
3) **Your Telegram Chat ID (browser method)**
   - Send any message to your bot so you have an active chat.
   - Open Telegram Web and the chat with the bot.
   - Find the chat ID in the URL.

## Import & minimal configuration (n8n)

1) Import the provided workflow JSON in n8n.
2) Create credentials:
   - **Telegram API**: paste your Bot Token.
   - **HTTP (KIE.ai)**: usually you'll pass `Authorization: Bearer {{ $env.KIE_API_KEY }}` directly in the HTTP Request node headers, or create a generic HTTP credential that injects the header.
3) Replace hardcoded values in the template:
   - **Chat ID**: use an expression like `{{$json.message.chat.id}}` from the Telegram Trigger (prefer dynamic values over hardcoded IDs).
   - **Authorization headers**: never in query params—always in headers.
   - **Content‑Type spelling**: `Content-Type` (no typos).

## How to run it (basic flow)

1) Start the workflow (activate the trigger).
2) Send a message to your bot, e.g. "glass hourglass on a black mirror floor, minimal, elegant".
3) The bot replies with the thumbnail (PNG), then the Veo‑3 video (MP4).

If you send a voice message, the flow will download & transcribe it first, then proceed as above.

## Pricing (rule of thumb)

- **Image (Nano Banana via KIE.ai)**: ~$0.02–$0.04 per image (plan‑dependent).
- **Video (Veo‑3 via KIE.ai)**:
  - Fast: $0.40 per 8 seconds ($0.05/s)
  - Quality: $2.00 per 8 seconds ($0.25/s)
- Typical run (1 image + 8 s Fast video) ≈ $0.42–$0.44.

> These are indicative values. Check your KIE.ai dashboard for the latest pricing/quotas.

## Why KIE.ai over the "classic" Google API?

- **Cheaper in practice** for short video clips and image generation in this pipeline.
- **One vendor** for both image & video (same auth, similar responses) = less integration hassle.
- **Quick start**: playground/tasks/status endpoints are n8n‑friendly for polling workflows.

## Security & reliability tips

- **Never hardcode** API keys or Chat IDs into nodes—use **Credentials** or **environment variables**.
- Add IF + error paths after each HTTP node: if status != 200 → send a friendly Telegram message ("Please try again") + log to admin.
- If you use callback URLs for video completion, ensure the URL is publicly reachable (n8n Webhook URL). Otherwise, stick to polling.
- For rate limits, add a Wait node and limit concurrency in the workflow settings.
- Keep aspect ratio & duration consistent across the prompt + API calls to avoid unexpected crops.
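As a minimal sketch of the header tip above (only the `KIE_API_KEY` environment variable name is taken from the setup steps; the endpoint stays whatever the template's HTTP Request nodes already call), the auth headers can be built from the environment instead of being hardcoded:

```javascript
// Minimal sketch: build the auth headers for the KIE.ai HTTP Request nodes
// from an environment variable (KIE_API_KEY as described in the setup above).
const headers = {
  Authorization: `Bearer ${$env.KIE_API_KEY}`,
  'Content-Type': 'application/json',
};
return [{ json: { headers } }];
```

The same expression style (`{{ $env.KIE_API_KEY }}`) can be used directly in the Headers section of the HTTP Request node if you prefer not to add a Code node.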
Keep aspect & duration consistent across prompt + API calls to avoid unexpected crops. Advanced: voice input (optional) The template supports voice via a Switch → Download → Transcribe (Whisper/OpenAI). Ensure your OpenAI credential is set and your n8n instance can fetch the audio file from Telegram. Example prompt patterns (keep it short & generic) Thumbnail prompt**: “Minimal, elegant, surreal [OBJECT], clean composition, 9:16” Video prompt**: “Cinematic [OBJECT]. slow camera move, elegant reflections, minimal & surreal mood, 9:16, 8–12s.” You can later replace the simple prompt builder with a dedicated LLM step or a fixed style guide for your brand. Final notes This template focuses on a solid, reliable pipeline first. You can always refine prompts later. Start with Veo‑3 Fast to keep iteration costs low; switch to Quality for final renders. Consider saving outputs (S3/Drive) and logging prompts/URLs to a sheet for audit & analytics. Questions or custom requests? 📩 suliemansaid.business@gmail.com
by Pramod Kumar Rathoure
# A RAG Chatbot with n8n and Pinecone Vector Database

Retrieval-Augmented Generation (RAG) allows Large Language Models (LLMs) to provide context-aware answers by retrieving information from an external vector database. In this post, we'll walk through a complete n8n workflow that builds a chatbot capable of answering company policy questions using Pinecone Vector Database and OpenAI models.

Our setup has two main parts:

1. **Data Loading to RAG** – documents (company policies) are ingested from Google Drive, processed, embedded, and stored in Pinecone.
2. **Data Retrieval using RAG** – user queries are routed through an AI Agent that uses Pinecone to retrieve relevant information and generate precise answers.

## 1. Data Loading to RAG

This workflow section handles document ingestion. Whenever a new policy file is uploaded to Google Drive, it is automatically processed and indexed in Pinecone.

Nodes involved:

- **Google Drive Trigger** – watches a specific folder in Google Drive. Any new or updated file triggers the workflow.
- **Google Drive (Download)** – fetches the file (e.g., a PDF policy document) from Google Drive for processing.
- **Recursive Character Text Splitter** – splits long documents into smaller chunks (with a defined overlap). This keeps embeddings context-rich and retrieval effective (see the sketch at the end of this post).
- **Default Data Loader** – reads the binary document (PDF in this setup) and extracts the text.
- **OpenAI Embeddings** – generates high-dimensional vector representations of each text chunk using OpenAI's embedding models.
- **Pinecone Vector Store (Insert Mode)** – stores the embeddings in a Pinecone index (n8ntest), under a chosen namespace. This step makes the policy data searchable by semantic similarity.

👉 Example flow: When HR uploads a new Work From Home Policy PDF to Google Drive, it is automatically split, embedded, and indexed in Pinecone.

## 2. Data Retrieval using RAG

Once documents are loaded into Pinecone, the chatbot is ready to handle user queries. This section of the workflow connects the chat interface, AI Agent, and retrieval pipeline.

Nodes involved:

- **When Chat Message Received** – acts as the webhook entry point when a user sends a question to the chatbot.
- **AI Agent** – the core reasoning engine. It is configured with a system message instructing it to only use Pinecone-backed knowledge when answering.
- **Simple Memory** – keeps track of the conversation context, so the bot can handle multi-turn queries.
- **Vector Store QnA Tool** – queries Pinecone for the most relevant chunks related to the user's question. In this workflow, it is configured to fetch company policy documents.
- **Pinecone Vector Store (Query Mode)** – acts as the connection to Pinecone, fetching embeddings that best match the query.
- **OpenAI Chat Model** – refines the retrieved chunks into a natural and concise answer. The model ensures answers remain grounded in the source material.
- **Calculator Tool** – optional helper if the query involves numerical reasoning (e.g., leave calculations or benefit amounts).

👉 Example flow: A user asks "How many work-from-home days are allowed per month?". The AI Agent queries Pinecone through the Vector Store QnA tool, retrieves the relevant section of the HR policy, and returns a concise answer grounded in the actual document.

## Wrapping Up

By combining n8n automation, Pinecone for vector storage, and OpenAI for embeddings + LLM reasoning, we've created a self-updating RAG chatbot.

- The **Data Loading pipeline** ensures that every new company policy document uploaded to Google Drive is immediately available for semantic search.
- The **Data Retrieval pipeline** allows employees to ask natural-language questions and get document-backed answers.

This setup can easily be adapted for other domains — compliance manuals, tax regulations, legal contracts, or even product documentation.
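To make the ingestion step above concrete, here is a deliberately simplified sketch of what "splitting with overlap" produces. It is not the actual Recursive Character Text Splitter implementation, and the chunk size and overlap values are illustrative assumptions:

```javascript
// Simplified illustration of chunking with overlap (NOT the real splitter node).
// Chunk size 1000 and overlap 200 are example values; tune them in the node settings.
function splitWithOverlap(text, size = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}
// Each chunk is embedded separately, so neighbouring chunks share some context.
```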
by Kornel Dubieniecki
# AI LinkedIn Content Assistant using Bright Data and NocoDB

## Who's it for

This template is designed for creators, founders, and automation builders who publish regularly on LinkedIn and want to analyze their content performance using real data. It's especially useful for users who are already comfortable with n8n and want to build data-grounded AI assistants instead of relying on generic prompts or manual spreadsheets.

## What this workflow does

This workflow builds an AI-powered LinkedIn content assistant backed by real engagement data. It automatically:

- Scrapes LinkedIn posts and engagement metrics using Bright Data
- Stores structured post data in NocoDB
- Enables an AI chat interface in n8n to query and analyze your content
- Returns insights based on historical performance (not hallucinated data)

You can ask questions like:

- "Which posts performed best last month?"
- "What content got the most engagement?"
- "What should I post next?"

## Requirements

- Self-hosted or cloud n8n instance
- Bright Data – LinkedIn scraping & data extraction
- NocoDB – open-source, Airtable-style database
- OpenAI API – for AI reasoning & insights

## Setup

1. Import the workflow into your n8n instance
2. Open the Config node and fill in the required variables
3. Connect your credentials for Bright Data, NocoDB, and the OpenAI API
4. Activate the workflow and run the scraper once to populate data

## How to customize the workflow

You can extend this template by:

- Adding new metrics or post fields in NocoDB
- Scheduling regular data refreshes
- Changing the AI system prompt to match your content strategy
- Connecting additional channels (email, Slack, dashboards)

This template is fully modular and designed to be adapted to your workflow.

## Questions or Need Help?

For setup help, customization, or advanced AI workflows, join my 🌟 FREE 🌟 community: Tech Builders Club

Happy building! 🚀
- Kornel Dubieniecki
by zahir khan
Screen resumes & save candidate scores to Notion with OpenAI This template helps you automate the initial screening of job candidates by analyzing resumes against your specific job descriptions using AI. 📺 How It Works The workflow automatically monitors a Notion database for new job applications. When a new candidate is added: It checks if the candidate has already been processed to avoid duplicates. It downloads the resume file (supporting both PDF and DOCX formats). It extracts the raw text and sends it to OpenAI along with the specific job description and requirements. The AI acts as a "Senior Technical Recruiter," scoring the candidate on skills, experience, and stability. Finally, it updates the Notion entry with a fit score (0-100), a one-line summary, detected skills, and a detailed analysis. 📄 Notion Database Structure You will need two databases in Notion: Jobs (containing descriptions/requirements) and Candidates (containing resume files). Candidates DB Fields:** AI Comments (Text), Resume Score (Text), Top Skills Detected (Text), Feedback (Select), One Line Summary (Text), Resume File (Files & Media). Jobs DB Fields:** Job Description (Text), Requirements (Text). 👤 Who’s it for This workflow is for recruiters, HR managers, founders, and hiring teams who want to reduce the time spent on manual resume screening. Whether you are handling high-volume applications or looking for specific niche skills, this tool ensures every resume gets a consistent, unbiased first-pass review. 🔧 How to set up Create the required databases in Notion (as described above). Import the .json workflow into your n8n instance. Set up credentials for Notion and OpenAI. Link those credentials in the workflow nodes. Update Database IDs: Open the "Fetch Job Description" and "On New Candidate" nodes and select your specific Notion databases. Run a test with a sample candidate and validate the output in Notion. 📋 Requirements An n8n instance (Cloud or Self-hosted) A Notion account OpenAI API Key (GPT-4o or GPT-4 Turbo recommended for best reasoning) 🧩 How to customize the workflow The system is fully modular. You can: Adjust the Persona:** In the Analyze Candidate agent nodes, edit the system prompt to change the "Recruiter" persona (e.g., make it stricter or focus on soft skills). Change Scoring:** Modify the scoring matrix in the prompt to weight "Education" or "Experience" differently. Filter Logic:** Add a node to automatically disqualify candidates below a certain score (e.g., < 50) and move them to a "Rejected" status in Notion. Multi-language:** Update the prompt to translate summaries into your local language if the resume is in English.
by Madame AI
# Real-Time MAP Enforcement & Price Violation Alerts using BrowserAct & Slack

This n8n template automates MAP (Minimum Advertised Price) enforcement by monitoring reseller websites and alerting you instantly to price violations and stock issues.

This workflow is essential for brand owners, manufacturers, and compliance teams who need to proactively monitor their distribution channels and enforce pricing policies.

## How it works

- The workflow runs on a Schedule Trigger (e.g., hourly) to continuously monitor product prices.
- A Google Sheets node fetches your list of resellers, product URLs, and the official MAP price (AP_Price).
- The Loop Over Items node ensures that each reseller's product is checked individually.
- A pair of BrowserAct nodes navigate to the reseller's product page and reliably scrape the current live price.
- A series of If nodes check for violations:
  - The first check (If1) looks for "NoData," signaling that the product is out of stock, and sends a specific Slack alert.
  - The second check (If) compares the scraped price to your MAP price, triggering a detailed Slack alert if a MAP violation is found (see the sketch at the end of this description).
- The workflow loops back to check the next reseller on the list.

## Requirements

- **BrowserAct** API account for web scraping
- **BrowserAct** "MAP (Minimum Advertised Price) Violation Alerts" template
- **BrowserAct** n8n community node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for your price list
- **Slack** credentials for sending alerts

## Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

## Workflow Guidance and Showcase

I Built a Bot to Catch MAP Violators (n8n + BrowserAct Workflow)
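As a rough sketch of the two If checks described above, expressed as a single Code node (the field names `AP_Price` and `price` are assumptions standing in for your Google Sheets column and the BrowserAct output field):

```javascript
// Sketch of the violation checks performed by the If nodes.
// "AP_Price" comes from the Google Sheets row; "price" stands in for the scraped value.
const mapPrice = Number($json.AP_Price);
const scraped = $json.price;

if (scraped === 'NoData') {
  // The product page returned no price, so treat it as out of stock.
  return [{ json: { status: 'out_of_stock', reseller: $json.reseller } }];
}
if (Number(scraped) < mapPrice) {
  // The advertised price is below MAP, so raise a violation alert.
  return [{ json: { status: 'map_violation', scraped: Number(scraped), mapPrice } }];
}
return [{ json: { status: 'compliant' } }];
```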
by Madame AI
# Automated E-commerce Store Monitoring for New Products Using BrowserAct

This n8n template is an advanced competitive-intelligence tool that automatically monitors competitor e-commerce/Shopify stores and alerts you the moment they launch a new product.

This workflow is essential for e-commerce store owners, product strategists, and marketing teams who need real-time insight into what their competitors are selling.

## Self-Hosted Only

This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

## How it works

- The workflow runs on a Schedule Trigger to check for new products automatically (e.g., daily).
- A Google Sheets node fetches your master list of competitor store links from a central sheet.
- The workflow loops through each competitor one by one.
- For each competitor, a Google Sheets node first creates a dedicated tracking sheet (if one doesn't exist) to store their product list history.
- A BrowserAct node then scrapes the competitor's current product list from their live website.
- The scraped data is saved to the competitor's dedicated tracking sheet.
- The workflow then fetches the newly scraped list and the previously stored list of products.
- A custom Code node (labeled "Compare Datas") performs a difference check to reliably detect whether any new products have been added (see the sketch at the end of this description).
- If a new product is detected, an If node triggers an immediate Slack alert to your team, providing real-time competitive insight.

## Requirements

- **BrowserAct** API account for web scraping
- **BrowserAct** "Competitors Shopify Website New Product Monitor" template
- **BrowserAct** n8n community node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for storing and managing data
- **Slack** credentials for sending alerts

## Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

## Workflow Guidance and Showcase

Automatically Track Competitor Products | n8n & Google Sheets Template
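The "Compare Datas" node is not reproduced verbatim here, but its core idea is a set difference between the two product lists. A minimal sketch (the `previous`/`current` input names and the `title` key are assumptions; match them to your sheet columns):

```javascript
// Minimal sketch of a "Compare Datas"-style difference check.
// Assumes two arrays of product objects keyed by a stable field such as the title or URL.
const previous = $json.previous ?? [];   // products stored from the last run
const current = $json.current ?? [];     // products just scraped by BrowserAct

const known = new Set(previous.map(p => p.title));
const newProducts = current.filter(p => !known.has(p.title));

return [{ json: { hasNewProducts: newProducts.length > 0, newProducts } }];
```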
by Paul
# 📜 Detailed n8n Workflow Description

## Main Flow

The workflow operates through a three-step process that handles incoming chat messages with intelligent tool orchestration:

1. **Message Trigger**: The When chat message received node triggers whenever a user message arrives and passes it directly to the Knowledge Agent for processing.
2. **Agent Orchestration**: The Knowledge Agent serves as the central orchestrator, registering a comprehensive toolkit of capabilities:
   - **LLM Processing**: Uses the Anthropic Chat Model with the claude-sonnet-4-20250514 model to craft final responses
   - **Memory Management**: Implements Postgres Chat Memory to save and recall conversation context across sessions
   - **Reasoning Engine**: Incorporates a Think tool to force internal chain-of-thought processing before taking any action
   - **Semantic Search**: Leverages the General knowledge vector store with OpenAI embeddings (1536-dimensional) and Cohere reranking for intelligent content retrieval
   - **Structured Queries**: Provides the structured data Postgres tool for executing queries on relational database tables
   - **Drive Integration**: Includes the "search about any doc in google drive" functionality to locate specific file IDs
   - **File Processing**: Connects to the Read File From GDrive sub-workflow for fetching and processing various file formats
   - **External Intelligence**: Offers "Message a model in Perplexity" for accessing up-to-the-minute web information when internal knowledge proves insufficient
3. **Response Generation**: After invoking the Think process, the agent intelligently selects appropriate tools based on the query, integrates results from multiple sources, and returns a comprehensive Markdown-formatted answer to the user.

## Persistent Context Management

The workflow maintains conversation continuity through Postgres Chat Memory, which automatically logs every user–agent exchange. This ensures long-term context retention without requiring manual intervention, allowing for sophisticated multi-turn conversations that build upon previous interactions.
## Semantic Retrieval Pipeline

The semantic search system operates through a two-stage process:

- **Embedding Generation**: Embeddings OpenAI converts textual content into high-dimensional vector representations
- **Relevance Reranking**: Reranker Cohere reorders search hits to prioritize the most contextually relevant results
- **Knowledge Integration**: Processed results feed into the General knowledge vector store, providing the agent with relevant internal knowledge snippets for enhanced response accuracy

## Google Drive File Processing

The file reading capability handles multiple formats through a structured sub-workflow:

1. **Workflow Initiation**: The agent calls Read File From GDrive with the selected fileId parameter
2. **Sub-workflow Activation**: The When Executed by Another Workflow node activates the dedicated file processing sub-workflow
3. **Operation Validation**: The Operation node confirms the request type is readFile
4. **File Retrieval**: The Download File1 node retrieves the binary file data from Google Drive
5. **Format-Specific Processing**: The FileType node branches processing based on MIME type (see the sketch at the end of this description):
   - **PDF files**: Route through Extract from PDF → Get PDF Response to extract plain text content
   - **CSV files**: Process via Extract from CSV → Get CSV Response to obtain comma-delimited text data
   - **Image files**: Analyze using Analyse Image with GPT-4o-mini to generate visual descriptions
   - **Audio/video files**: Transcribe using Transcribe Audio with Whisper for text transcript generation
6. **Content Integration**: The extracted text content returns to the Knowledge Agent, which seamlessly weaves it into the final response

## External Search Capability

When internal knowledge sources prove insufficient, the workflow can access current public information through Message a model in Perplexity, ensuring responses remain accurate and up to date with the latest available information.

## Design Highlights

The workflow architecture incorporates several key design principles that enhance reliability and reusability:

- **Forced Reasoning**: The mandatory Think step significantly reduces hallucinations and prevents tool misuse by requiring deliberate consideration before action
- **Template Flexibility**: The design is intentionally generic—organizations can replace the [your company] placeholders with their specific company name and integrate their own credentials for immediate deployment
- **Documentation Integration**: Sticky notes throughout the canvas serve as inline documentation for workflow creators and maintainers, providing context without affecting runtime performance

## System Benefits

With this comprehensive architecture, the assistant delivers powerful capabilities including long-term memory retention, semantic knowledge retrieval, multi-format file processing, and contextually rich responses tailored specifically for users at [your company]. The system balances sophisticated AI capabilities with practical business requirements, creating a robust foundation for enterprise-grade conversational AI deployment.
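As a closing illustration of the format-specific branching described above, here is a minimal sketch of routing by MIME type. The route labels are shorthand for the nodes named above, and the exact matching rules in the real FileType node may differ:

```javascript
// Minimal sketch of the FileType branching by MIME type.
// Route labels are shorthand for the processing branches described above.
const mime = $json.mimeType ?? '';

let route = 'unsupported';
if (mime === 'application/pdf') route = 'extract_pdf';                  // Extract from PDF
else if (mime === 'text/csv') route = 'extract_csv';                    // Extract from CSV
else if (mime.startsWith('image/')) route = 'analyse_image';            // GPT-4o-mini description
else if (mime.startsWith('audio/') || mime.startsWith('video/')) route = 'transcribe'; // Whisper

return [{ json: { route, mime } }];
```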
by Gene Ishchuk
## Summary

This n8n workflow addresses the manual and cumbersome process of exporting handwritten notes from Kindle devices, such as the Kindle Scribe. It automates the extraction of the note's PDF download link from an email and subsequently saves the file to your Google Drive.

## The Problem

Kindle devices that support handwritten notes (e.g., Kindle Scribe) allow users to export a notebook as a PDF file. However, there is no centralized repository or automated export function. The current process requires the user to:

1. Manually request an export for each file on the device.
2. Receive an auto-generated email containing a temporary, unique download URL (rather than the attachment itself).

This manual process represents a significant vendor lock-in challenge and a poor user experience.

## How This Workflow Solves It

This template automates the following steps:

1. **Email Ingestion**: Monitors your Gmail account for the specific export email from Amazon.
2. **Link Extraction**: Uses an LLM service (such as DeepSeek, or any other suitable large language model) to accurately parse the email content and extract the unique PDF download URL.
3. **PDF Retrieval & Storage**: Executes a request to the extracted URL to download the PDF file and then uploads it directly to your Google Drive.

## Prerequisites

To implement and run this workflow, you will need:

- **Kindle device**: A Kindle model that supports handwritten notes and PDF export (e.g., Kindle Scribe).
- **Gmail account**: The account configured on your Kindle device to receive the export emails.
- **LLM account**: Access to an LLM API (e.g., DeepSeek, OpenAI, etc.) to perform the necessary text extraction.
- **Google Drive credentials**: Configured n8n credentials for your Google Drive account.

This workflow is designed for easy and quick setup, providing a reliable, automated solution for backing up your valuable handwritten notes.
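To make the link-extraction step a bit more tangible, a prompt along these lines can be passed to the LLM node. Both the wording and the `emailBody` field name are illustrative assumptions; the actual template may phrase this differently:

```javascript
// Sketch of how the extraction prompt for the LLM node could be assembled.
// The "emailBody" field name and the prompt wording are illustrative only.
const prompt = `Below is an export notification email from Amazon for a Kindle notebook.
Return ONLY the temporary PDF download URL, with no extra text or explanation.

${$json.emailBody}`;

return [{ json: { prompt } }];
```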
by edisantosa
This n8n workflow ensures data freshness in the RAG system by handling modifications to existing files. It complements the "Document Ingestion" workflow by triggering whenever a file in the monitored Google Drive folder is updated. This "delete-then-re-insert" process ensures the RAG agent always has access to the most current version of your documents.

## Key Features & Workflow

1. **Update Trigger**: The workflow activates using the File Updated trigger for the same Google Drive folder ("DOCUMENTS").
2. **Duplicate Run Prevention**: An If node filters out the immediate "update" events that are triggered by the "Upload Doc" workflow's Word-to-Google-Doc conversion, preventing unnecessary duplicate runs.
3. **Delete Old Entries**: Once a genuine update is detected, the workflow's first action is to find and delete all existing vector chunks associated with that file_id from the Supabase "documents" table.
4. **Smart Versioning**: It then retrieves the old version number from the deleted metadata and uses an OpenAI node (Set Version) to intelligently increment it (e.g., "v1" becomes "v2").
5. **Re-Ingestion Pipeline**: The updated file is then processed through the exact same logic as the "Upload Doc" workflow:
   - It is routed by a Switch node based on its MIME type (PDF, Google Doc, Excel, etc.).
   - Text is extracted, chunked, and embedded.
   - The Enhanced Default Data Loader enriches these new chunks with metadata, including the new, incremented version number.
6. **Insert New Entries**: Finally, the newly processed and versioned chunks are inserted back into the Supabase Vector Store.
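For reference, the version bump in step 4 can also be done deterministically in a small Code node instead of an OpenAI call. Here is a sketch under the assumption that versions follow the "v1", "v2", … pattern described above:

```javascript
// Optional non-LLM alternative to the "Set Version" step.
// Assumes the old version is stored as "v1", "v2", ... in the deleted chunk metadata.
const oldVersion = $json.metadata?.version ?? 'v0';
const number = parseInt(String(oldVersion).replace(/^v/i, ''), 10) || 0;

return [{ json: { version: `v${number + 1}` } }];
```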
by Rohit Dabra
# 🧩 Zoho CRM MCP Server Integration (n8n Workflow)

## 🧠 Overview

This n8n flow integrates Zoho CRM with an MCP (Model Context Protocol) Server and an OpenAI Chat Model, enabling AI-driven automation for CRM lead management. It allows an AI Agent to create, update, delete, and fetch leads in Zoho CRM through natural-language instructions.

## ▶️ Demo Video

Watch the full demo here: 👉 YouTube Demo Video

## ⚙️ Core Components

| Component | Purpose |
| --- | --- |
| MCP Server Trigger | Acts as the entry point for requests sent to the MCP Server (external systems or chat interfaces). |
| Zoho CRM Nodes | Handle CRUD operations for leads (create, update, delete, get, getAll). |
| AI Agent | Uses the OpenAI Chat Model and Memory to interpret and respond to incoming chat messages. |
| OpenAI Chat Model | Provides the LLM (Large Language Model) intelligence for the AI Agent. |
| Simple Memory | Stores short-term memory context for chat continuity. |
| MCP Client | Bridges communication between the AI Agent and the MCP Server for bi-directional message handling. |

## 🧭 Flow Description

### 1. Left Section (MCP Server + Zoho CRM Integration)

- **Trigger:** MCP Server Trigger — receives API requests or chat events.
- **Zoho CRM actions:**
  - 🟢 Create a lead in Zoho CRM
  - 🔵 Update a lead in Zoho CRM
  - 🟣 Get a lead in Zoho CRM
  - 🟠 Get all leads in Zoho CRM
  - 🔴 Delete a lead in Zoho CRM

Each of these nodes connects to the Zoho CRM credentials and performs the respective operation on Zoho CRM's "Leads" module.

### 2. Right Section (AI Agent + Chat Flow)

- **Trigger:** When chat message received — initiates the flow when a message arrives.
- **AI Agent node** uses:
  - OpenAI Chat Model → for natural language understanding and generation.
  - Simple Memory → to maintain context between interactions.
  - MCP Client → to call MCP actions (which include the Zoho CRM operations).

This creates a conversational interface allowing users to type things like:

> "Add a new lead named John Doe with email john@acme.com"

The AI agent interprets this and routes the request to the proper Zoho CRM action node automatically.

## ⚙️ Step-by-Step Configuration Guide

### 🧩 1. Import the Flow

1. In n8n, go to Workflows → Import.
2. Upload the JSON file of this workflow (or paste the JSON code).
3. Once imported, you'll see the structure as in the image.

### 🔐 2. Configure Zoho CRM Credentials

You must connect the Zoho CRM API to n8n.

1. Go to Credentials → New → Zoho OAuth2 API.
2. Follow Zoho's official n8n documentation.
3. Provide the following:
   - Environment: Production
   - Data Center: e.g., zoho.in or zoho.com, depending on your region
   - Client ID and Client Secret — from the Zoho API Console (https://api-console.zoho.com/)
   - Scope: ZohoCRM.modules.leads.ALL
   - Redirect URL: use the callback URL shown in n8n (copy it before saving the credentials)
4. Click Connect and complete the OAuth consent.

✅ Once authenticated, all Zoho CRM nodes (Create, Update, Delete, etc.) will be ready.

### 🔑 3. Configure OpenAI API Key

1. In n8n, go to Credentials → New → OpenAI API.
2. Enter the API key from https://platform.openai.com/account/api-keys and save the credentials.
3. In the AI Agent node, select this OpenAI credential under Model.

### 🧠 4. Configure the AI Agent

1. Open the AI Agent node.
2. Choose:
   - Chat Model: select your configured OpenAI Chat Model.
   - Memory: select Simple Memory.
   - Tools: add MCP Client as the tool.
3. Configure the AI instructions (System Prompt) — for example:

   You are an AI assistant that helps manage leads in Zoho CRM.
   When the user asks to create, update, or delete a lead, use the appropriate tool.
   Provide confirmations in natural language.

### 🧩 5. Configure MCP Server

**A. MCP Server Trigger**

1. Open the MCP Server Trigger node.
2. Note down the endpoint URL — this acts as the API entry point for external requests.
3. It listens for incoming POST requests from your MCP client or chat interface.

**B. MCP Client Node**

1. In the AI Agent, link the MCP Client node.
2. Configure it to send requests back to your MCP Server endpoint (for two-way communication).

> 🔄 This enables a continuous conversation loop between external clients and the AI-powered CRM automation system.

### 🧪 6. Test the Flow

Once everything is connected:

1. Activate the workflow.
2. From your chat interface or Postman, send a message to the MCP Server endpoint:

   { "message": "Create a new lead named Alice Johnson with email alice@zoho.com" }

3. Observe:
   - The AI Agent interprets the intent.
   - It calls the Zoho CRM Create Lead node.
   - It returns a success message with the lead ID.

## 🧰 Example Use Cases

| User Query | Action Triggered |
| --- | --- |
| "Add John as a lead with phone number 9876543210" | Create lead in Zoho CRM |
| "Update John's company to Acme Inc." | Update lead in Zoho CRM |
| "Show me all leads from last week" | Get all leads |
| "Delete lead John Doe" | Delete lead |

## 🧱 Tech Stack Summary

| Layer | Technology |
| --- | --- |
| Automation Engine | n8n |
| AI Layer | OpenAI GPT Chat Model |
| CRM | Zoho CRM |
| Communication Protocol | MCP (Model Context Protocol) |
| Memory | Simple Memory |
| Trigger | HTTP-based MCP Server |

## ✅ Best Practices

- 🔄 Refresh tokens regularly — Zoho tokens expire; ensure auto-refresh is set up.
- 🧹 Use environment variables for API keys instead of hardcoding them.
- 🧠 Fine-tune system prompts for better AI understanding.
- 📊 Enable logging for request/response tracking.
- 🔐 Restrict MCP Server access with an API key or JWT token.
by DIGITAL BIZ TECH
AI Carousel Caption & Template Editor Workflow Overview This workflow is a caption-only carousel text generator built in n8n. It turns any raw LinkedIn post or text input into 3 short, slide-ready title + subtext captions and renders those captions onto image templates. Output is a single aggregated response with markdown image embeds and download links. Workflow Structure Input:** Chat UI trigger accepts text and optional template selection. Core AI:** Agent cleans input and returns structured JSON with 3 caption pairs. Template Rendering:** Edit Image nodes render title and subtext on chosen templates. Storage:** Rendered images uploaded to S3. Aggregate Output:** Aggregate node builds final markdown response with embeds and download links. Chat Trigger (Frontend) Trigger:** When chat message received UI accepts plain text post. allowFileUploads optional for template images. SessionId preserved for context. AI Agent (Core) Node name:** AI Agent Model:** Mistral Cloud Chat Model (mistral-small-latest) Behavior:** Clean input (remove stray formatting like \n and ** but keep emojis). Produce exactly one JSON object with fields: postclean, title1, subtext1, title2, subtext2, title3, subtext3. Titles must be short (max 5 words). Subtext 1 or 2 short sentences, max 7 words per line if possible. Agent must return valid JSON to be parsed by the Structured Output Parser. Structured Output Parser Node name:** Structured Output Parser Validates agent JSON and prevents downstream errors. If parsing fails, stop and surface error. Normalize Title Nodes Nodes:** normalize title,name 1, normalize title,name 2, normalize title,name 3 (and optional 4) Map parsed output into node fields: title, subtext, safeName (safe filename for exports). Template Images Source:** Google Drive template PNGs (download via Google Drive nodes) or provided upload. Keep templates high resolution and consistent aspect ratio. Edit Image Nodes (Render Captions) Nodes:** Edit Image 1, Edit Image2, Edit Image3, Edit Image3 (or Edit Image3/Edit Image4 as available) MultiStep operations render: Title text (font, size, position) Subtext (font, size, position) This is where caption text is added to the template. Upload to S3 Nodes:** S3 Upload rendered images to bucketname using safeName filenames. Confirm public access or use signed URLs. Get S3 URLs and Aggregate Nodes:** get s3 url image 1, get s3 url image 2, get s3 url image 3, get s3 url image 4 Merge + Aggregate:** Merge1 and Aggregate collect image items. Output Format:** output format builds a single markdown message: Inline image embeds `` Download links per image. Integrations Used | Service | Purpose | Credential | |---------|---------|-----------| | Mistral Cloud | AI agent model | Mistral account | | Google Drive | Template image storage | Google Drive account | | S3 | Store rendered images and serve links | Supabase account | | n8n Core | Flow control, parsing, image editing | Native | Agent System Prompt Summary > You are a data formatter and banner caption creator. Clean the user input (remove stray newlines and markup but keep emojis). Return a single JSON object with postclean, title1/subtext1, title2/subtext2, title3/subtext3. Titles must be short (max 5 words). Subtext should be 1 to 2 short sentences, useful and value adding. Respond only with JSON. Key Features Caption only output: 3 short slide-ready caption pairs. Structured JSON output enforced by a parser for reliability. Renders captions onto image templates using Edit Image nodes. 
## Key Features

- Caption-only output: 3 short, slide-ready caption pairs.
- Structured JSON output enforced by a parser for reliability.
- Renders captions onto image templates using Edit Image nodes.
- Uploads images to S3 and returns markdown embeds plus download links.
- Template editable: swap the Google Drive background templates or upload your own.
- Zero-guess formatting: the agent must produce parseable JSON to avoid downstream failures.

## Summary

A compact n8n workflow that converts raw LinkedIn text into a caption-only carousel with rendered images. It enforces tight caption rules, validates the AI JSON, places captions on templates, uploads images, and returns a single ready-to-post markdown payload.

## Need Help or More Workflows?

We can wire this into your account, replace templates, or customize fonts, positions, and export options. We can help you set it up for free — from connecting credentials to deploying it live.

- Contact: shilpa.raju@digitalbiz.tech
- Website: https://www.digitalbiz.tech
- LinkedIn: https://www.linkedin.com/company/digital-biz-tech/

You can also DM us on LinkedIn for any help.
by OwenLee
## Description

💸💬 Slack Pro is powerful — but the price hurts, especially for growing teams. This workflow is designed as a low-cost alternative that provides some Slack Pro functions (searchable history + AI summaries) while you stay on the free Slack plan (or with minimal paid seats).

## What is the advantage?

- 🧠 **AI Slack assistant on demand** – @mention the bot in any channel to get clear summaries of recent discussions ("yesterday", "last 7 days", "this week", etc.).
- 🗄️ **External message history** – recent messages are routinely saved into Google Drive, so important conversations live outside Slack's 90-day / 10k-message limit.
- 💰 **Cost-efficient setup** – rely on the Slack free plan + a little Google Drive storage + a low-cost AI API, instead of paying for Slack Pro ($8.75 USD per user / month).
- 📚 **Business value** – you keep the benefits you wanted from Slack Pro (memory, context, easy catch-up) while avoiding a big monthly bill.

🧠 Upgrade your Slack for free with AI chat summaries & history archiving

## 👥 Who's it for

- 💰 Teams stuck on Slack Free because Pro is too expensive (e.g., founders, small teams)
- Teams that want longer history and better context, but can't justify per-seat upgrades
- Teams that need "Pro-like" benefits (search, memory, recap) in a budget-friendly way

## ⚙️ How it works

- 📝 Slack stays your main chat tool: people talk in channels the way they already do.
- 🤖 You add a bot powered by this workflow: when someone @mentions it with something like "@SlackHistoryBot summarize this week", it replies with a summary of the recent discussion.
- 📆 On a schedule (e.g., monthly), it backs up channels: it walks through the channels the bot can access and saves recent messages (e.g., the last 30 days) as a CSV file in Google Drive (see the sketch at the end of this description).

## 🛠️ How to set up

🔑 Connect credentials (once):

- **Slack (Bot / App)** (a separate tutorial video is recommended for this step)
  - Create and configure a bot.
  - Create a credential.
  - Invite the bot to the channels you want to cover.
- **Google Drive**
  - Connect a Google account for storage.
  - Create a folder like "Slack History (Archived)" in Drive and select it in the workflow.
- **AI provider (e.g., DeepSeek)**
  - Grab any LLM API key.
  - Plug it into the AI node so summaries use that model.

## 🚀 Quick Start

1. Import the JSON workflow.
2. Attach your credentials.
3. Save and activate the workflow.
4. Try a real-world test: in a test channel, have a short conversation, then try "@(your bot name) summarize today".
5. Check that archives appear: manually trigger the "archive" part from your automation tool. You should see files named after your channels and time period in Google Drive.

## 🧰 How to Customize the Workflow

- **Limit where it runs**: only invite the bot to "high value" channels (projects, clients, leadership). This keeps both AI and storage usage under control.
- **Adjust archive frequency** ⏰: monthly is usually enough; weekly only for critical channels. Less frequent archives = fewer operations = lower cost.
- **Customize the summary style (system prompt)** 📃: which language to use (e.g., Chinese by default, or English, or both), how to structure the summary (topics, bullets, separators), and what to focus on (projects, decisions, tasks, risks, etc.).

## 📩 Help & customizing other Slack functions

Contact: owenlzyxg@gmail.com
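To illustrate the archive step, here is a minimal sketch of turning fetched Slack messages into CSV text before the Google Drive upload. The `ts`, `user`, and `text` fields follow Slack's conversations.history response; pagination, threads, and attachments are intentionally left out:

```javascript
// Minimal sketch: convert Slack messages (from conversations.history) into CSV rows.
// Pagination, threads and attachments are ignored here for brevity.
const messages = $input.all().map(item => item.json);

const escape = (value) => `"${String(value ?? '').replace(/"/g, '""')}"`;
const header = 'ts,user,text';
const rows = messages.map(m => [m.ts, m.user ?? '', escape(m.text)].join(','));

return [{ json: { fileName: 'slack-archive.csv', csv: [header, ...rows].join('\n') } }];
```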