by Cooper
Turn Crisp Chats into Helpdocs

Automatically create help articles from resolved Crisp chats. This n8n workflow listens for chat events, formats Q&A pairs, and uses an LLM to generate a PII-safe helpdoc saved to a Data Table.

**Highlights**
- 🧩 **Trigger:** Crisp webhook fires when a chat is marked resolved.
- 🗂️ **Store:** Each message is saved in a Data Table (`crisp`).
- 🧠 **Generate:** An LLM turns the Q&A into a draft helpdoc.
- 💾 **Save:** The draft is stored in another Data Table (`crisphelp`) for review.

**How it works**
1. A webhook receives `message:send`, `message:received`, and `state:resolved` events from Crisp.
2. A Data Table stores messages keyed by `session_id`.
3. On `state:resolved`, the workflow fetches the full chat thread.
4. A Code node formats the messages into `Q:` and `A:` pairs.
5. An LLM (OpenAI `gpt-4.1-mini`) creates a redacted helpdoc.
6. The `crisphelp` Data Table saves the generated doc with `publish = false`.

**Requirements**
- Crisp workspace with webhook access (Settings → Advanced → Webhooks)
- n8n instance with Data Tables and OpenAI credentials

**Customize**
- Swap the model in the LLM node.
- Add a Slack or Email node after `store-doc` to alert reviewers.
- Extend the prompt rules to strengthen PII redaction.

**Tips**
- Make sure the Crisp webhook URL is publicly reachable.
- Check the IF condition: `{{$json.body.data.content.namespace}} == "state:resolved"`.
- Use the `publish` flag to control auto-publishing.

Category: AI • Automation • Customer Support
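The Q&A formatting step can be pictured as a small Code-node function. This is an illustrative sketch, not the template's actual code; the `from`/`content` field names follow Crisp's message schema and may need adjusting to your Data Table columns.

```javascript
// Illustrative sketch of the Code node that turns a stored Crisp thread
// into "Q:" / "A:" pairs for the LLM prompt.
// Assumption: each message row has { from: "user" | "operator", content: string }.
function formatThread(messages) {
  return messages
    .filter((m) => typeof m.content === "string" && m.content.trim() !== "")
    .map((m) =>
      m.from === "user" ? `Q: ${m.content.trim()}` : `A: ${m.content.trim()}`
    )
    .join("\n");
}

// Example usage:
const pairs = formatThread([
  { from: "user", content: "How do I reset my password?" },
  { from: "operator", content: "Use the Forgot password link on the login page." },
]);
```

The resulting text block is what gets handed to the LLM node as the source material for the helpdoc.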
by InfyOm Technologies
✅ What problem does this workflow solve?
Call centers often record conversations for quality control and training, but reviewing every transcript manually is tedious and inefficient. This workflow automates sentiment analysis for each call, providing structured feedback across multiple key categories, so managers can focus on improving performance and training.

⚙️ What does this workflow do?
Accepts a Google Sheet containing:
- Call transcript
- Agent name
- Customer name

Analyzes each call transcript across multiple sentiment dimensions:
- 👋 Greeting Sentiment
- 🧑‍💼 Agent Friendliness
- ❓ Problem-Solving Sentiment
- 🙂 Customer Sentiment
- 👋 Closing Sentiment
- ✅ Issue Resolved (Yes/No)
- 🗂 Conversation topics discussed in the call

Calculates an overall call rating from the combined analysis, then updates the Google Sheet with:
- Individual sentiment scores
- Issue resolution status
- Final call rating

🔧 Setup Instructions

📄 Google Sheets
Prepare a sheet with the following columns: Transcript, Agent Name, Customer Name. The workflow will append results in new columns automatically: Greeting Sentiment, Closing Sentiment, Agent Friendliness, Problem Solving, Customer Sentiment, Issue Resolved, Overall Call Rating (out of 5 or 10).

🧠 OpenAI Setup
Connect the OpenAI API to perform NLP-based sentiment classification. For each transcript, structured prompts analyze the individual components.

🧠 How it Works – Step by Step
1. Sheet Scan – The workflow reads rows from the provided Google Sheet.
2. Loop Through Calls – For each transcript, it sends prompts to OpenAI to analyze: greeting tone (friendly/neutral/rude), problem-solving quality (clear/confusing/helpful), closing sentiment, agent attitude, customer satisfaction, and whether the issue was resolved. It then calculates a composite rating from all factors.
3. Update Sheet – All analyzed data is written back into the Google Sheet.

📊 Example Output
https://docs.google.com/spreadsheets/d/1aWU28D_73nvkDMPfTkPkaV53kHgX7cg0W4NwLzGFEGU/edit?gid=0#gid=0

👤 Who can use this?
This workflow is ideal for:
- ☎️ Call Centers
- 🎧 Customer Support Teams
- 🧠 Training & QA Departments
- 🏢 BPOs or Support Vendors

If you want deeper insight into every customer interaction, this workflow delivers quantified, actionable sentiment metrics automatically.

🛠 Customization Ideas
- 📅 Add scheduled runs (daily/weekly) to auto-analyze new calls.
- 📝 Export flagged or low-rated calls into a review dashboard.
- 🧩 Integrate with Slack or email to send alerts for low-score calls.
- 🗂 Filter by agent, category, or score to track performance trends.

🚀 Ready to Use?
Just connect:
- ✅ Google Sheets (with transcript data)
- ✅ OpenAI API

…and this workflow will automatically turn your raw call transcripts into actionable sentiment insights.
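The composite-rating step could look like the sketch below. The template does not publish its exact scoring formula, so the label-to-score map, the equal weighting, and the one-point penalty for unresolved issues are all assumptions for illustration.

```javascript
// Illustrative only: the template's real scoring formula is not published,
// so this label-to-score map and equal weighting are assumptions.
const SCORE = { positive: 5, friendly: 5, helpful: 5, clear: 4, neutral: 3, confusing: 2, rude: 1, negative: 1 };

function overallRating(a) {
  const parts = [a.greeting, a.friendliness, a.problemSolving, a.customer, a.closing]
    .map((label) => SCORE[label] ?? 3); // unknown labels count as neutral
  let rating = parts.reduce((sum, s) => sum + s, 0) / parts.length;
  if (a.issueResolved !== "Yes") rating -= 1; // unresolved issues cost one point
  return Math.min(5, Math.max(1, Math.round(rating * 10) / 10)); // clamp to the 1–5 scale
}
```

A perfect call (all positive labels, issue resolved) scores 5; an all-neutral, unresolved call scores 2.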
by Robert Breen
💬 Chat with Your Trello Board (n8n + OpenAI)

📖 Description
Turn your Trello board into a conversational assistant. This workflow pulls your board → lists → cards, aggregates the context, and lets you ask natural-language questions (“what’s overdue?”, “summarize In Progress”, “what changed this week?”). OpenAI reasons over the live board data and replies with concise answers or summaries. Great for standups, planning, and quick status checks, without opening Trello.

> Setup steps are already embedded in the workflow (Trello API + OpenAI + board URL). Just follow the sticky notes inside the canvas.

🧪 Example prompts
- “Give me a one-paragraph summary of the board.”
- “List all cards due this week with their lists.”
- “What’s blocking items in ‘In Progress’?”
- “Show new cards added in the last 2 days.”

⚙️ Setup Instructions

1️⃣ Connect Trello (Developer API)
- Get your API key: https://trello.com/app-key
- Generate a token (from the same page → Token)
- In n8n → Credentials → New → Trello API, paste the API Key and Token, and save.
- Open each Trello node (Get Board, Get Lists, Get Cards) and select your Trello credential.

2️⃣ Set Up OpenAI Connection
- Go to the OpenAI Platform
- Navigate to OpenAI Billing
- Add funds to your billing account
- Copy your API key into the OpenAI credentials in n8n

3️⃣ Add Your Board URL to “Get Board”
- Copy your Trello board URL (e.g., https://trello.com/b/DCpuJbnd/administrative-tasks).
- Open the Get Board node → Resource: Board, Operation: Get.
- In ID, choose URL mode and paste the board URL.
- The node resolves the board and outputs its id, which Get Lists / Get Cards use.

📬 Contact
Need help customizing this or adding Slack/Email outputs?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
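The board → lists → cards aggregation can be sketched as a Code-node step like this. It is illustrative only; `idList`, `name`, and `due` are real fields on Trello's card objects, but the template's actual aggregation logic may differ.

```javascript
// Sketch: group Trello cards under their lists and render a text context
// block for the LLM. Assumes the Trello API card fields idList, name, due.
function buildBoardContext(lists, cards) {
  const byList = new Map(lists.map((l) => [l.id, { name: l.name, lines: [] }]));
  for (const c of cards) {
    const bucket = byList.get(c.idList);
    if (!bucket) continue; // skip cards on archived/unknown lists
    bucket.lines.push(`- ${c.name}${c.due ? ` (due ${c.due.slice(0, 10)})` : ""}`);
  }
  return [...byList.values()]
    .map((l) => `${l.name}:\n${l.lines.join("\n") || "(no cards)"}`)
    .join("\n\n");
}
```

The resulting plain-text outline is compact enough to fit into the OpenAI prompt alongside the user's question.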
by Mattis
Stay informed about the latest n8n updates automatically! This workflow monitors the n8n GitHub repository for new pull requests, filters for updates from today, generates an AI-powered summary, and sends notifications to your Telegram channel.

Who's it for
- n8n users who want to stay up to date with platform changes
- Development teams tracking n8n updates
- Anyone managing n8n workflows who needs to know about breaking changes or new features

How it works
1. A daily scheduled check runs at 10 AM for new pull requests.
2. Fetches the latest PRs from the n8n GitHub repository.
3. Filters to only process today's updates.
4. Extracts the pull request summary.
5. AI generates a clear, technical summary in English.
6. Sends a notification to your Telegram channel.
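The "today only" filter step can be sketched as a one-line date comparison. This is an assumption of how such a filter is typically written; GitHub's API returns ISO 8601 `created_at` timestamps, and the sketch compares calendar dates in UTC.

```javascript
// Sketch of the "today only" PR filter, assuming GitHub's ISO 8601
// created_at timestamps and comparing calendar dates in UTC.
function isFromToday(pr, now = new Date()) {
  return (
    new Date(pr.created_at).toISOString().slice(0, 10) ===
    now.toISOString().slice(0, 10)
  );
}
```

Note that a timezone other than UTC may be preferable if your 10 AM schedule runs in local time.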
by Christian Mendieta
🌟 Complete Workflow Overview

The Full Blogging Automation Journey
This n8n workflow transforms a simple topic request into a fully published, SEO-optimized blog post through a seamless 7-phase process. Starting with your topic idea, the system automatically researches, creates, optimizes, edits, and publishes professional content to your Ghost CMS website. Think of it as having an entire content team working 24/7, from initial research to final publication, all orchestrated by AI agents working in perfect harmony. No more writer's block, no more SEO guesswork, just high-quality content that ranks and engages your audience.

📋 Requirements & Setup
What you need to get started:
- **OpenAI API Key** – for GPT models (content generation)
- **Anthropic API Key** – for Claude models as a failover
- **Brave Search API Key** – for comprehensive research
- **Ghost CMS Admin API Access** – for direct publishing
- **Existing Blog Content** – optional, but recommended for better research

🔧 Workflow Architecture & Process
How the AI agents work together: this workflow implements a multi-agent system where specialized AI agents collaborate through structured data exchange. The workflow uses HTTP Request nodes to communicate with the OpenAI and Anthropic APIs, integrates with Brave Search for real-time research, and connects to Ghost CMS via REST API calls. Each agent operates independently but shares data through n8n's workflow context, ensuring seamless information flow from research to publication. The system includes error handling, retry logic, and quality gates at each stage to maintain content standards.
by Rapiwa
Who Is This For?
This n8n workflow listens for order cancellations in Shopify, extracts the relevant customer and order data, checks whether the customer’s phone number is registered on WhatsApp via the Rapiwa API, and sends a personalised apology message with a re-order link. It also logs successful and unsuccessful attempts in Google Sheets for tracking.

What This Workflow Does
- Listens for cancelled orders in your Shopify store
- Extracts customer details and order information
- Generates a personalised apology message including a reorder link
- Sends the message to customers via WhatsApp using a messaging API (e.g., Twilio or Rapiwa)
- Logs the communication results for tracking purposes

Key Features
- **Real-Time Cancellation Detection:** Automatically triggers when an order is cancelled
- **Personalised Messaging:** Includes the customer name, order details, and a direct reorder link
- **WhatsApp Integration:** Sends messages via WhatsApp for higher engagement
- **Error Handling:** Logs successful and failed message deliveries
- **Reorder Link:** Provides a convenient link for customers to reorder with one click

Requirements
- n8n instance with these nodes: Shopify Trigger, HTTP Request (for the WhatsApp API), Code, Google Sheets (optional)
- Shopify store with API access
- WhatsApp messaging provider account with API access
- Valid customer phone numbers stored in Shopify orders

How to Use — Step-by-Step Setup
1. Credentials Setup
   - Shopify API: Configure Shopify API credentials in n8n to listen for order cancellations
   - WhatsApp API: Set up WhatsApp messaging credentials (e.g., Twilio, Rapiwa, or any supported provider)
   - Google Sheets (optional): Configure Google Sheets OAuth2 if you want to log communications
2. Configure Trigger
   - Set the workflow to trigger on Shopify order-cancellation events
3. Customize Message Content
   - Modify the apology message template to include your store branding and tone
   - Ensure the reorder link dynamically includes the customer's cancelled order info
4. Set Up WhatsApp Node
   - Connect your WhatsApp API credentials
   - Ensure the phone numbers are formatted correctly for WhatsApp delivery

Google Sheet Required Columns
You’ll need two Google Sheets (or two tabs in one spreadsheet), e.g. one for successful and one for failed deliveries, each formatted like this ➤ sample.
The workflow uses a Google Sheet with the following columns to track message delivery:

| Name | Number | Email | Address | Price | Title | Re-order Link | Validity | Status |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | Dhaka, Bangladesh | BDT 1955.00 | Pakistani Lawn | Link 🔗 | unverified | not sent |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | Dhaka, Bangladesh | BDT 1955.00 | Pakistani Lawn | Link 🔗 | verified | sent |

Important Notes
- **Phone Number Validation:** Ensure customer phone numbers are WhatsApp-enabled and formatted properly
- **API Rate Limits:** Respect your WhatsApp provider’s API limits to avoid throttling
- **Data Privacy:** Always comply with privacy laws when messaging customers
- **Error Handling:** Monitor logs regularly to handle failed message deliveries
- **Testing:** Test thoroughly with dummy data before activating the workflow live

Useful Links
- **Dashboard:** https://app.rapiwa.com
- **Official Website:** https://rapiwa.com
- **Documentation:** https://docs.rapiwa.com

Support & Help
- **WhatsApp:** Chat on WhatsApp
- **Discord:** SpaGreen Community
- **Facebook Group:** SpaGreen Support
- **Website:** https://spagreen.net
- **Developer Portfolio:** Codecanyon SpaGreen
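Formatting customer phone numbers for WhatsApp delivery (digits only, country code first) is the kind of normalization the Code node can handle. A minimal sketch; the default country code `880` is an assumption taken from the Bangladeshi sample data above and should be set to your own market.

```javascript
// Sketch: normalize a raw Shopify phone field into the digits-only,
// country-code-prefixed form WhatsApp APIs expect.
// Assumption: numbers with a leading 0 are local and get the default code.
function toWhatsAppNumber(raw, defaultCountryCode = "880") {
  const digits = String(raw).replace(/\D/g, ""); // strip +, spaces, dashes
  if (digits.startsWith("0")) return defaultCountryCode + digits.slice(1);
  return digits;
}
```

Running the validated number through Rapiwa's WhatsApp-registration check before sending avoids wasted messages to non-WhatsApp numbers.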
by Arkadiusz
Workflow Description
Turn a simple text idea into production-ready icons in seconds. With this workflow, you input a subject (e.g., “Copy”, “Banana”, “Slack Mute”), select a style (Flat, 3D, Cartoon, etc.), and off it goes. Here’s what happens:
1. A form trigger collects your icon subject, style, and optional background.
2. The workflow uses an LLM to construct an optimised prompt.
3. An image-generation model (OpenAI image API) renders a transparent-background, 400×400 px PNG icon.
4. The icon is automatically uploaded to Google Drive, and both a download link and a thumbnail are generated.
5. A styled completion card displays the result and gives you a “One More Time” option.

Perfect for designers, developers, no-code creators, UI builders and even home-automation geeks (yes, you can integrate it with Home Assistant or Stream Deck!). It saves you the manual icon-hunt grind and gives consistent visual output across style variants.

🔧 Setup Requirements
- n8n instance (self-hosted or cloud)
- OpenAI API access (image generation enabled)
- Google Drive credentials (write access to a folder)
- (Optional) Modify to integrate Slack, Teams or other file-storage destinations

✅ Highlights & Benefits
- Fully automated prompt creation → consistent icon quality
- Transparent-background PNGs, size-ready for UI use
- Saves icons to Drive + gives an immediate link/thumbnail
- Minimal setup, high value for creative/automation workflows
- Easily extendable (add extra sizes, style presets, share via chat/bot)

⚠️ Notes & Best Practices
- Check your OpenAI image quota and costs; image generation may incur usage charges.
- Confirm Google Drive folder permissions to avoid upload failures.
- If you want a different resolution or format (e.g., SVG), clone the image node and adjust its parameters.
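The prompt-construction step might produce something along these lines. This is a hypothetical sketch of the kind of prompt the workflow assembles from the form fields, not the template's actual LLM-generated prompt, which will be richer.

```javascript
// Hypothetical sketch of optimised-prompt construction from the form inputs
// (subject, style, optional background). The real template delegates this
// to an LLM, so treat this as an illustration of the inputs involved.
function buildIconPrompt({ subject, style, background = "transparent" }) {
  return [
    `A single ${style.toLowerCase()}-style icon of "${subject}".`,
    `Centered composition, ${background} background, no text, no border.`,
    `Clean and legible at small UI sizes, 400x400 px.`,
  ].join(" ");
}
```

Keeping the constraints (background, size, no text) in every prompt is what gives the consistent output across style variants.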
by Praneel S
Automate blog updates via Discord with GitHub and a customizable AI chatbot

> ⚠️ Disclaimer: This template uses the n8n-nodes-discord-trigger community node, so it works only in self-hosted n8n instances (whether hosted on a cloud server or on localhost).

Who’s it for
This workflow is designed for developers, bloggers, and technical writers who want a hands-free way to draft and publish blog posts directly from Discord. Instead of juggling multiple tools, you just send a message to your Discord bot, and the workflow creates a properly formatted Markdown file in your GitHub repo.

How it works
1. Listens for new messages in a Discord channel or DM using the Discord Trigger (community node).
2. Passes your message to an AI chatbot model (Google Gemini, OpenAI GPT, or any other connector you prefer) to draft or format the content.
3. Uses GitHub nodes to check existing files, read repo contents, and create new .md posts in the specified directory.
4. Adds the correct timestamp with the Date & Time node.
5. Sends a confirmation reply back to Discord (regular Message node).
6. Guardrails ensure it only creates new Markdown files in the correct folder, without overwriting or editing existing content.

How to set up
1. Import the workflow (or download the file here: BlogAutomationclean.json) into your self-hosted n8n.
2. Install the n8n-nodes-discord-trigger community node inside the n8n workflow dashboard (click the link for the setup steps).
3. Create credentials for:
   - Discord bot trigger (from the community node)
   - Discord bot send message (from the regular Discord Message node)
   - GitHub (personal access token with repo permissions)
   - Your AI provider (Gemini, OpenAI, etc.)
4. Update the GitHub nodes with:
   - Owner → your GitHub username
   - Repo → your blog repo name
   - Path → target directory for new Markdown posts
5. Customize the AI agent’s system prompt to match your tone and workflow. (The default prompt is included below.)
6. Test it in a private Discord channel before going live.
Requirements
- Self-hosted n8n instance (works both on a cloud server and on localhost)
- GitHub repository with write access
- Discord bot credentials (**both are required: the community trigger node and the regular message node**; see the reasoning below)
- AI model credentials (Gemini, OpenAI, or another supported provider)

How to customize the workflow
- Swap the AI model node for any provider you like: Gemini, OpenAI, or even a local LLM.
- Adjust the prompt to enforce your blog style guide.
- Add additional steps like auto-publishing, Slack notifications, or Notion syncs.
- Modify the directory path or file naming rules to fit your project.

Reason for Using the Community Discord Trigger Node and the Regular Discord Message Node
In testing, the community Discord node cannot send big messages (it has a size limit), while the regular Discord Message node can send far beyond that amount, which helps when viewing files. Feel free to use both the trigger and Send Message from the community node if you run into issues; everything will still work flawlessly apart from the message limit.

Default Prompt

Core Identity & Persona
You are the n8n Blog Master, a specialized AI agent. Your primary function is to assist your user with content management.
- **Your Mission:** Automate the process of creating, formatting, editing, and saving blog posts as Markdown files within the user’s specified repository.
- **User Clarification:** The repository owner always refers to your **user** and, in the context of API calls, the **repository owner**. It is never part of a file path.
- **Personality:** Helpful, precise, security-conscious. Semi-casual and engaging, but never overly cheerful.

Operational Zone & Constraints
- **Repository:** You may only interact with the repository **<insert-repo-name-here>**.
- **Owner:** The repository owner is **<insert-username-here>**.
- **Branch:** Always operate on the main branch.
- **Directory Access:** You can **only** write or edit files in the directory **<insert-directory-path-here>**. You are forbidden from interacting elsewhere.
- **File Permissions:** You may create new .md files. If a file already exists, notify the user and ask if they want to edit it. Editing is only allowed if the user explicitly confirms (e.g., “yes”, “go ahead”, “continue”). If the user confirms, proceed with editing.

Available Tools & Usage Protocol
You have a limited but well-defined toolset. Always use the tools exactly as described:

1. Date & Time Tool
- Purpose: Always fetch the current date and time in IST (India Standard Time).
- Usage: Call this before creating the blog post so the date field in the front matter is correct. Do not use any other timezone.

2. GitHub Nodes
- **Create:** Used to create new files within **<insert-directory-path-here>**. Requires three parameters:
  - owner → always <insert-username-here>
  - repo → always <insert-repo-name-here>
  - path → must be <insert-directory-path-here>/<filename>.md
- **List:** Can list files inside **<insert-directory-path-here>**. Use it to check existing filenames before creating new ones.
- **Read:** Can fetch the contents of files if needed.
- **Edit:** Can update a specific file in **<insert-directory-path-here>**.
  - Protocol: Before editing, explicitly ask: “Are you sure you want me to edit <filename>.md?” If the user responds with “yes”, “continue”, or a similar confirmation, proceed. If the user declines, do nothing.
  - Constraint: Never attempt operations outside the specified directory.

3. Data Storage & Message History
- Purpose: Store temporary user confirmations and recall previous user messages as part of memory.
- Example: If you ask for edit confirmation and the user replies “yes” or “continue”, record that in storage. If later in the same conversation the user says “go ahead” without repeating the filename, check both storage and previous messages to infer intent.
- Always reset the confirmation after the action is completed.
Standard Workflow: Creating or Editing Blog Posts
1. Activation: Begin when the user says:
   - “Draft a new post on…”
   - “Make the body about…”
   - “Use my rough notes…”
   - “Modify it to include…”
   - “Edit the file…”
2. Information Gathering:
   - Ask for the Title (mandatory for new posts).
   - Gather the topic, points, or raw notes from the user.
   - If the user provides incomplete notes, expand them into a coherent, well-structured article.
3. Drafting & Formatting:
   - Call the Date & Time tool.
   - Format posts in the following template:

        ---
        title: "The Title Provided by the User"
        date: "YYYY-MM-DD"
        ---

        [Well-structured blog content goes here. Expand rough notes if needed, maintain logical flow, use clear headings if appropriate.]

        Thanks for Reading!

   - Writing rules:
     - Tone: Neutral, informative, lightly conversational — not too cheerful.
     - Flow: Use line breaks for readability.
     - Expansion: If notes are provided, polish and structure them.
     - Modification: If asked, revise while preserving the original meaning.
4. File Naming: Generate a short kebab-case filename from the title (e.g., "Making My Own Java CLI-Based RPG!" → java-cli-rpg.md).
5. File Creation vs Editing:
   - If creating → use the GitHub Create tool.
   - If the file already exists → ask the user if they want to edit it. Store their response in Data Storage.
   - If confirmation = yes → proceed with the GitHub Edit tool. If no → cancel the operation.
6. Final Action: Confirm success to the user after creation or editing.

Advanced Error Handling: "Resource Not Found"
If the create_github_file tool fails with "Resource not found":
- First failure: Notify the user that the attempt failed. State the exact path used. Retry automatically once.
- Second failure: If it fails again, explain that standard creation isn’t working. Suggest it may be a permissions issue. Await user instructions before proceeding further.
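The kebab-case file-naming rule can be approximated mechanically, as in the sketch below. Note that the agent also shortens the slug (the prompt's own example drops "making-my-own" and "based"), which a pure string function will not do.

```javascript
// Mechanical kebab-case slug from a post title. The AI agent additionally
// shortens the result (dropping filler words), which this sketch skips.
function toKebabFilename(title) {
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics
    .replace(/^-+|-+$/g, "");    // trim stray leading/trailing dashes
  return `${slug}.md`;
}
```

Running the List tool against the target directory before creating the file catches slug collisions between similarly titled posts.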
Contact and Changes
Feel free to contribute to this workflow. I do not claim ownership of the underlying tools; everything used here was made by its respective owners. Shout-out to katerlol for making the Discord trigger node. Contact me here if you need any help!
by Matt Chong
Who is this for?
This workflow is for professionals, entrepreneurs, or anyone overwhelmed by a cluttered Gmail inbox. If you want to automatically archive low-priority emails using AI, this is the perfect hands-free solution.

What does it solve?
Your inbox fills up with old, read emails that no longer need your attention, but manually archiving them takes time. This workflow uses AI to scan each email and intelligently decide whether it should be archived, needs a reply, or is spam. It helps you:
- Declutter your Gmail inbox automatically
- Identify important vs. unimportant emails
- Save time with smart email triage

How it works
1. A scheduled trigger runs the workflow (you set how often).
2. It fetches all read emails older than 45 days from Gmail.
3. Each email is passed to an AI model (GPT-4) that classifies it as either:
   - Actionable
   - Archive
4. If the AI recommends archiving, the workflow archives the email from your inbox.
5. All other emails are left untouched so you can review them as needed.

How to set up
1. Connect your Gmail (OAuth2) and OpenAI API credentials.
2. Open the "Schedule Trigger" node and choose how often the workflow should run (e.g., daily, weekly).
3. Optionally adjust the Gmail filter in the “List Old Emails” node to change which emails are targeted.
4. Start the workflow and let AI clean up your inbox automatically.

How to customize this workflow to your needs
- **Change the Gmail filter:** Edit the query in the Gmail node to include other conditions (e.g., older_than:30d, specific labels, unread only).
- **Update the AI prompt:** Modify the prompt in the Function node to detect more nuanced categories like “Meeting Invite” or “Newsletter.”
- **Adjust schedule frequency:** Change how often the cleanup runs (e.g., hourly, daily).
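The "read emails older than 45 days" filter maps directly onto Gmail's standard search operators. A small sketch of how the query string could be built if you want it configurable; the function name and options are illustrative, not the template's actual node configuration.

```javascript
// Sketch: build the Gmail search string for the "List Old Emails" step.
// older_than and is:read are standard Gmail search operators.
function buildGmailQuery({ olderThanDays = 45, onlyRead = true } = {}) {
  const parts = [`older_than:${olderThanDays}d`];
  if (onlyRead) parts.push("is:read");
  return parts.join(" ");
}
```

Swapping in `older_than:30d`, label filters, or `is:unread` is then a one-argument change rather than an edit inside the node.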
by Guillaume Duvernay
Create a Telegram bot that answers questions using AI-powered web search from Linkup and an LLM agent (GPT-4.1). This template handles both text and voice messages (voice is transcribed via a Mistral model by default), routes queries through an agent that can call a Linkup tool to fetch up-to-date information from the web, and returns concise, Telegram-friendly replies. A security switch lets you restrict use to a single Telegram username for private testing, or remove the filter to make the bot public.

Who is this for?
- **Anyone needing quick answers:** Build a personal assistant that can look up current events, facts, and general knowledge on the web.
- **Support & ops teams:** Provide quick, web-sourced answers to user questions without leaving Telegram.
- **Developers & automation engineers:** Use this as a reference for integrating agents, transcription, and web search tools inside n8n.
- **No-code builders:** Quickly deploy a chat interface that uses Linkup for accurate, source-backed answers from the web.

What it does / What problem does this solve?
- **Provides accurate, source-backed answers:** Routes queries to **Linkup** so replies are grounded in up-to-date web search results instead of the LLM's static knowledge.
- **Handles voice & text transparently:** Accepts Telegram voice messages, transcribes them (via the **Mistral** API node by default), and treats transcripts the same as typed text.
- **Simple agent + tool architecture:** Uses a **LangChain AI Agent** with a **Web search** tool to separate reasoning from information retrieval.
- **Privacy control:** Includes a **Myself?** filter to restrict access to a specific Telegram username for safe testing.

How it works
1. Trigger: The Telegram Trigger receives incoming messages (text or voice).
2. Route: The Message Router detects voice vs. text. Voice files are fetched with Get Audio File.
3. Transcribe: Mistral transcribe receives the audio file and returns a transcript; the transcript or text is normalized into preset_user_message and consolidated in Consolidate user message.
4. Agent: The AI Agent (configured with GPT-4.1-mini) runs with a system prompt that instructs it to call the Web search tool when up-to-date knowledge is required.
5. Respond: The agent output is sent back to the user via Telegram answer.

How to set up
1. Create a Linkup account: Sign up at https://linkup.so to get your API key. They offer a free tier with monthly credits.
2. Add credentials in n8n: Configure Telegram API, OpenAI (or your LLM provider), and Mistral Cloud credentials in n8n.
3. Configure the Linkup tool: In the Web search node, find the "Headers" section. In the Authorization header, replace Bearer <your-linkup-api-key> with your actual Linkup API key.
4. Set Telegram privacy (optional): Edit the Myself? If node and replace <Replace with your Telegram username> with your username to restrict access. Remove the node to allow public use.
5. Adjust transcription (optional): Swap the Mistral transcribe HTTP node for another provider (OpenAI Whisper, etc.).
6. Connect the LLM: In the OpenAI Chat Model node, add your OpenAI API key (or configure another LLM node) and ensure the AI Agent node references this model.
7. Activate the workflow and test by messaging your bot in Telegram.

Requirements
- An n8n instance (cloud or self-hosted)
- A Telegram bot token added in n8n credentials
- A Linkup account and API key
- An LLM provider account (OpenAI or equivalent) for the OpenAI Chat Model node
- A Mistral API key (or other transcription provider) for voice transcription

How to take it further
- **Add provenance & sources:** Parse Linkup responses and include short citations or source links in the agent replies.
- **Rich replies:** Use Telegram media (images, files) or inline keyboards to create follow-up actions (open web pages, request feedback, escalate to humans).
- **Multi-user access control:** Replace the single-username filter with a list or role-based access system (Airtable or Google Sheets lookup) to allow multiple trusted users.
- **Logging & analytics:** Save queries and agent responses to **Airtable** or **Google Sheets** for monitoring, quality checks, and prompt improvement.
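Under the hood, the Web search node is a plain HTTP call to Linkup with the Bearer header described above. A minimal sketch, assuming Linkup's `/v1/search` endpoint and the `q`/`depth`/`outputType` body fields as presented in their public documentation; verify against docs.linkup.so before relying on it.

```javascript
// Hedged sketch of the Linkup search request behind the Web search tool.
// Endpoint path and body fields are assumptions based on Linkup's docs.
async function linkupSearch(apiKey, query, fetchImpl = fetch) {
  const res = await fetchImpl("https://api.linkup.so/v1/search", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ q: query, depth: "standard", outputType: "sourcedAnswer" }),
  });
  if (!res.ok) throw new Error(`Linkup error ${res.status}`);
  return res.json();
}
```

The `sourcedAnswer` output type is what makes the provenance extension above straightforward: the response carries the sources alongside the answer.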
by iamvaar
YouTube explanation: [https://youtu.be/KgmNiV7SwkU](https://youtu.be/KgmNiV7SwkU)

This n8n workflow is designed to automate the initial intake and scheduling for a law firm. It's split into two main parts:
1. New Inquiry Handling: Kicks off when a potential client fills out a JotForm, saves their data, and sends them an initial welcome message on WhatsApp.
2. Appointment Scheduling: Activates when the client replies on WhatsApp, allowing an AI agent to chat with them to schedule a consultation.

Here’s a detailed breakdown of the prerequisites and each node.

Prerequisites
Before building this workflow, you'll need accounts and some setup for each of the following services:

JotForm
- **JotForm Account:** You need an active JotForm account.
- **A Published Form:** Create a form with the exact fields used in the workflow: Full Name, Email Address, Phone Number, I am a..., Legal Service of Interest, Brief Message, and How Did You Hear About Us?.
- **API Credentials:** Generate API keys from your JotForm account settings to connect it with n8n.

Google
- **Google Account:** To use Google Sheets and Google Calendar.
- **Google Sheet:** Create a new sheet named "Law Client Enquiries". The first row must have these exact headers: Full Name, Email Address, Phone Number, client type, Legal Service of Interest, Brief Message, How Did You Hear About Us?.
- **Google Calendar:** An active calendar to manage appointments.
- **Google Cloud Project:**
  - Service Account Credentials (for Sheets): In the Google Cloud Console, create a service account, generate JSON key credentials, and enable the Google Sheets API. You must then share your Google Sheet with the service account's email address (e.g., automation-bot@your-project.iam.gserviceaccount.com).
  - OAuth Credentials (for Calendar): Create OAuth 2.0 Client ID credentials to allow n8n to access your calendar on your behalf. You'll need to enable the Google Calendar API.
  - Gemini API Key: Enable the Vertex AI API in your Google Cloud project and generate an API key to use the Google Gemini models.

WhatsApp
- **Meta Business Account:** Required to use the WhatsApp Business Platform.
- **WhatsApp Business Platform Account:** You need to set up a business account and connect a phone number to it. This is **different** from the regular WhatsApp or WhatsApp Business app.
- **API Credentials:** Get the necessary access tokens and IDs from your Meta for Developers dashboard to connect your business number to n8n.

PostgreSQL Database
- **A running PostgreSQL instance:** This can be hosted anywhere (e.g., AWS, DigitalOcean, Supabase). The AI agent needs it to store and retrieve conversation history.
- **Database Credentials:** You'll need the host, port, user, password, and database name to connect n8n to it.

Node-by-Node Explanation
The workflow is divided into two distinct logical flows.

Flow 1: New Client Intake from JotForm
This part triggers when a new client submits your form.

JotForm Trigger
- What it does: This is the starting point. It automatically runs the workflow whenever a new submission is received for the specified JotForm (Form ID: 252801824783057).
- Prerequisites: A JotForm account and a created form.

Append or update row in sheet (Google Sheets)
- What it does: It takes the data from the JotForm submission and adds it to your "Law Client Enquiries" Google Sheet.
- How it works: It uses the appendOrUpdate operation. It tries to find a row where the "Email Address" column matches the email from the form. If it finds a match, it updates that row; otherwise, it appends a new row at the bottom.
- Prerequisites: A Google Sheet with the correct headers, shared with your service account.

AI Agent
- What it does: This node crafts the initial welcome message to be sent to the client.
- How it works: It uses a detailed prompt that defines a persona ("Alex," a legal intake assistant) and instructs the AI to generate a professional WhatsApp message.
  It dynamically inserts the client's name and service of interest from the Google Sheet data into the prompt.
- Connected Node: It's powered by the Google Gemini Chat Model.

Send message (WhatsApp)
- What it does: It sends the message generated by the AI Agent to the client.
- How it works: It takes the client's phone number from the data (Phone Number column) and the AI-generated text (output from the AI Agent node) to send the message via the WhatsApp Business API.
- Prerequisites: A configured WhatsApp Business Platform account.

Flow 2: AI-Powered Scheduling via WhatsApp
This part triggers when the client replies to the initial message.

WhatsApp Trigger
- What it does: This node listens for incoming messages on your business's WhatsApp number. When a client replies, it starts this part of the workflow.
- Prerequisites: A configured WhatsApp Business Platform account.

If node
- What it does: It acts as a simple filter. It checks if the incoming message text is empty. If it is (e.g., a status update), the workflow stops. If it contains text, it proceeds to the AI agent.

AI Agent1
- What it does: This is the main conversational brain for scheduling. It handles the back-and-forth chat with the client.
- How it works: Its prompt is highly detailed, instructing it to act as "Alex" and follow a strict procedure for scheduling. It has access to several "tools" to perform actions.
- Connected Nodes:
  - Google Gemini Chat Model1: The language model that does the thinking.
  - Postgres Chat Memory: Remembers the conversation history with a specific user (keyed by their WhatsApp ID), so the user doesn't have to repeat themselves.
  - Tools: Know about the user enquiry, GET MANY EVENTS..., and Create an event.

AI Agent Tools (what the AI can *do*)
- Know about the user enquiry (Google Sheets Tool): When the AI needs to know who it's talking to, it uses this tool. It takes the user's phone number and looks up their original enquiry details in the "Law Client Enquiries" sheet.
- GET MANY EVENTS... (Google Calendar Tool): When a client suggests a date, the AI uses this tool to check your Google Calendar for any existing events on that day to see if you're free.
- Create an event (Google Calendar Tool): Once a time is agreed upon, the AI uses this tool to create the event in your Google Calendar, adding the client as an attendee.

Send message1 (WhatsApp)
- What it does: Sends the AI's response back to the client. This could be a confirmation that the meeting is booked, a question asking for their email, or a suggestion for a different time if the requested slot is busy.
- How it works: It sends the output text from AI Agent1 to the client's WhatsApp ID, continuing the conversation.
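The If node's "is the message text empty?" check comes down to pulling the text body out of Meta's webhook payload. A sketch, assuming the standard WhatsApp Business Platform webhook shape; status-only events carry no `messages` array, which is exactly the case the filter stops.

```javascript
// Sketch of the empty-message filter. Meta's WhatsApp webhook nests text
// messages under entry[0].changes[0].value.messages[0].text.body;
// status updates carry a statuses array instead, so this returns null.
function extractMessageText(payload) {
  const msg = payload?.entry?.[0]?.changes?.[0]?.value?.messages?.[0];
  const text = msg?.text?.body?.trim();
  return text ? text : null;
}
```

A `null` result means the workflow should stop; any string is handed on to AI Agent1.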
by Tomohiro Goto
🧠 How it works
This workflow enables automatic translation in Slack using n8n and OpenAI. When a user types /trans followed by text, n8n detects the language and replies with the translated version via Slack.

⚙️ Features
- Detects the input language automatically
- Translates between Japanese ↔ English using GPT-4o-mini (temperature 0.2 for stability)
- Sends a quick “Translating...” acknowledgement to avoid Slack’s 3-second timeout
- Posts the translated text back to Slack (public or private, selectable)
- Supports overrides like en: こんにちは or ja: hello

💡 Perfect for
- Global teams communicating in Japanese and English
- Developers learning how to connect Slack + OpenAI + n8n

🧩 Notes
- Use the sticky notes inside the workflow for setup details.
- Duplicate and modify it to support mentions, group messages, or other language pairs.
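The en:/ja: override can be parsed before the text ever reaches the model, as in this minimal sketch; the function name and fallback behaviour are illustrative, and the template's actual parsing may differ.

```javascript
// Parse an optional "en:" or "ja:" prefix that forces the target language;
// anything else falls back to automatic language detection.
function parseTransCommand(text) {
  const m = text.match(/^(en|ja):\s*([\s\S]+)$/i);
  if (m) return { target: m[1].toLowerCase(), text: m[2].trim() };
  return { target: "auto", text: text.trim() };
}
```

With the target resolved up front, the OpenAI prompt only needs to say "translate to English" or "translate to Japanese" instead of asking the model to infer the direction.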