by Md Sagor Khan
## ⚡ How it works

This workflow automates first responses to new Zendesk tickets using AI and your internal knowledge base.

- **Webhook trigger** fires whenever a new ticket is created in Zendesk.
- **Ticket details** (subject, description, requester info) are extracted.
- **Knowledge base retrieval** – the workflow searches a Supabase vector store (with OpenAI embeddings) for the most relevant KB articles.
- **AI assistant (RAG agent)** drafts a professional reply using the retrieved KB articles and conversation memory stored in Postgres.
- **Decision logic:** if no relevant KB info is found (or the query is sensitive, e.g. KYC, refunds, or account deletion), the workflow sends a fallback response and tags the ticket for human review. Otherwise, it posts the AI-generated reply and tags the ticket with `ai_reply`.
- **Logging & context memory** ensure future ticket updates are aware of past interactions.

## 🔧 Set up steps

This workflow takes about 15–30 minutes to set up.

1. Connect credentials for Zendesk, OpenAI, Supabase, and Postgres.
2. Prepare your knowledge base: store support content in Supabase (`documents` table) and embed it using the provided Embeddings node.
3. Set up the Postgres memory table (`zendesk_ticket_histories`) to store conversation history.
4. Update your Zendesk domain in the HTTP Request nodes (`<YOUR_ZENDESK_DOMAIN>`).
5. Deploy the webhook URL in Zendesk triggers so new tickets flow into n8n.
6. Test by creating a sample ticket and verifying that:
   - AI replies appear in Zendesk
   - the correct tags (`ai_reply` or `human_requested`) are applied
   - logs are written to Postgres
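The decision logic above can be sketched in a few lines. This is an illustrative stand-in for the workflow's IF node, not its exact expressions; the keyword list and field names are assumptions.

```python
# Route a ticket: AI reply only when KB matches exist and the topic is not
# sensitive. Keyword list is an assumed approximation of the template's rules.
SENSITIVE_KEYWORDS = ("kyc", "refund", "delete my account", "account deletion")

def route_ticket(ticket_text: str, kb_matches: list[dict]) -> dict:
    """Return the reply mode and the Zendesk tag to apply."""
    text = ticket_text.lower()
    is_sensitive = any(kw in text for kw in SENSITIVE_KEYWORDS)
    if not kb_matches or is_sensitive:
        return {"mode": "fallback", "tag": "human_requested"}
    return {"mode": "ai_reply", "tag": "ai_reply"}

print(route_ticket("How do I reset my password?", [{"title": "Password reset"}]))
print(route_ticket("Please delete my account", [{"title": "Account FAQ"}]))
```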
by satoshi
# Create FAQ articles from Slack threads to Notion and Zendesk

This workflow helps you capture "tribal knowledge" shared in Slack conversations and automatically converts it into structured documentation. By simply adding a specific reaction (default: 📚) to a message, the workflow aggregates the thread, uses AI to summarize it into a Q&A format, and publishes it to your knowledge base (Notion and Zendesk).

## Who is this for?
- **Customer Support Teams** who want to turn internal troubleshooting discussions into public help articles.
- **Knowledge Managers** looking to reduce the friction of documentation.
- **Development Teams** wanting to archive technical decisions made in Slack threads.

## What it does
1. **Trigger:** Watches for a specific emoji reaction (📚 `:book:`) on a Slack message.
2. **Data Collection:** Fetches the parent message and all replies in the thread to get the full context.
3. **AI Processing:** Uses OpenAI to analyze the conversation, summarize the solution, and format it into a clear Question & Answer structure.
4. **Publishing:** Creates a new page in a Notion database with tags and summaries. (Optional) Drafts a new article in Zendesk.
5. **Notification:** Replies to the original Slack thread with links to the newly created documentation.

## Requirements
- **n8n** (self-hosted or Cloud)
- **Slack** workspace (with an App installed that has permissions to read channels and reactions)
- **OpenAI** API key
- **Notion** account with an Integration Token
- **Zendesk** account (optional; can be removed if not needed)

## How to set up
1. **Configure Credentials:** Set up authentication for Slack, OpenAI, Notion, and Zendesk in n8n.
2. **Set up Notion:** Create a database in Notion with the following properties:
   - Name (Title)
   - Summary (Text/Rich Text)
   - Tags (Multi-select)
   - Source (URL)
   - Channel (Select or Text)
3. **Update the Configuration Node:** Open the `Workflow Configuration1` node (Set node) and replace the placeholder values:
   - `slackWorkspaceId`: your Slack Workspace ID (e.g., `T01234567`).
   - `notionDatabaseId`: the ID of your Notion database.
- `zendeskSectionId`: (optional) the ID of the Zendesk section where articles should be created.

**Slack App Scopes:** Ensure your Slack App has the following scopes: `reactions:read`, `channels:history`, `groups:history`, `chat:write`.

## How to customize
- **Change the trigger:** If you prefer a different emoji (e.g., 📝 or 💡), update the "Right Value" in the **IF - :book: Reaction Check** node.
- **Modify the prompt:** Edit the **OpenAI** node to change how the AI formats the answer (e.g., ask it to be more technical or more casual).
- **Remove Zendesk:** If you don't use Zendesk, simply delete the **Zendesk** node and remove the reference to it in the final **Slack - Notify Completion** node.
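The Data Collection and AI Processing steps can be sketched as a single pure function: flatten the thread messages (as returned by Slack's `conversations.replies`) into a transcript, then wrap it in a summarization prompt. The prompt wording here is an assumption, not the template's exact prompt.

```python
# Build the Q&A summarization prompt from a Slack thread. Message fields
# ("user", "text") follow Slack's conversations.replies message schema.
def thread_to_prompt(messages: list[dict]) -> str:
    transcript = "\n".join(
        f"<@{m.get('user', 'unknown')}>: {m.get('text', '')}" for m in messages
    )
    return (
        "Summarize the following Slack thread into a Question & Answer "
        "article with a short title and 3-5 tags:\n\n" + transcript
    )

thread = [
    {"user": "U01", "text": "How do I rotate the staging API key?"},
    {"user": "U02", "text": "Run the rotate-keys job, then update the vault entry."},
]
print(thread_to_prompt(thread))
```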
by Yoshino Haruki
## Who is this for?
This workflow is ideal for filmmakers, video producers, content creators, and location managers who need to quickly build a database of potential shooting locations without manual research and data entry.

## How it works
1. **Chat Input:** Start the workflow via the n8n chat interface and enter a search query (e.g., "Quiet cafes in Kyoto" or "Cyberpunk streets").
2. **Search:** The workflow queries the Google Maps Places API to find matching real-world locations.
3. **AI Analysis:** An AI agent (via OpenRouter) reviews the location details and writes a short, creative "Director's Commentary" highlighting its cinematic appeal.
4. **Data Entry:** The location name, address, rating, Google Maps link, and the AI's commentary are automatically saved to a Google Sheet.
5. **Notification:** Once all locations are processed, a summary link is sent to your Slack channel.

## Prerequisites
- **n8n version**: 1.0 or later
- **Google Cloud Platform**: API key with "Places API (New)" enabled
- **Google Sheets**: a formatted sheet (see setup below)
- **Slack**: an App/Bot token with chat-writing permissions
- **OpenRouter** (or OpenAI/Anthropic): API key for the LLM

## How to set up
1. **Google Sheet:** Create a new sheet with the following headers in the first row:
   - 場所名 (Name)
   - 住所 (Address)
   - 評価(星) (Rating)
   - AI監督のコメント (AI Comment)
   - GoogleMapリンク (Link)
2. **Credentials:** Configure your credentials for Google Maps, Google Sheets, Slack, and OpenRouter within n8n.
3. **Configuration Node:** Open the node named "Workflow Configuration" and input your specific details:
   - `googleMapsApiKey`: your Google Cloud API key.
   - `slackChannelId`: the channel ID where you want notifications (e.g., `C0123456`).
   - `googleSheetId`: the string of characters found in your Google Sheet URL.

## Customization
- **Adjust results:** Change the **Limit** node settings to process more locations per run (the default is 2 to save API credits during testing).
- **Change persona:** Edit the "System Prompt" in the **AI Location Analyzer** node to change the AI's tone (e.g., from "Film Director" to "Real Estate Agent" or "Travel Blogger").
- **Swap LLM:** You can easily replace the OpenRouter node with an OpenAI or Anthropic node if you prefer a different model.
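The Search step hits Places API (New) Text Search. The sketch below only builds the request (no network call) so the shape is visible; the field mask matches the columns the workflow stores, and `pageSize` mirrors the Limit node's default of 2.

```python
# Build a Places API (New) searchText request. Text Search (New) takes the
# API key and field mask as headers, and the query in the JSON body.
def build_places_request(query: str, api_key: str, max_results: int = 2) -> dict:
    return {
        "url": "https://places.googleapis.com/v1/places:searchText",
        "headers": {
            "Content-Type": "application/json",
            "X-Goog-Api-Key": api_key,
            "X-Goog-FieldMask": (
                "places.displayName,places.formattedAddress,"
                "places.rating,places.googleMapsUri"
            ),
        },
        "body": {"textQuery": query, "pageSize": max_results},
    }

req = build_places_request("Quiet cafes in Kyoto", "YOUR_API_KEY")
print(req["body"])
```

In the n8n HTTP Request node, these become the URL, header parameters, and JSON body fields.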
by Jeremiah Wright
## Who’s it for
Recruiters, freelancers, and ops teams who scan job briefs and want quick, relevant n8n template suggestions, saved in a Google Sheet for tracking.

## What it does
Parses any job text, extracts exactly 5 search keywords, queries the n8n template library, and appends the matched templates (ID, name, description, author) to Google Sheets, including the canonical template URL.

## How it works
1. A trigger receives a message or pasted-in job brief.
2. An LLM agent returns 5 concise search terms (JSON).
3. For each keyword, an HTTP Request node searches the n8n templates API.
4. Results are split and written to Google Sheets; the workflow builds the public URL from ID + slug.

## Set up
1. Add credentials for OpenAI (or swap the LLM node to your provider).
2. Create a Google Sheet with columns: Template ID, Name, User, Description, URL.
3. In the ⚙️ Config node, set: `GOOGLE_SHEETS_DOC_ID`, `GOOGLE_SHEET_NAME`, `N8N_TEMPLATES_API_URL`.

## Requirements
- n8n (cloud or self-hosted)
- OpenAI (or alternative LLM) credentials
- Google Sheets OAuth credentials

## Customize
- Change the model/system prompt to tailor keyword extraction.
- Swap Google Sheets for Airtable/Notion.
- Extend filters (e.g., only AI/CRM templates) before writing rows.
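The "public URL from ID + slug" step can be sketched as follows. Public n8n template pages use the `n8n.io/workflows/<id>-<slug>` pattern; deriving the slug from the template name here is an illustrative assumption (the templates API response may already include a slug you can use directly).

```python
import re

# Slugify the template name and join it with the numeric ID to form the
# canonical template URL.
def template_url(template_id: int, name: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return f"https://n8n.io/workflows/{template_id}-{slug}"

print(template_url(1700, "Chat with Google Sheets data"))
```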
by Teng Wei Herr
## How it works
You provide a list of prompts and a system instruction, and the workflow batches them into a single OpenAI Batch API request. The batch job is tracked in a Supabase `openai_batches` table. A cron job polls OpenAI every 5 minutes; once the batch completes, the results are decoded and stored back in Supabase.

## Set up steps
1. Create the `openai_batches` table in Supabase. The schema is in the yellow sticky note.
2. Add your OpenAI and Supabase/Postgres credentials to the workflow.
3. Replace the mock data with your actual prompts and you're ready to go!
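The batching step packs each prompt into one line of a JSONL file, which is what the Batch API expects: one request object per line, each with a unique `custom_id` used later to match results back to their source rows. The model name below is illustrative.

```python
import json

# Build the JSONL body for an OpenAI Batch API upload: one chat-completions
# request per prompt, sharing a single system instruction.
def build_batch_lines(system: str, prompts: list[str],
                      model: str = "gpt-4o-mini") -> str:
    lines = []
    for i, prompt in enumerate(prompts):
        lines.append(json.dumps({
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [
                    {"role": "system", "content": system},
                    {"role": "user", "content": prompt},
                ],
            },
        }))
    return "\n".join(lines)  # upload as a .jsonl file with purpose="batch"

print(build_batch_lines("You are terse.", ["Define RAG.", "Define OPRO."]))
```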
by Shun Nakayama
This workflow implements cutting-edge concepts from Google DeepMind's OPRO (Optimization by PROmpting) and Stanford's DSPy to automatically refine AI prompts. It iteratively generates, evaluates, and optimizes responses against a ground truth, allowing you to "compile" your prompts for maximum accuracy.

## Why this is powerful
Instead of manually tweaking prompts (trial and error), this workflow treats prompt engineering as an optimization problem:
- **OPRO-style Optimization:** The "Optimizer" LLM analyzes past performance scores and reasons to deduce a better prompt.
- **DSPy-style Logic:** It separates the "Logic" (workflow) from the "Parameters" (prompts), allowing the system to self-correct until it matches the ground truth.

## How it works
1. **Define:** Set your initial prompt and a test case with the expected answer (ground truth).
2. **Generate:** The workflow generates a response using the current prompt.
3. **Evaluate:** An AI evaluator scores the response (0–100) based on accuracy and format.
4. **Optimize:** If the score is low, the optimizer AI analyzes the failure and rewrites the prompt.
5. **Loop:** The process repeats until the score reaches 95/100 or the loop limit is hit.

## Setup steps
1. **Configure OpenAI:** Ensure you have an OpenAI credential set up in the OpenAI Chat Model node.
2. **Customize:** Open the Define Initial Prompt & Test Data node and set your `initial_prompt`, `test_input`, and `ground_truth`.
3. **Run:** Execute the workflow and check the Manage Loop & State node output for the optimized prompt.
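The generate → evaluate → optimize loop can be sketched with the LLM calls stubbed out as parameters, so only the control flow (the part the Manage Loop & State node owns) is shown. The target score of 95 and the loop limit follow the description above; the stub functions are toy stand-ins, not real model calls.

```python
# Skeleton of the OPRO-style optimization loop: keep a history of
# (prompt, score, reason) and rewrite the prompt from it until the
# target score or the loop limit is reached.
def optimize_prompt(initial_prompt, generate, evaluate, rewrite,
                    target=95, max_loops=5):
    prompt, history = initial_prompt, []
    for _ in range(max_loops):
        answer = generate(prompt)
        score, reason = evaluate(answer)
        history.append((prompt, score, reason))
        if score >= target:
            break
        prompt = rewrite(prompt, history)  # OPRO: use past scores to improve
    return prompt, history

# Toy stand-ins for the three LLM roles:
best, log = optimize_prompt(
    "v1",
    generate=lambda p: f"answer({p})",
    evaluate=lambda a: (100 if "v2" in a else 50, "wrong format"),
    rewrite=lambda p, h: "v2",
)
print(best, len(log))
```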
by CentralStationCRM
## Overview
This template benefits anyone who wants to:
- automate web research on a prospect company
- compile that research into an easily readable note
- save the note into CentralStationCRM

## Tools in this workflow
- **CentralStationCRM**, the easy and intuitive CRM software for small teams. Here is our API documentation if you want to customize the workflow.
- **ChatGPT**, the well-known AI chatbot
- **Tavily**, a web search service for large language models

### Disclaimer
Tavily Web Search is (as of yet) a community node. You have to activate the use of community nodes inside your n8n account to use this workflow.

## Workflow description
The workflow consists of:
- a Webhook Trigger
- an AI Agent node
- an HTTP Request node

### The Webhook Trigger
The webhook is set up in CentralStationCRM to trigger when a new company is created inside the CRM. The Webhook Trigger node in n8n then fetches the company data from the CRM.

### The AI Agent node
The node uses ChatGPT as its AI chat model and two Tavily Web Search operations ('search for information' and 'extract URLs') as tools. Additionally, it uses a simple prompt as a tool, telling the AI model to re-iterate on the research data if applicable.

The AI Agent node takes the company name and prompts ChatGPT to "do a deep research" on this company on the web. "The research shall help sales people get a good overview about the company and allow to identify potential opportunities." The AI Agent then formats the results into Markdown and passes them to the next node.

### The CentralStationCRM protocol node
This is an HTTP Request to the CentralStationCRM API. It creates a 'protocol' (the API's name for notes in the CRM) with the Markdown data it received from the previous node. This protocol is saved in CentralStationCRM, where it can easily be accessed as a note when clicking on the new company entry.

## Customization ideas
Even though this workflow is pretty simple, it poses interesting possibilities for customization.
For example, you can alter the webhook trigger (in CentralStationCRM and n8n) to fire when a person is created. You then have to alter the AI prompt as well and make sure the third node adds the research note to the person, not a company, via the CentralStationCRM API.

You could also swap the AI model used here for another one, compare the resulting research data, and get a deeper understanding of AI chat models.

Then of course there is the prompt itself. You can definitely double down on the information you are most interested in and refine your prompt to make the AI bot focus on these areas of search. Start experimenting a bit!

## Preconditions
For this workflow to work, you need:
- a CentralStationCRM account with API access
- an n8n account with API access
- an OpenAI account with API access

Have fun with our workflow!
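The shape of the final HTTP Request node can be sketched as below. The endpoint path, header name, and field names here are placeholders for illustration only; check the CentralStationCRM API documentation linked above for the real protocol payload. Only the overall pattern (an authenticated POST carrying the Markdown note and the company reference) comes from the workflow description.

```python
# Build (not send) the protocol-creation request. URL path, "X-apikey"
# header, and body keys are HYPOTHETICAL placeholders; consult the
# CentralStationCRM API docs for the actual schema.
def build_protocol_request(account: str, api_key: str,
                           company_id: int, markdown_note: str) -> dict:
    return {
        "method": "POST",
        "url": f"https://{account}.centralstationcrm.net/...",  # see API docs
        "headers": {"X-apikey": api_key, "Content-Type": "application/json"},
        "body": {"company_id": company_id, "content": markdown_note},
    }

req = build_protocol_request("demo", "KEY", 42, "## Research\n...")
print(req["method"], req["body"]["company_id"])
```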
by Afareayo Soremekun
# ChannelCrawler API to Google Slides Template
This template shows how you can use the ChannelCrawler API alongside ChatGPT (or any LLM) to generate Google Slides using images and text received from the API.

## How it works
1. A user inputs the link(s) to the YouTube channel(s) of their target creators.
2. The list is parsed by a Python script, returning it in a format that can be run in a loop.
3. The workflow iterates over each channel URL.
4. The URL is passed to the ChannelCrawler API, which returns a JSON of the creator's profile.
5. The OpenAI node processes the description and content of the creator's profile to create a summary.
6. We retrieve the Google Slides presentation using the Get Presentation node.
7. We use the Google Slides API to duplicate an existing page and pull back the original page, as it has a new revision ID.
8. We use the Google Slides API to replace the image placeholder in the presentation.
9. Lastly, we update the other placeholders with text from the ChannelCrawler and ChatGPT outputs.

## How to use
- On executing the workflow, a pop-up form appears where you can insert the YouTube channel URLs.
- On submission, provided the prerequisites are set up, the rest of the workflow is triggered.

## Use cases
You can create profiles on influencers and creators with extensive data points from the ChannelCrawler API and consistent summarization from GPT.

## Prerequisites
- **ChannelCrawler account** – there's a great pay-as-you-go option for access to the API.
- **OpenAI account** – you can access free OpenAI credit if you are a first-time n8n user! Check the credentials options in the node.
- **Google account** (for Slides) – you should have a Google account, or sign up for Google with your non-Google email.
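The parsing step (step 2 above) can be sketched as a small Python function that turns the raw form input into loop-ready items. The exact script in the template may differ; comma/newline separation and the `{"url": ...}` item shape are assumptions.

```python
import re

# Split the raw form field into individual YouTube channel URLs and wrap
# each as an item the workflow loop can iterate over.
def parse_channel_urls(raw: str) -> list[dict]:
    urls = [u.strip() for u in re.split(r"[,\n]+", raw) if u.strip()]
    return [{"url": u} for u in urls if "youtube.com" in u]

raw_input = "https://www.youtube.com/@creatorA, https://www.youtube.com/@creatorB"
print(parse_channel_urls(raw_input))
```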
by Robert Breen
This n8n workflow template creates an intelligent data analysis chatbot that can answer questions about data stored in Google Sheets using OpenAI's GPT-5 Mini model. The system automatically analyzes your spreadsheet data and provides insights through natural language conversations.

## What This Workflow Does
- **Chat Interface:** Provides a conversational interface for asking questions about your data.
- **Smart Data Analysis:** Uses AI to understand column structures and data relationships.
- **Google Sheets Integration:** Connects directly to your Google Sheets data.
- **Memory Buffer:** Maintains conversation context for follow-up questions.
- **Automated Column Detection:** Automatically identifies and describes your data columns.

## 🚀 Try It Out!

### 1. Set Up OpenAI Connection
1. Visit the OpenAI API Keys page.
2. Go to OpenAI Billing and add funds to your billing account.
3. Copy your API key into your OpenAI credentials in n8n (or your chosen platform).

### 2. Prepare Your Google Sheet
Your data must follow the Sample Marketing Data format:
- The **first row** contains column names.
- Data should be in rows 2–100.
- Log in using OAuth, then select your workbook and sheet.

### 3. Ask Questions of Your Data
You can ask natural language questions to analyze your marketing data, such as:
- **Total spend** across all campaigns
- Spend for **Paid Search only**
- **Month-over-month changes** in ad spend
- **Top-performing campaigns** by conversion rate
- **Cost per lead** for each channel

## 📬 Need Help or Want to Customize This?
📧 rbreen@ynteractive.com
🔗 LinkedIn
🔗 n8n Automation Experts
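To make the example questions concrete, here are the kinds of aggregations the AI performs on the sheet rows, written out by hand. The rows mirror the "first row = column names" layout; the column names (`Channel`, `Spend`, `Leads`) are assumptions for illustration.

```python
# Toy rows standing in for the Google Sheets data (rows 2-100).
rows = [
    {"Channel": "Paid Search", "Spend": 1200.0, "Leads": 30},
    {"Channel": "Social", "Spend": 800.0, "Leads": 16},
    {"Channel": "Paid Search", "Spend": 600.0, "Leads": 12},
]

# Total spend across all campaigns.
total_spend = sum(r["Spend"] for r in rows)
# Spend for Paid Search only.
paid_search = sum(r["Spend"] for r in rows if r["Channel"] == "Paid Search")
# Cost per lead for each channel.
cost_per_lead = {
    ch: sum(r["Spend"] for r in rows if r["Channel"] == ch)
        / sum(r["Leads"] for r in rows if r["Channel"] == ch)
    for ch in {r["Channel"] for r in rows}
}
print(total_spend, paid_search, cost_per_lead)
```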
by Alberto Idrio
# Gmail → AI Summary → Notion + Audio Digest
This n8n workflow turns incoming Gmail emails into structured AI summaries and optional audio digests, automatically delivered to Notion and Google Drive. It is designed to reduce email overload by transforming raw messages into concise, readable, and listenable content.

## What this workflow does
On a scheduled basis, the workflow:
1. Retrieves Gmail messages (all subjects or filtered)
2. Marks processed emails as read to avoid duplicates
3. Extracts and normalizes the email body
4. Uses OpenAI to generate a clean, structured summary

From the summary, the workflow branches into two outputs:

**Text summary**
- The final AI-generated summary is appended as a block in Notion
- Ideal for daily logs, knowledge bases, or team dashboards

**Audio transcript (optional)**
- The summary text is converted into speech using a TTS model
- The audio file is uploaded to Google Drive
- A shareable link is generated
- The audio reference is added back into Notion

## Key features
- Automated Gmail ingestion
- AI-powered email summarization
- JavaScript preprocessing for clean input
- Notion integration for structured storage
- Text-to-speech audio generation
- Google Drive hosting for audio files
- Error-aware branching for TTS generation
- Idempotent and schedule-safe execution

## Typical use cases
- Daily or weekly email digests
- Executive summaries of inbox activity
- Audio briefings you can listen to on the go
- Knowledge capture from important emails
- Reducing cognitive load from long email threads

## Who this template is for
- Professionals dealing with high email volume
- Teams using Notion as a central workspace
- n8n users building AI productivity automations
- Anyone who wants emails summarized instead of skimmed

This template is designed to be practical, extensible, and production-ready, and can be easily adapted to:
- multiple Gmail labels
- different summary styles
- alternative TTS providers
- additional destinations (Slack, Docs, databases)
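The "extracts and normalizes the email body" step can be sketched as below: strip HTML tags, drop quoted reply lines, and collapse whitespace before the text goes to OpenAI. The template does this in a JavaScript Code node; this Python version is an illustrative equivalent, not the template's exact preprocessing.

```python
import re

# Normalize a raw email body: remove HTML tags, drop "> quoted" reply
# lines, and collapse runs of whitespace into single spaces.
def normalize_email_body(raw: str) -> str:
    text = re.sub(r"<[^>]+>", " ", raw)
    lines = [l for l in text.splitlines() if not l.lstrip().startswith(">")]
    return re.sub(r"\s+", " ", " ".join(lines)).strip()

raw = "<div>Hi team,<br>Q3 report attached.</div>\n> On Mon, Bob wrote:\n> old text"
print(normalize_email_body(raw))
```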
by Arkadiusz
## 📝 Workflow Description
This workflow creates a conversational bridge between Telegram / n8n Chat and Home Assistant. It allows users to control smart home devices or request information using natural language (text or voice).

⸻

## 🔑 Key Features
- **Multi-channel input:** works with both Telegram and n8n's chat interface.
- **Voice support:** Telegram voice messages are transcribed to text using OpenAI Whisper.
- **AI-driven assistant:** Google Gemini processes queries in natural language.
- **Home Assistant integration:** uses MCP client tools to execute actions like turning devices on/off, adjusting lights, or broadcasting messages.
- **Memory management:** short-term memory keeps context within conversations.
- **Smart reply routing:** responses are automatically sent back to the correct channel (Telegram or chat).
- **Message formatting:** Telegram replies are beautified (bold, bullet points, inline code, links).

⸻

## 📌 Node Overview
- **Telegram Trigger:** captures incoming Telegram messages (text or voice).
- **Bot Is Typing:** sends a "typing…" action to indicate the bot is working.
- **Voice or Text:** separates voice and text inputs.
- **Get Voice File → Speech to Text → Transcription to ChatInput:** handles Telegram voice notes by downloading the file, transcribing it, and preparing it for the chat pipeline.
- **When Chat Message Received:** captures messages from n8n's built-in chat interface.
- **Process Messages:** normalizes incoming data (input text, source, session ID, voice flag).
- **Home Agent:** main AI agent that processes queries.
- **Google Gemini Chat Model:** language model for intent understanding and conversation.
- **Simple Memory & Simple Memory1:** buffer memories that preserve conversation context.
- **Home Assistant Connector:** MCP client node that executes smart home actions (turn devices on/off, adjust lights, etc.).
- **Reply Router:** routes the assistant's response either to Telegram or to the n8n chat webhook.
- **Telegram Message Beautifier → Telegram Send:** formats and sends responses back to Telegram.
- **Respond to Webhook:** sends responses to n8n chat.

⸻

## 🚀 Example Use Cases
- Send "Turn on the living room lights" via Telegram → the bot triggers a Home Assistant action.
- Ask "What's the temperature in the bedroom?" → the response comes back formatted in Telegram.
- Record a voice note saying "Goodnight mode" → automatically transcribed and executed by Home Assistant.
- Use n8n chat to quickly trigger automations or check device statuses.

⸻

## ⚡️ Benefits
- Unified chat & voice control for Home Assistant.
- AI-powered natural language understanding.
- Works seamlessly across platforms (Telegram & n8n chat).
- Extensible: new tools or intents can be added easily.
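The Process Messages and Reply Router nodes can be sketched together: both triggers are normalized to one shape, and the `source` field later decides where the answer goes. The field names below are illustrative, not the template's exact keys.

```python
# Normalize a Telegram update or an n8n chat payload into one shape.
def normalize(update: dict) -> dict:
    if "message" in update:                      # Telegram trigger payload
        msg = update["message"]
        return {
            "text": msg.get("text", ""),
            "source": "telegram",
            "session_id": str(msg["chat"]["id"]),
            "is_voice": "voice" in msg,
        }
    return {                                     # n8n chat trigger payload
        "text": update.get("chatInput", ""),
        "source": "chat",
        "session_id": update.get("sessionId", ""),
        "is_voice": False,
    }

# Route the assistant's reply back to the originating channel.
def route_reply(normalized: dict) -> str:
    return ("telegram_send" if normalized["source"] == "telegram"
            else "respond_to_webhook")

tg = normalize({"message": {"text": "Turn on the lights", "chat": {"id": 42}}})
print(tg, route_reply(tg))
```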
by Julian Kaiser
# Turn Your Reading Habit into a Content Creation Engine
This workflow is built for one core purpose: to maximize the return on your reading time. It turns your passive consumption of articles and highlights into an active system for generating original content and rediscovering valuable ideas you may have forgotten.

## Why This Workflow is Valuable
- **End Writer's Block Before It Starts:** This workflow is your personal content strategist. Instead of staring at a blank page, you'll start your week with a list of AI-generated content ideas, from LinkedIn posts and blog articles to strategic insights, all based on the topics you're already deeply engaged with. It finds the hidden connections between articles and suggests novel angles for your next piece.
- **Rescue Your Insights from the Digital Abyss:** Readwise is fantastic for capturing highlights, but the best ones can get lost over time. This workflow acts as your personal curator, automatically excavating the most impactful quotes and notes from your recent reading. It doesn't just show them to you; it contextualizes them within the week's key themes, giving them new life and relevance.
- **Create an Intellectual Flywheel:** By systematically analyzing your reading, generating content ideas, and saving those insights back into your "second brain," you create a powerful feedback loop. Your reading informs your content, and the process of creating content deepens your understanding, making every reading session more valuable than the last.

## How it works
This workflow automates the process of generating a "Weekly Reading Insights" summary based on your activity in Readwise.
- **Trigger:** It can be run manually or on a weekly schedule.
- **Fetch Data:** It fetches all articles and highlights you've updated in the last 7 days from your Readwise account.
- **Filter & Match:** It filters for articles that you've read more than 10% of and then finds all the corresponding highlights for those articles.
- **Generate Insights:** It constructs a detailed prompt with your reading data and sends it to an AI model (via OpenRouter) to create a structured analysis of your reading patterns, key themes, and content ideas.
- **Save to Readwise:** Finally, it takes the AI-generated Markdown, converts it to HTML, and saves it back to your Readwise account as a new article titled "Weekly Reading Insights".

## Set up steps
**Estimated setup time:** 5–10 minutes.
1. **Readwise credentials:** Authenticate the two HTTP Request nodes and the two Fetch nodes with your Readwise API token (get it from the Reader API). Also check how to set up Header Auth.
2. **AI model credentials:** Add your OpenRouter API key to the OpenRouter Chat Model node. You can swap this for any other AI model if you prefer.
3. **Customize the prompt:** Open the Prepare Prompt Code node to adjust the persona, questions, and desired output format. This is where you can tailor the AI's analysis to your specific needs.
4. **Adjust the schedule:** Modify the Monday - 09:00 Schedule Trigger to run on your preferred day and time.
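The Filter & Match step above can be written out in plain code: keep documents with more than 10% reading progress, then attach each document's highlights. The field names (`reading_progress` as a fraction, highlights referencing a `parent_id`) are assumptions modeled on Readwise Reader data; the actual keys may vary by API version.

```python
# Filter articles by reading progress and join their highlights by parent id.
def filter_and_match(documents: list[dict], highlights: list[dict]) -> list[dict]:
    read = [d for d in documents if d.get("reading_progress", 0) > 0.10]
    by_parent = {}
    for h in highlights:
        by_parent.setdefault(h.get("parent_id"), []).append(h)
    return [{**d, "highlights": by_parent.get(d["id"], [])} for d in read]

docs = [
    {"id": "a", "title": "Deep Work", "reading_progress": 0.8},
    {"id": "b", "title": "Skimmed", "reading_progress": 0.05},
]
his = [{"parent_id": "a", "text": "Focus is a superpower."}]
print(filter_and_match(docs, his))
```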