by Cheng Siong Chin
### How It Works

This workflow automates enterprise ticket management by combining AI-powered classification with knowledge base retrieval. It receives support tickets via webhook and routes them through multiple AI models (OpenAI ChatGPT, NVIDIA's text classification APIs, and embeddings-based search) to determine optimal resolution strategies. The system generates contextual diagnostic logs, formats responses, updates ticket systems, notifies engineers when escalation is needed, and integrates with knowledge bases for continuous learning. It solves the critical problem of manual ticket sorting and delayed responses by automating intelligent triage, reducing resolution time, and ensuring consistent quality across support operations.

Target audience: support operations teams, technical support managers, and enterprises managing high-volume ticket queues who want to improve efficiency and SLA compliance.

### Setup Steps

1. Configure the OpenAI API key in credentials.
2. Add NVIDIA API credentials for embedding and classification models.
3. Set up Google Sheets for knowledge base storage and retrieval.
4. Connect your ticketing system (Jira, Zendesk, or webhook) for incoming tickets.
5. Link a notification service (Gmail or Slack) for engineer alerts.
6. Map custom fields to your ticket system schema.

### Prerequisites

- OpenAI API account with GPT access
- NVIDIA API credentials (Embeddings & Classification)
- Google Sheets for KB management
- Ticketing system with webhook capability

### Use Cases

- SaaS support teams triaging 100+ daily tickets, reducing manual sorting by 80%.
- Technical support teams escalating complex issues intelligently while documenting solutions.

### Customization

- Swap the OpenAI models for Anthropic's Claude APIs.
- Replace Google Sheets with database systems (PostgreSQL, Airtable).

### Benefits

- Reduces manual ticket sorting by 70-80%, freeing support staff for complex issues.
- Decreases average resolution time through intelligent routing.
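To make the embeddings-based search step concrete, here is a minimal sketch of how KB retrieval can be done in an n8n Code node. The item shape (`ticketEmbedding`, `kbEntries`) is an illustrative assumption, not the template's actual field names.

```javascript
// Minimal sketch for an n8n Code node: rank KB entries by cosine
// similarity to the incoming ticket's embedding. Field names below
// (ticketEmbedding, kbEntries) are illustrative assumptions.
const { ticketEmbedding, kbEntries } = $input.first().json;

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const ranked = kbEntries
  .map((entry) => ({ ...entry, score: cosine(ticketEmbedding, entry.embedding) }))
  .sort((a, b) => b.score - a.score)
  .slice(0, 3); // keep the top 3 candidate resolutions

return ranked.map((entry) => ({ json: entry }));
```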
by 荒城直也
## Create daily AI news digest and send to Telegram

Stay ahead of the rapidly evolving artificial intelligence landscape without the information overload. This workflow acts as your personal AI news editor, automatically curating, summarizing, and visualizing the top stories of the day, delivered directly to your Telegram. It goes beyond simple RSS aggregation by using an AI Agent to rewrite headlines and summaries into a digestible format, and includes a "Chat Mode" where you can ask follow-up questions about the news directly within the n8n interface.

### Who is it for

- **AI Enthusiasts & Researchers:** Keep up with the latest papers and releases without manually checking multiple sites.
- **Tech Professionals:** Get a morning briefing on industry trends to start your day informed.
- **Content Creators:** Find trending topics for newsletters or social media posts effortlessly.

### How it works

1. **News Aggregation:** Every morning at 8:00 AM, the workflow fetches RSS feeds from top tech sources (Google News AI, The Verge, and TechCrunch).
2. **Smart Filtering:** A Code node aggregates the articles, removes duplicates, and ranks them by recency to select the top 5 stories (a sketch of this step appears at the end of this description).
3. **AI Summarization:** An AI Agent (powered by OpenAI) analyzes the selected stories and writes a concise, engaging summary for each.
4. **Visual Generation:** DALL-E generates a unique, futuristic header image based on the day's news context.
5. **Delivery:** The digest is formatted with Markdown and emojis, then sent to your specified Telegram chat.
6. **Interactive Chat:** A separate branch allows you to chat with an AI Agent via the n8n Chat interface to discuss the news or ask general AI questions.

### How to set up

1. **Configure credentials:** Set up your OpenAI API credential and your Telegram API credential.
2. **Get your Telegram Chat ID:** Create a bot with @BotFather on Telegram, send a message to your bot, and use @userinfobot to find your numeric Chat ID.
3. **Update workflow settings:** Open the Workflow Configuration node and paste your Chat ID into the telegramChatId value field.
4. **Activate:** Toggle the workflow to "Active" to enable the daily schedule.

### Requirements

- **n8n version:** Must support LangChain nodes.
- **OpenAI account:** API key with access to GPT-4o-mini (or your preferred model) and DALL-E 3.
- **Telegram account:** To create a bot and receive messages.

### How to customize

- **Change news sources:** Edit the RSS URLs in the Workflow Configuration node to track different topics (e.g., Crypto, Finance, Sports).
- **Adjust personality:** Modify the system prompt in the AI News Summarizer Agent node to change the tone of the summaries (e.g., "explain it like I'm 5" or "highly technical").
- **Change schedule:** Update the Daily 8 AM Trigger node to your preferred time zone and frequency.
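A minimal sketch of what the Smart Filtering Code node can look like. It assumes each incoming item carries `title`, `link`, and `pubDate` fields from the RSS Read nodes; adjust the names to your feeds.

```javascript
// Minimal sketch of the Smart Filtering Code node: merge all RSS items,
// drop duplicate headlines, rank by recency, and keep the top 5.
// Field names (title, link, pubDate) are assumptions based on typical
// RSS Read output.
const seen = new Set();
const articles = [];

for (const item of $input.all()) {
  const { title, link, pubDate } = item.json;
  const key = (title || '').trim().toLowerCase();
  if (!key || seen.has(key)) continue; // skip duplicates
  seen.add(key);
  articles.push({ title, link, date: new Date(pubDate) });
}

articles.sort((a, b) => b.date - a.date); // newest first

return articles.slice(0, 5).map((a) => ({
  json: { title: a.title, link: a.link, date: a.date.toISOString() },
}));
```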
by Kev
## Generate ready-to-publish short-form videos from text prompts using AI

Transform simple text concepts into professional short-form videos complete with AI-generated visuals, narrator voice, background music, and dynamic text overlays - all automatically generated and ready for Instagram, TikTok, or YouTube Shorts.

This workflow demonstrates a cost-effective approach to video automation by combining AI-generated images with audio composition instead of expensive AI video generation. Processing takes 1-2 minutes and outputs professional 9:16 vertical videos optimized for social platforms. The template serves as both a showcase and a building block for larger automation systems, with sticky notes providing clear guidance for customization and extension.

### Who it's for

Content creators, social media managers, and marketers who need consistent, high-quality video content without manual production work. Perfect for motivational content, storytelling videos, educational snippets, and brand campaigns.

### How it works

The workflow uses a form trigger to collect the video theme, setting, and style preferences. ChatGPT generates cohesive scripts and image prompts, while Google Gemini creates themed background images and OpenAI TTS produces the narrator audio. Background music is sourced from Openverse for CC-licensed tracks. All assets are uploaded to the JsonCut API, which composes the final video with synchronized overlays, transitions, and professional audio mixing. Results are stored in NocoDB for management.

### How to set up

1. **JsonCut API:** Sign up at jsoncut.com and create an API key at app.jsoncut.com. Configure an HTTP Header Auth credential in n8n with the header name x-api-key (see the sketch at the end of this description).
2. **OpenAI API:** Set up credentials for script generation and text-to-speech.
3. **Google Gemini API:** Configure access for Imagen 4.0 image generation.
4. **NocoDB (optional):** Set up an instance for video storage and configure database credentials.

### Requirements

- JsonCut free account with API key
- OpenAI API access for GPT and TTS
- Google Gemini API for image generation
- NocoDB (optional) for result storage

### How to customize the workflow

This template is designed as a foundation for larger automation systems. The modular structure allows easy modification of AI prompts for different content niches (business, wellness, education), replacement of the form trigger with RSS feeds or database triggers for automated content generation, integration with social media APIs for direct publishing, and customization of visual branding through the JsonCut configuration. The workflow can be extended for bulk processing, A/B testing multiple variations, or integration with existing content management systems. Sticky notes throughout the workflow provide detailed guidance for common customizations and scaling options.
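If you want to sanity-check your JsonCut credential outside n8n, a request carrying the same x-api-key header that the HTTP Header Auth credential sets might look roughly like this. The URL path and request body below are placeholders, not documented JsonCut routes; consult the JsonCut docs for the real endpoints and job schema.

```javascript
// Rough illustration of authenticating against the JsonCut API with the
// same x-api-key header the n8n HTTP Header Auth credential would set.
// The URL and body are placeholders; check jsoncut.com docs for the
// actual endpoints and video composition schema.
const API_KEY = process.env.JSONCUT_API_KEY;

const response = await fetch('https://api.jsoncut.com/v1/jobs', { // placeholder URL
  method: 'POST',
  headers: {
    'x-api-key': API_KEY, // header name from the setup step above
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ /* video composition config goes here */ }),
});

console.log(response.status, await response.text());
```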
by Jan Zaiser
Your inbox is overflowing with daily newsletters: Public Affairs, ESG, Legal, Finance, you name it. You want to stay informed, but reading 10 emails every morning? Impossible. What if you could get one single digest summarizing everything that matters, automatically?

❌ No more copy-pasting text into ChatGPT
❌ No more scrolling through endless email threads
✅ Just one smart, structured daily briefing in your inbox

### Who Is This For

- **Public Affairs Teams:** Stay ahead of political and regulatory updates without drowning in emails.
- **Executives & Analysts:** Get daily summaries of key insights from multiple newsletters.
- **Marketing, Legal, or ESG Departments:** Repurpose this workflow for your own content sources.

### How It Works

1. Gmail collects all newsletters from the day (based on sender or label).
2. HTML noise and formatting are stripped automatically.
3. Long texts are split into chunks and logged in Google Sheets (a sketch of steps 2-3 appears at the end of this description).
4. An AI Agent (Gemini or OpenAI) summarizes all content into one clean daily digest.
5. The workflow structures the summary into an HTML email and sends it to your chosen recipients.

### Setup Guide

- You'll need Gmail and Google Sheets credentials.
- Add your own AI model (e.g., Gemini or OpenAI) with an API key.
- Adjust the prompt inside the "Public Affairs Consultant" node to fit your topic (e.g., Legal, Finance, ESG, Marketing).
- Customize the email subject and design inside the "Structure HTML-Mail" node.
- Optional: Use Memory3 to let the AI learn your preferred tone and style over time.

### Cost & Runtime

Runs once per day. Typical cost: ~$0.10-0.30 per run (depending on model and input length). Average runtime: under 2 minutes.
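A minimal sketch of the HTML-stripping and chunking steps as they might look in a Code node. The 8000-character chunk size and the `html` field name are illustrative assumptions; tune them to your model's context window and your Gmail node's output.

```javascript
// Minimal sketch of the HTML-stripping and chunking steps. The chunk
// size and the `html` field name are assumptions.
const CHUNK_SIZE = 8000;
const output = [];

for (const item of $input.all()) {
  const text = (item.json.html || '')
    .replace(/<style[\s\S]*?<\/style>/gi, ' ') // drop embedded CSS
    .replace(/<[^>]+>/g, ' ')                  // strip remaining tags
    .replace(/&nbsp;/g, ' ')
    .replace(/\s+/g, ' ')                      // collapse whitespace
    .trim();

  for (let i = 0; i < text.length; i += CHUNK_SIZE) {
    output.push({ json: { chunk: text.slice(i, i + CHUNK_SIZE) } });
  }
}

return output;
```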
by Atta
This workflow automates brand monitoring on X by analyzing both the text and the images in posts. It uses multi-modal AI to score brand relevance, filters out noise, logs important mentions in Airtable, and sends real-time alerts to a Telegram group for high-priority posts.

### What it does

Traditional brand monitoring tools often miss the most authentic user content because they only track text. They can't "see" your logo in a photo or your product featured in a video without a direct keyword mention. This workflow acts as an AI agent that overcomes this blind spot. It finds mentions of your brand on X and then uses Google Gemini's multi-modal capabilities to perform a comprehensive analysis of both the text and any attached images. This allows it to understand the full context of a mention, score its relevance to your brand, and take the appropriate action, creating a powerful "visual intelligence" system.

### How it works

The workflow runs on a schedule to find, analyze, and triage brand mentions.

1. **Get New Tweets:** The workflow begins by using an Apify actor to scrape X for recent posts based on a defined set of search terms (e.g., Tesla OR $TSLA). It then filters these results to find unique mentions not already processed.
2. **Check for Duplicates:** It cross-references each found tweet with an Airtable base to ensure it hasn't been analyzed before, preventing duplicate work.
3. **Analyze Post Content:** For each new, unique post, the workflow performs two parallel analyses using Google Gemini. The AI examines the images in the post to describe the scene, identify logos or products, and determine the visual mood, while a separate AI call analyzes the text of the post to understand its context and sentiment.
4. **Final Relevance Check:** A "Head Strategist" AI node receives the outputs from both the visual and text analyses. It synthesizes this information to assign a final brand relevance score from 1 to 10.
5. **Triage and Action:** Based on this score, the workflow automatically triages the post (a sketch of this logic appears at the end of this description):
   - **High relevance (score > 7):** The post is logged in the Airtable base, and an instant, detailed alert is sent to a Telegram monitoring group.
   - **Medium relevance (score 4-7):** The post is quietly logged in Airtable for later strategic review.
   - **Low relevance (score < 4):** The post is ignored, effectively filtering out noise.

### Setup Instructions

To get this workflow running, you will need to configure your Airtable base and provide credentials for Apify, Google, and Telegram.

#### Required Credentials

- **Apify:** You will need an Apify API Token to run the X scraper.
- **Airtable:** You will need Airtable API credentials to connect to your base.
- **Google AI:** You will need credentials for the Google AI APIs to use the Gemini models.
- **Telegram:** You will need a Bot Token and the Chat ID for the channel where you want to receive high-relevance alerts.

#### Step-by-Step Configuration

1. **Set up your Airtable base:** Before configuring the workflow, create a new table in your Airtable base. For the workflow to function correctly, this table must contain fields to store the analysis results.
   Create fields with the following names: postId, postURL, postText, postDateCreated, authorUsername, authorName, sentiment, relevanceScore, relevanceReasoning, mediaPhotosAnalysis, and status. Once the table is created, have your Base ID and Table ID ready to use in the Config node.
2. **Edit the Config node:** The majority of the setup is handled in the first Config node. Click on it and edit the following parameters in the "Expressions" tab:
   - **searchTerms:** Replace the example with the keywords, hashtags, and accounts you want to monitor. The field supports advanced search operators for complex queries; for a full list of available parameters, see the Twitter Advanced Search documentation.
   - **airtableBaseId:** Paste your Airtable Base ID here.
   - **airtableTableId:** Paste your Airtable Table ID here.
   - **lang:** Set the two-letter language code for the posts you want to find (e.g., "en" for English).
   - **min_faves:** Set the minimum number of "favorites" a post should have to be considered.
   - **tweetsToScrape:** Define the maximum number of posts the scraper should find in each run.
   - **actorId:** This is the specific Apify actor for scraping X. You can leave this as is unless you intend to use a different one.
3. **Configure the Telegram node:** In the final node, "Send High Relevance Posts to Monitoring Group", manually set the destination for the alerts by entering the Chat ID for your Telegram group or channel.

### How to Adapt the Template

This workflow is a powerful framework that can be adapted for various monitoring needs.

- **Change the Source:** Replace the Apify node with a different trigger or data source. You could monitor Reddit, specific RSS feeds, or a news API for mentions.
- **Customize the AI Logic:** The core of this workflow is in the AI prompts. You can edit the prompts in the Google Gemini nodes to change the analysis criteria. For example, you could instruct the AI to check for specific competitor logos, analyze the sentiment of comments, or identify if the post is from an influential account.
- **Modify the Scoring:** Adjust the logic in the "Switch" node to change the thresholds for what constitutes a high, medium, or low-relevance post to better fit your brand's needs.
- **Change the Actions:** Replace the Telegram node with a different action. Instead of sending an alert, you could create a ticket in a customer support system like Zendesk or Jira, send a summary email to your marketing team, or add the post to a content curation tool or a social media management platform.
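For illustration, the triage branching the template implements with a Switch node could be expressed in a Code node like this. The `relevanceScore` field name follows the Airtable schema above; the route labels are assumptions.

```javascript
// Illustrative version of the triage logic handled by the Switch node.
// relevanceScore matches the Airtable schema listed above; the route
// labels are assumptions.
const results = $input.all().map((item) => {
  const score = item.json.relevanceScore;

  let route;
  if (score > 7) {
    route = 'high';   // log to Airtable + Telegram alert
  } else if (score >= 4) {
    route = 'medium'; // quietly log to Airtable
  } else {
    route = 'low';    // ignore as noise
  }

  return { json: { ...item.json, route } };
});

return results;
```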
by Robin Geuens
### Overview

Get a weekly report on website traffic driven by large language models (LLMs) such as ChatGPT, Perplexity, and Gemini. This workflow helps you track how these tools bring visitors to your site. A weekly snapshot can guide better content and marketing decisions.

### How it works

1. The trigger runs every Monday.
2. Pull the number of sessions on your website by source/medium from Google Analytics.
3. The Code node filters referral traffic from AI providers like ChatGPT, Perplexity, and Gemini using the following regex (a filtering sketch appears at the end of this description):

   ```
   /^.*openai.*|.*copilot.*|.*chatgpt.*|.*gemini.*|.*gpt.*|.*neeva.*|.*writesonic.*|.*nimble.*|.*outrider.*|.*perplexity.*|.*google.bard.*|.*bard.google.*|.*bard.*|.*edgeservices.*|.*astastic.*|.*copy.ai.*|.*bnngpt.*|.*gemini.google.*$/i
   ```

4. Combine the filtered sessions into one list so they can be processed by an LLM.
5. Generate a short report using the filtered data.
6. Email the report to yourself.

### Setup

1. Get or connect your OpenAI API key and set up your OpenAI credentials in n8n.
2. Enable Google Analytics and Gmail API access in the Google Cloud Console.
3. Set up your Google Analytics and Gmail credentials in n8n. If you're using the cloud version of n8n, you can log in with your Google account to connect them easily.
4. In the Google Analytics node, add your credentials and select the property for the website you're working with. Alternatively, use your property ID, found in the Google Analytics admin panel under Property > Property Details (the property ID is shown in the top-right corner); add it to the property field.
5. Under Metrics, select the metric you want to measure. This workflow is configured to use sessions, but you can choose others.
6. Leave the dimension as-is, since the source/medium dimension is needed to filter LLMs.
7. (Optional) To expand the list of LLMs being filtered, adjust the regex in the Code node by copying one of the existing patterns and modifying it, e.g. `|.*example.*|`.
8. The LLM node creates a basic report. If you'd like a more detailed version, adjust the system prompt to specify the details or formatting you want.
9. Add your email address to the Gmail node so the report is delivered to your inbox.

### Requirements

- OpenAI API key for report generation
- Google Analytics API enabled in Google Cloud Console
- Gmail API enabled in Google Cloud Console

### Customizing this workflow

- The regex used to filter LLM referral traffic can be expanded to include specific websites.
- The system prompt in the AI node can be customized to create a more detailed or styled report.
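A minimal sketch of the Code node's filtering step, assuming each item carries a `sessionSourceMedium` dimension and a `sessions` metric from the Google Analytics node; the field names are assumptions and may differ in your setup.

```javascript
// Minimal sketch of the filtering Code node: keep only GA rows whose
// source/medium matches an LLM referrer. sessionSourceMedium and
// sessions are assumed field names; match them to your GA node output.
const llmPattern = /^.*openai.*|.*copilot.*|.*chatgpt.*|.*gemini.*|.*gpt.*|.*neeva.*|.*writesonic.*|.*nimble.*|.*outrider.*|.*perplexity.*|.*google.bard.*|.*bard.google.*|.*bard.*|.*edgeservices.*|.*astastic.*|.*copy.ai.*|.*bnngpt.*|.*gemini.google.*$/i;

return $input.all()
  .filter((item) => llmPattern.test(item.json.sessionSourceMedium || ''))
  .map((item) => ({
    json: {
      source: item.json.sessionSourceMedium,
      sessions: item.json.sessions,
    },
  }));
```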
by Jimleuk
This n8n workflow assists property managers and surveyors by reducing the time and effort it takes to complete property inventory surveys. In such surveys, articles and goods within a property may need to be captured and reported as a matter of record. This can take a sizable amount of time if the property or the number of items is big enough. Our solution is to delegate this task to a capable AI Agent who can identify and fill out the details of each item automatically.

### How it works

1. An Airtable base is used to capture just the image of an item within the property.
2. Our workflow, monitoring this Airtable base, sends the photo to an AI image recognition model to describe the item for the purpose of identification.
3. Our AI agent uses this description and the help of Google's reverse image search in an attempt to find an online product page for the item (a sketch of this lookup appears at the end of this description).
4. If found, the product page is scraped for the item's specifications, which are then used to fill out the rest of the details of the item in our Airtable.

### Requirements

- Airtable for capturing photos and product information
- OpenAI account for the image recognition service and the agent's AI
- SerpAPI account for Google reverse image search
- Firecrawl.dev account for web scraping

### Customising this workflow

Try building an internal inventory database to query and integrate into the workflow. This could save on costs by avoiding a fresh lookup each time for common items.
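A rough sketch of the reverse image lookup via SerpAPI's google_reverse_image engine. The result field (`image_results`) and the exact parameters should be verified against SerpAPI's documentation and a live response.

```javascript
// Rough sketch of the reverse image lookup via SerpAPI. The imageUrl
// would come from the Airtable attachment field; the image_results
// field name is an assumption to verify against a live response.
const imageUrl = 'https://example.com/item-photo.jpg'; // placeholder

const params = new URLSearchParams({
  engine: 'google_reverse_image',
  image_url: imageUrl,
  api_key: process.env.SERPAPI_API_KEY,
});

const res = await fetch(`https://serpapi.com/search.json?${params}`);
const data = await res.json();

const firstHit = (data.image_results || [])[0];
console.log(firstHit?.link, firstHit?.title);
```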
by Abrar Sami
## Turn Reddit Questions into SEO Articles Automatically

This workflow takes real user questions from Reddit and transforms them into fully structured blog posts — title, intro, steps, and conclusion — using AI.

### How it works

1. Manually triggered when you want to run it.
2. Scrapes the latest posts from a specific subreddit (e.g. r/n8n).
3. Filters only posts that are real questions, based on keywords like "how," "what," "why" (a sketch of this filter appears at the end of this description).
4. Logs relevant questions into a Google Sheet as raw input.
5. Enhances each question using AI (rephrases, creates a clean title and slug).
6. Generates full-length blog content: ✏️ intro paragraph, ✅ step-by-step guide, 🧠 clear conclusion.
7. Saves the final blog content to a second Google Sheet for publishing.

### Set up steps

You'll need access to:

- Reddit API (OAuth)
- OpenAI API
- Google Sheets

It takes around 15-20 minutes to connect all the credentials and tweak the prompts. Customize the subreddit or topic focus by changing the Reddit node config.

Perfect for content teams who want to scale content output using real community pain points — without ever starting from a blank page.
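A minimal sketch of the question filter as a Code node. The keyword list mirrors the description ("how," "what," "why") plus a couple of extras; the `title` field name follows the Reddit node's typical post output and is an assumption.

```javascript
// Minimal sketch of the question filter: keep Reddit posts whose title
// reads like a real question. The keyword list extends the description's
// "how/what/why" with a few extras; trim to taste.
const QUESTION_WORDS = /\b(how|what|why|can|should|is there)\b/i;

return $input.all().filter((item) => {
  const title = item.json.title || '';
  return QUESTION_WORDS.test(title) || title.trim().endsWith('?');
});
```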
by Mario
### Purpose

This workflow adds the capability to build a RAG on living data. In this case, Notion is used as a knowledge base. Whenever a page is updated, the embeddings get upserted in a Supabase Vector Store. It can also be fairly easily adapted to PGVector, Pinecone, or Qdrant, by using a custom HTTP request for the latter two.

### How it works

1. A trigger checks every minute for changes in the Notion database. The manual polling approach improves accuracy and prevents changes from being lost between cached polling intervals.
2. Every updated page is then processed sequentially.
3. The vector database is searched using the Notion page ID stored in the metadata of each embedding. If old entries exist, they are deleted (a sketch of this cleanup appears at the end of this description).
4. All blocks of the Notion database page are retrieved and combined into a single string.
5. The content is embedded and split into chunks if necessary. Metadata, including the Notion page ID, is added during storage for future reference.
6. A simple question-and-answer chain enables users to ask questions about the embedded content through the integrated chat function.

### Prerequisites

- To set up a new Vector Store in Supabase, follow this guide.
- Prepare a simple database in Notion with each database page containing at least a title and some content in the blocks section. You can of course also connect this to an existing database of your choice.

### Setup

1. Select your credentials in the nodes which require them.
2. If you are on an n8n cloud plan, switch to the native Notion Trigger by activating it and deactivating the Schedule Trigger along with its subsequent Notion node.
3. Choose your Notion database in the first node related to Notion.
4. Adjust the chunk size and overlap in the Token Splitter to your preference.
5. Activate the workflow.

### How to use

Populate your Notion database with useful information and use the chat mode of this workflow to ask questions about it. Updates to a Notion page should quickly reflect in future conversations.
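A rough sketch of the stale-entry cleanup from step 3, using supabase-js. The `documents` table and the `notion_page_id` metadata key follow the common Supabase vector store setup and are assumptions; match them to your own schema.

```javascript
// Rough sketch of deleting stale embeddings for an updated Notion page.
// Table name and metadata key are assumptions based on the standard
// Supabase vector store schema.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY);

async function deleteStaleEmbeddings(notionPageId) {
  const { error } = await supabase
    .from('documents')
    .delete()
    .filter('metadata->>notion_page_id', 'eq', notionPageId);

  if (error) throw error; // re-upsert fresh chunks after this succeeds
}

await deleteStaleEmbeddings('your-notion-page-id'); // placeholder ID
```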
by n8n Team
This workflow checks if a task in Todoist has a specific label and, based on that, creates a new database page in Notion.

### Prerequisites

- Todoist account and Todoist credentials
- Notion account and Notion credentials

### How it works

1. To start the workflow, add a task to Todoist and mark it with a label, e.g. "send-to-n8n".
2. Wait a maximum of 30 seconds.
3. The Todoist node identifies the tasks marked as "send-to-n8n" (a sketch of this check appears below).
4. The Notion node creates a new Notion database page.
5. Notion now has a new task with the same name as in Todoist.
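A minimal sketch of the label check in a Code node. The `labels` array-of-strings shape matches Todoist's REST v2 task format; verify against your node's actual output.

```javascript
// Minimal sketch of the label check: keep only Todoist tasks carrying
// the "send-to-n8n" label. The labels field shape follows Todoist's
// REST v2 format and is worth verifying against your node output.
const TARGET_LABEL = 'send-to-n8n';

return $input.all().filter((item) =>
  (item.json.labels || []).includes(TARGET_LABEL)
);
```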
by Jimleuk
This n8n template is designed to assist and improve customer support team capacity by automating the resolution of long-lived and forgotten JIRA issues.

### How it works

1. A Schedule Trigger runs daily to check for long-lived unresolved issues and imports them into the workflow.
2. Each issue is handled as a separate subworkflow by using an Execute Workflow node. This allows parallel processing.
3. A report is generated from the issue using its comment history, allowing the issue to be classified by AI - determining the state and progress of the issue.
4. If determined to be resolved, sentiment analysis is performed to track customer satisfaction. If negative, a Slack message is sent to escalate; otherwise the issue is closed automatically.
5. If no response has been initiated, an AI agent will attempt to search for and resolve the issue itself, using similar resolved issues or the Notion database. If a solution is found, it is posted to the issue, which is then closed.
6. If the issue is blocked and waiting for responses, a reminder message is added.

### How to use

- This template searches for JIRA issues which are older than 7 days and not in the "Done" status (a sketch of the query appears at the end of this description). Ensure there are some issues that meet these criteria, or adjust the search query to suit.
- Works best if you frequently have long-lived issues that need resolving.
- Ensure the Notion tool is configured so as not to read documents you didn't intend it to, i.e. private and/or internal documentation.

### Requirements

- JIRA for issue management
- OpenAI for the LLM
- Slack for notifications

### Customising this workflow

Why not try classifying issues as they are created? One use case may be quality control, such as ensuring reporting criteria are adhered to, summarising and rephrasing issues for easier reading, or adjusting priority.
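The search criterion above maps to a JQL query along these lines. Whether the template filters on created or updated date is an assumption; adjust it to match your JIRA setup.

```javascript
// Sketch of the JQL behind "older than 7 days and not Done". The
// created-vs-updated choice is an assumption.
const jql = 'created <= -7d AND status != Done ORDER BY created ASC';

// e.g. passed to JIRA's search API:
// GET /rest/api/2/search?jql=<encoded jql>
console.log(encodeURIComponent(jql));
```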
by Harsh Maniya
## 🤖 Universal E-Commerce AI Assistant (Shopify, WooCommerce & RAG)

This powerful n8n workflow deploys a sophisticated, multi-talented AI chatbot designed to streamline your e-commerce and customer support operations. The AI assistant can intelligently understand user queries and route them to the correct specialized agent, whether it's for Shopify, WooCommerce, or general knowledge questions answered by a Retrieval-Augmented Generation (RAG) system.

This template automates responses to a wide range of inquiries, from checking Shopify order statuses with GraphQL to fetching product lists from WooCommerce, and even answering general questions by looking up information in a Pinecone vector database.

### How It Works ⚙️

The workflow operates in a series of logical steps, starting from the moment a user sends a message.

1. 💬 **Chat Trigger:** The workflow activates when a user sends a message in the n8n chat interface. It captures the user's input and a unique session ID to track the conversation.
2. 🧠 **Intelligent Routing:** The user's query is first sent to a Router Agent powered by GPT-4o-mini. This agent's sole purpose is to classify the intent of the message and output one of three keywords: SHOPIFY, WOOCOMMERCE, or None of them.
3. 🔀 **Conditional Branching:** Based on the Router's output, a series of IF nodes direct the conversation down one of three paths: the General Queries path, the Shopify path, or the WooCommerce path.
4. 📚 **General Queries (RAG):** If the query is not about e-commerce, it's handled by a RAG agent.
   - **Embedding:** The user's question is converted into a vector embedding using AWS Bedrock.
   - **Retrieval:** The workflow searches a Pinecone Vector Store to find the most relevant information from your knowledge base.
   - **Generation:** A GPT-4o-mini agent receives the context from Pinecone and generates a comprehensive, helpful answer.
5. 🛍️ **E-Commerce Specialists:** If the query is about Shopify or WooCommerce, it's passed to a dedicated agent.
   - **Shopify Agent:** This agent uses Google Gemini and has a suite of tools to manage Shopify tasks. It can Get Order info, Fetch All Products, or run complex queries using the powerful GraphQL tool.
   - **WooCommerce Agent:** This agent also uses Google Gemini and is equipped with tools to Fetch Order Details and Fetch All Products from a WooCommerce store.
6. 🗣️ **Conversation Memory:** Each agent (Router, General, Shopify, WooCommerce) is connected to its own Memory node. This allows the chatbot to remember previous parts of the conversation for a more natural and context-aware interaction.
7. 🏁 **Merge & Respond:** All three paths converge at a final Merge node. This ensures that no matter which agent handled the request, the final answer is streamlined into a single output and sent back to the user in the chat.

### Nodes Used 🔗

- **Triggers:** Chat Trigger (starts the workflow when a chat message is received).
- **AI & Agents:** AI Agent (four separate agents for Routing, Shopify, WooCommerce, and General Queries); OpenAI Chat Model (GPT-4o-mini for the Router and General Queries agents); Google Gemini Chat Model (for the Shopify and WooCommerce agents).
- **Tools & Data:** Shopify Tool (products and order information from Shopify); WooCommerce Tool (products and order information from WooCommerce); GraphQL Tool (advanced, custom queries to the Shopify API); Pinecone Vector Store (context retrieval for the RAG agent); AWS Bedrock Embeddings (vector embeddings for Pinecone).
- **Logic & Memory:** IF node (conditional routing); Merge node (consolidates the different branches before ending);
  Window Buffer Memory (four nodes providing conversational memory to each agent).

### Setup Guide 🛠️

To use this workflow, you'll need to configure several nodes with your own credentials and settings.

1. **AI Model Credentials**
   - **OpenAI:** Create an API key in your OpenAI Platform dashboard. Add this credential to the Router Model and GPT-4o-mini nodes.
   - **Google Gemini:** Create an API key in your Google AI Studio dashboard. Add this credential to the Shopify Chat Model and WooCommerce Chat Model nodes.
2. **E-Commerce Platform Credentials**
   - **Shopify:** You will need a Shopify Access Token; follow the n8n documentation to generate one. Add the credential to the Fetch All Products and Get Order info nodes.
   - **WooCommerce:** Create API credentials from your WordPress dashboard. Add the credential to the Fetch All Products2 and Fetch Order Details nodes.
3. **RAG System Credentials (Pinecone & AWS)**
   - **Pinecone:** Sign up for a Pinecone account and create an API key, then add your Pinecone credentials in n8n. In the Pinecone Vector Store node, set the pineconeIndex to the name of your index. You must have a pre-existing index with data for the RAG to work.
   - **AWS:** Create an AWS account and an IAM user with programmatic access to Amazon Bedrock, then add your AWS credentials in n8n and select them in the AWS Bedrock Embeddings node.
4. **GraphQL Node Configuration**
   - In the GraphQL node, you must update the endpoint URL. Replace the placeholder https://{subdomain}.myshopify.com/admin/api/2025-04/graphql.json with your own Shopify store's GraphQL API endpoint (an example query appears below).
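For reference, a query like the ones the GraphQL tool can run against the Shopify Admin API might look like the following. The store subdomain, access token, and order name are placeholders; the endpoint matches the 2025-04 placeholder named above.

```javascript
// Sketch of a Shopify Admin GraphQL call like the one the GraphQL tool
// enables: look up an order by name and read its fulfillment status.
// Store URL, token, and order name are placeholders.
const query = `
  query ($q: String!) {
    orders(first: 1, query: $q) {
      edges {
        node {
          id
          name
          displayFulfillmentStatus
        }
      }
    }
  }
`;

const res = await fetch(
  'https://your-store.myshopify.com/admin/api/2025-04/graphql.json',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Shopify-Access-Token': process.env.SHOPIFY_TOKEN,
    },
    body: JSON.stringify({ query, variables: { q: 'name:#1001' } }),
  }
);

console.log(JSON.stringify(await res.json(), null, 2));
```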