by Jimleuk
This n8n workflow builds another example of a knowledgebase assistant, but demonstrates how a more deliberate and targeted approach to ingesting the data can produce much better results for your chatbot. In this example, a government tax code policy document is used. While we could split the document into chunks by content length, we would often lose the context of chapters and sections, which may be required by the user. Our approach is therefore to first split the document into chapters and sections before importing it into our vector store. Additionally, using metadata correctly is key to allowing filtering and scoped queries.

**Example**

Human: "Tell me about what the tax code says about cargo for intentional commerce?"

AI: "Section 11.25 of the Texas Property Tax Code pertains to "MARINE CARGO CONTAINERS USED EXCLUSIVELY IN INTERNATIONAL COMMERCE." In this section, a person who is a citizen of a foreign country or an en..."

**How it works**

- The tax code policy document is downloaded as a zip file from the government website and its pages are extracted as separate chapters.
- Each chapter is then parsed and split into its sections using data manipulation expressions.
- Each section is then inserted into our Qdrant vector store, tagged with its source, chapter, and section numbers as metadata.
- When our AI Agent needs to retrieve data from our vector store, we use a custom workflow tool to perform the query against Qdrant. Because we're relying on Qdrant's advanced filtering capabilities, we perform the search using the Qdrant API rather than the Qdrant node.
- When the AI Agent needs to pull full wording or extracts, we can use Qdrant's scroll API and metadata filtering to do so (see the sketch below). This makes Qdrant behave like a key-value store for our document.

**Requirements**

- A Qdrant instance is required for the vector store, specifically for its filtering functionality.
- A Mistral.ai account for embeddings and AI models.

**Customising this workflow**

- Depending on your use case, consider returning actual PDF pages (or links) to the user for extra confirmation and to build trust.
- Not using Mistral? You can replace it, but be sure to match the distance metric and dimension size of the Qdrant collection to your chosen embedding model.
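Below is a minimal sketch of the kind of scroll request the custom workflow tool can make. The collection name (tax_code), payload keys (metadata.chapter, metadata.section), and filter values are assumptions for illustration; match them to whatever you set during ingestion.

```javascript
// Minimal sketch of a Qdrant scroll call with metadata filtering (e.g. in a Code node).
const response = await fetch("https://YOUR_QDRANT_HOST/collections/tax_code/points/scroll", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "api-key": "YOUR_QDRANT_API_KEY",
  },
  body: JSON.stringify({
    filter: {
      must: [
        { key: "metadata.chapter", match: { value: "11" } },   // assumed payload keys —
        { key: "metadata.section", match: { value: "11.25" } } // use your ingestion-time names
      ],
    },
    limit: 10,
    with_payload: true, // return the stored section text
    with_vector: false, // no embeddings needed for a lookup
  }),
});
const { result } = await response.json();
// result.points[].payload holds the section text — effectively a key-value lookup.
```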
by Anurag Srivastava
🧠 AI Prompt Generator Workflow – n8n Documentation

**Who is this for?**

This workflow is for AI builders, prompt engineers, developers, marketers, and no-code creators who want to convert rough user input into structured, high-quality prompts for LLMs. It’s especially useful for tools that rely on precision prompting and want to automate the discovery of intent and constraints.

**What problem is this workflow solving? / Use case**

Many users struggle to write effective prompts due to vague ideas or unclear formatting needs. This workflow:
- Collects structured user input.
- Dynamically generates clarifying questions.
- Returns a well-formatted AI prompt based on the user's intent and context.

This ensures the generated prompt is useful for downstream AI agents without requiring technical understanding from the end user.

**What this workflow does**

1. Start with a branded form UI. The user is shown a styled form with questions like: What do you want to build? What tools can you access? What input can be expected? What output do you expect?
2. Analyze and generate relevant follow-up questions. The workflow sends the user's answers to Google Gemini (via LangChain), which outputs 1–3 clarifying questions. These questions are parsed into a dynamic form (see the parsing sketch below).
3. Loop through and collect follow-up answers. Each follow-up question is shown in a form one at a time to capture additional context.
4. Merge all inputs. The base intent and follow-up responses are merged into a single context block.
5. Generate a final AI-ready prompt. The prompt generator node formats everything into a clean, six-section structure: <constraints>, <role>, <inputs>, <tools>, <instructions>, <conclusions>.
6. Display the final result. The finished prompt is shown in a clean UI where users can easily copy and reuse it.

**Setup**

- Credentials Required: Google Gemini (PaLM) API credentials (already integrated as Google Gemini(PaLM) Api account 2).
- Form Trigger: Ensure the On form submission trigger is exposed via a webhook or public endpoint (e.g. using ngrok or a deployed server).
- Styling: Custom CSS is included in all form nodes for a beautiful UI. You can modify this to match your branding.
- Environment: This workflow is compatible with self-hosted n8n or n8n.cloud. Webhooks must be accessible to users who will fill out the form.

**How to customize this workflow to your needs**

- **Change the base questions:** Update the BaseQuestions form node to add or remove fields depending on your use case.
- **Modify Gemini prompts:** You can edit the system prompt inside PromptGenerator to change tone, output structure, or AI instructions.
- **Change prompt formatting:** If you use a different AI agent (like GPT, Claude, or Mistral), adjust the section labels and formatting to suit that agent’s expected input.
- **Send results elsewhere:** Add integration nodes after PromptGenerator, such as: Google Docs / Notion (to log prompts), Gmail / Slack (to notify your team), Zapier / Make (to push to other automation flows).
- **Skip follow-up questions (optional):** If your base form collects all needed info, you can bypass the RelevantQuestions form section by modifying the conditional logic.

**Example Output Prompt (Structure)**

<role>
You are an AI assistant that converts videos into LinkedIn posts with a witty tone.
</role>

<inputs>
- A short video (max 5 minutes)
- Desired tone: witty
- Style: both summary and quotes
- Audience: general network
</inputs>

<tools>
You do not have access to APIs or web search.
</tools>

<instructions>
1. Parse transcript.
2. Extract insights and quotes.
3. Write an engaging, witty LinkedIn post under 3000 characters.
</instructions>

<constraints>
Avoid technical jargon. No generic intros. Make it platform-native.
</constraints>

<conclusions>
Return a LinkedIn-ready post that starts with a hook and ends with hashtags.
</conclusions>
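As a rough illustration of step 2, here is how the clarifying questions might be parsed inside a Code node. The property path ($json.output) and the output shape are assumptions; adjust them to match your Gemini output parser and form configuration.

```javascript
// Hypothetical parsing step: turn the model's JSON array of questions into
// one item per question, ready to feed a dynamic follow-up form.
const questions = JSON.parse($json.output ?? "[]"); // e.g. ["What tone should the post use?"]

return questions.map((question) => ({
  json: {
    fieldLabel: question,     // shown to the user one at a time
    fieldType: "textarea",    // assumed field type
    requiredField: true,
  },
}));
```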
by Jez
**Summary**

This n8n workflow implements an AI-powered "Local Event Finder" agent. It takes user criteria (like event type, city, date, and interests), uses a suite of search tools (Brave Web Search, Brave Local Search, Google Gemini Search) and a web scraper (Jina AI) to find relevant events, and returns formatted details. The entire agent is exposed as a single, easy-to-use MCP (Model Context Protocol) tool, making it simple to integrate into other workflows or applications.

This template cleverly combines the MCP server endpoint and the AI agent logic into a single n8n workflow file for ease of import and management.

**Key Features**

- **Intelligent Multi-Tool Search:** Dynamically utilizes web search, precise local search, and advanced Gemini semantic search to find events.
- **Detailed Information via Web Scraping:** Employs Jina AI to extract comprehensive details directly from event web pages.
- **Simplified MCP Tool Exposure:** Makes the complex event-finding logic available as a single, callable tool for other MCP-compatible clients (e.g., Roo Code, Cline, other n8n workflows).
- **Customizable AI Behavior:** The core AI agent's behavior, tool usage strategy, and output formatting can be tailored by modifying its System Prompt.
- **Modular Design:** Uses distinct nodes for LLM, memory, and each external tool, allowing for easier modification or extension.

**Benefits**

- **Simplifies Client-Side Integration:** Offloads the complexity of event searching and data extraction from client applications.
- **Provides Richer Event Data:** Goes beyond simple search links to extract and format key event details.
- **Flexible & Adaptable:** Can be adjusted to various event search needs and can incorporate new tools or data sources.
- **Efficient Processing:** Leverages specialized tools for different aspects of the search process.

**Nodes Used**

- MCP Trigger
- Tool Workflow
- Execute Workflow Trigger
- AI Agent
- Google Gemini Chat Model (ChatGoogleGenerativeAI)
- Simple Memory (Window Buffer Memory)
- MCP Client (for Brave Search tools via Smithery)
- Google Gemini Search Tool
- Jina AI Tool

**Prerequisites**

- An active n8n instance.
- **Google AI API Key:** For the Gemini LLM (Google Gemini Chat Model node) and the Google Gemini Search Tool. Ensure your key is enabled for these services.
- **Jina AI API Key:** For the jina_ai_web_page_scraper node. A free tier is often available.
- **Access to a Brave Search MCP Provider (Optional but Recommended):** This template uses MCP Client nodes configured for Brave Search via a provider like Smithery. You'll need an account/API key for your chosen Brave Search MCP provider to configure the smithery brave search credential. Alternatively, you could adapt these to call the Brave Search API directly if you manage your own access, or replace them with other search tools.

**Setup Instructions**

1. Import Workflow: Download the JSON file for this template and import it into your n8n instance.
2. Configure Credentials:
   - Google Gemini LLM: Locate the Google Gemini Chat Model node. Select or create a "Google Gemini API" credential (named Google Gemini Context7 in the template) using your Google AI API Key.
   - Google Gemini Search Tool: Locate the google_gemini_event_search node. Select or create a "Gemini API" credential (named Gemini Credentials account in the template) using your Google AI API Key (ensure it's enabled for Search/Vertex AI).
   - Jina AI Web Scraper: Locate the jina_ai_web_page_scraper node. Select or create a "Jina AI API" credential (named Jina AI account in the template) using your Jina AI API Key.
   - Brave Search (via MCP): You'll need an MCP Client HTTP API credential to connect to your Brave Search MCP provider (e.g., Smithery). Create a new "MCP Client HTTP API" credential in n8n and name it, for example, smithery brave search. Configure it with the Base URL and any required authentication (e.g., API key in headers) for your Brave Search MCP provider. Locate the brave_web_search and brave_local_search MCP Client nodes in the workflow and assign the smithery brave search (or your named credential) to both of these nodes.
3. Activate Workflow: Ensure the workflow is active.
4. Note MCP Trigger Path: Locate the local_event_finder (MCP Trigger) node. The Path field (e.g., 0ca88864-ec0a-4c27-a7ec-e28c5a900697) combined with your n8n webhook base URL forms the endpoint for client calls. Example Endpoint: YOUR_N8N_INSTANCE_URL/webhooks/PATH-TO-MCP-SERVER (see the client configuration sketch below).

**Customization**

- **AI Behavior:** Modify the "System Message" parameter within the event_finder_agent node to change the AI's persona, its strategy for using tools, or the desired output format.
- **LLM Model:** Swap the Google Gemini Chat Model node with another compatible LLM node (e.g., OpenAI Chat Model) if desired. You'll need to adjust credentials and potentially the system prompt.
- **Tools:** Add, remove, or replace tool nodes (e.g., use a different search provider, add a weather API tool) and update the event_finder_agent's system prompt and tool configuration accordingly.
- **Scraping Depth:** Be mindful of the jina_ai_web_page_scraper's usage due to potential timeouts. The system prompt already guides the LLM on this, but you can adjust its usage instructions.
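For reference, registering the tool in an MCP-capable client usually comes down to pointing the client at the endpoint above. The exact settings format varies by client; the snippet below shows the general shape used by Cline/Roo Code-style JSON settings files and is an assumption, not taken from the template.

```json
{
  "mcpServers": {
    "local_event_finder": {
      "url": "https://YOUR_N8N_INSTANCE_URL/webhooks/PATH-TO-MCP-SERVER"
    }
  }
}
```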
by InfraNodus
This template can be used to find the content gaps in PDF documents using the InfraNodus knowledge graph / GraphRAG text representation, and then generate ideas / questions / AI prompts that bridge those gaps by optimizing the knowledge graph's structure. Simply upload several PDF files (research papers, corporate or market reports, etc.) and generate an idea in seconds.

The template is useful for:
- generating ideas / questions for research
- generating content ideas based on competitors' discourse
- finding blind spots in any discourse and generating ideas that address them
- avoiding the generic bias of LLM models and focusing on what's important in your particular context

**What are Content Gaps and Knowledge Graphs?**

Knowledge graphs represent any text as a network: the main concepts are the nodes, and their co-occurrences are the connections between them. Based on this representation, we build a graph and apply network science metrics to rank the most important nodes (concepts) that serve as the crossroads of meaning, as well as the main topical clusters they connect. Naturally, some of the clusters will be disconnected and will have gaps between them. These are the topics (groups of concepts) that exist in this context (the documents you uploaded) but that are not very well connected. Addressing those gaps can help you see which groups of concepts you could connect with your own ideas. This is exactly what InfraNodus does: builds the structure, finds the gaps, then uses the built-in AI to generate research questions and ideas that bridge those gaps.

**How it works**

1. Step 1: First, you upload your PDF files using an online web form, which you can run from n8n or even make publicly available.
2. Steps 2-4: The documents are processed using the Code and PDF to Text nodes to extract plain text from them.
3. Step 5: This text is then sent to the InfraNodus GraphRAG node that creates a knowledge graph, identifies structural gaps in this graph, and then uses built-in AI to generate ideas or research questions / prompts (if you use the InfraNodus question module instead).
4. Step 6: The ideas are then shown to the user in the same web form. Optionally, you can hook this template into your own workflow and send the generated idea / question to your own AI model / agent for further processing.

If you'd like to sync this workflow to PDF files in a Google Drive folder, you can copy our Google Drive PDF processing workflow for n8n.

**How to use**

You need an InfraNodus GraphRAG API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key.
3. Add this key to the InfraNodus GraphRAG HTTP node(s) you use in this workflow (an illustrative request sketch appears below).

You do not need any OpenAI keys for this to work. Optionally, you can change the settings in Step 4 of this workflow and force it to always use the biggest gap it identifies.

**Requirements**

- An InfraNodus account and API key.
- Note: an OpenAI key is not required. You will have direct access to the InfraNodus AI with the API key.

**Customizing this workflow**

You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).
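The request below is purely illustrative: the endpoint path and body fields are placeholders, not the real InfraNodus API signature; copy the actual URL and parameters from the GraphRAG HTTP node in the template. What it does show accurately is the Bearer authorization header the setup step refers to.

```javascript
// Illustrative sketch only — endpoint path and body fields are placeholders.
// The real values are pre-configured in the template's InfraNodus GraphRAG HTTP node.
const response = await fetch("https://infranodus.com/api/YOUR_GRAPHRAG_ENDPOINT", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_INFRANODUS_API_KEY", // the Bearer key from step 2
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "pdf-content-gaps",   // placeholder graph/context name
    text: $json.extractedText,  // plain text produced by the PDF to Text step
  }),
});
const ideas = await response.json(); // graph structure, gaps, and generated ideas
```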
Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/

Also check the full tutorial with a conceptual explanation at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow

Also check out the video introduction to InfraNodus to better understand how knowledge graphs and content gaps work.

For support and help with this workflow, please contact us at https://support.noduslabs.com
by Trung Tran
🎧 IT Voice Support Automation Bot – Telegram Voice Message to JIRA Ticket with OpenAI Whisper

> Automatically process IT support requests submitted via Telegram voice messages by transcribing, extracting structured data, creating a JIRA ticket, and notifying relevant parties.

🧑💼 **Who’s it for**

- Internal teams that handle IT support but want to streamline voice-based requests.
- Employees who prefer using mobile/voice to report incidents or ask for support.
- Organizations aiming to integrate conversational AI into existing support workflows.

⚙️ **How it works / What it does**

1. A user sends a voice message to a Telegram bot.
2. The system checks whether it’s an audio message.
3. If valid, the audio is: downloaded, transcribed via OpenAI Whisper, and backed up to Google Drive.
4. The transcription and file metadata are merged.
5. The merged content is processed through an AI Agent (GPT) to extract structured request info.
6. A JIRA ticket is created using the extracted data.
7. The IT team is notified via Slack (or other channels).
8. The requester receives a Telegram confirmation message with the JIRA ticket link.
9. If the input is not audio, a polite rejection message is sent.

📌 **Key Features**

- Supports voice-based ticket creation
- Accurate transcription using Whisper
- Context-aware request parsing using GPT-4.1 mini
- Fully automated ticket creation in JIRA
- Notifies both IT and the original requester
- Cloud backup of original voice messages (Google Drive)

🛠️ **Setup Instructions**

Prerequisites

| Component | Required |
|----------|----------|
| Telegram Bot & API Key | ✅ |
| OpenAI Whisper / Transcription Model | ✅ |
| Google Drive Credentials (OAuth2) | ✅ |
| Google Sheets or other storage (optional) | ⬜ |
| JIRA Cloud API Access | ✅ |
| Slack Bot or Webhook | ✅ |

Workflow Steps

1. Telegram Voice Message Trigger: Starts the flow when a user sends a voice message.
2. Is Audio Message?: If false → reply "only voice is supported" (a minimal check sketch appears at the end of this section).
3. Download Audio: Download the .oga file from Telegram.
4. Transcribe Audio: Use OpenAI Whisper to get a text transcript.
5. Backup to Google Drive: Upload the original voice file with metadata.
6. Merge Results: Combine transcript and metadata.
7. Pre-process Output: Clean formatting before AI extraction.
8. Transcript Processing Agent: A GPT-based agent extracts: requester name and department; request title and description; priority and request type.
9. Submit JIRA Request Ticket: Create a ticket from the AI-extracted data.
10. Setup Slack / Email / Manual Steps: Optional internal routing or approvals.
11. Inform Reporter via Telegram: Sends a confirmation message with the JIRA ticket link.

🔧 **How to Customize**

- Replace JIRA with Zendesk, GitHub Issues, or other ticketing tools.
- Change Slack to Microsoft Teams or Email.
- Add Notion/Airtable logging.
- Enhance the agent to extract the department from the user ID or metadata.

📦 **Requirements**

| Integration | Notes |
|-------------|-------|
| Telegram Bot | Used for input/output |
| Google Drive | Audio backup |
| OpenAI GPT + Whisper | Transcript & Extraction |
| JIRA | Ticketing platform |
| Slack | Team notification |

Built with ❤️ using n8n
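As a small sketch of the "Is Audio Message?" step: Telegram marks voice notes with a voice object on the message, so the check can be as simple as the Code-node snippet below (property paths assume the standard Telegram Trigger output).

```javascript
// Minimal voice-message check, assuming the standard Telegram Trigger payload.
const message = $json.message ?? {};
const voice = message.voice; // present only for voice notes (.oga / audio-ogg)

return [{
  json: {
    isVoice: Boolean(voice?.file_id),
    fileId: voice?.file_id ?? null,    // used by the Download Audio step
    duration: voice?.duration ?? null, // seconds — handy for backup metadata
  },
}];
```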
by Nima Salimi
🔍 **Description**

This n8n workflow is a complete marketing automation system that connects to your CDP (Customer Data Platform), selects which flows to send, and delivers personalized emails using Brevo. It's modular and extensible — you can also add SMS, push notifications, Telegram messages, or other channels.

To build a full marketing automation system, you need four key components:
1. Workflow Automation – using n8n (this workflow)
2. CDP – store and manage user data (e.g., NocoDB, Metabase, Power BI, etc.)
3. Database – track transactions, templates, and send statuses (e.g., NocoDB)
4. BI / Analytics – monitor performance by flows, journeys, and sent events

This workflow represents the Workflow Automation layer. You can connect it to your own data stack or use the included example databases (cdp-ecrm, n8n-templates-ecrm, and n8n-transaction-ecrm) to get started quickly.

👤 **Who’s it for?**

- Growth & CRM teams managing user engagement flows
- Ecommerce marketers running time-sensitive email journeys
- Marketing automation pros using low-code CRM stacks
- Data teams building custom campaign triggers from CDPs

✅ **Features**

- 🔁 Two modular flows: "Insert user_id" and "Sending Email"
- 🧠 Select flow using flow_id from templates in NocoDB
- ✏️ Insert user data into n8n-transaction-ecrm with processing status
- 🔍 Filter duplicate users by user_id to avoid over-sending
- 📧 Validate email fields and flag disposables
- 📨 Send personalized emails using Brevo template parameters
- 📊 Track delivery with sent_result, sent_at, and status updates
- 🕒 Runs every 30 minutes via schedule trigger

🛠 **How to Use**

1. Set your flow: In the Setup Flow node, change the flow_id to match a row in your n8n-templates-ecrm table.
2. Prepare your tables in NocoDB:
   - cdp-ecrm: contains users (user_id, email, first_name, phone_number)
   - n8n-templates-ecrm: contains flows with metadata
   - n8n-transaction-ecrm: stores and updates user send status
3. Configure credentials: NocoDB API Token and Brevo (Sendinblue) API Key.
4. Trigger the flows: Run "Insert user_id" manually or on a schedule to prepare users; "Sending Email" runs automatically every 30 minutes.

📌 **Notes**

- Disposable email domains are filtered using regex (a sketch of this validation step appears below).
- Status values:
  - 0-processing → just inserted
  - 1-sending → ready to send
  - 2-sent → email sent successfully
  - 3-no-email → missing email address
  - 4-disposal-email → disposable or banned email
- Easily duplicate the "Insert user_id" flow to add more campaigns.
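A minimal sketch of the validation step, assuming user rows carry an email field. The disposable-domain pattern here is illustrative; extend it to match the regex actually used in the workflow.

```javascript
// Assign a send status per user row, mirroring the status values listed above.
const disposablePattern = /@(mailinator\.com|10minutemail\.com|guerrillamail\.com|yopmail\.com)$/i;

return $input.all().map((item) => {
  const email = (item.json.email ?? "").trim();
  let status;
  if (!email) {
    status = "3-no-email";        // missing email address
  } else if (disposablePattern.test(email)) {
    status = "4-disposal-email";  // disposable or banned email
  } else {
    status = "1-sending";         // ready to send
  }
  return { json: { ...item.json, status } };
});
```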
by vinci-king-01
Smart Supplier Health Monitor with ScrapeGraphAI Risk Detection and Multi-Channel Alerts

🎯 **Target Audience**

- Procurement managers and directors
- Supply chain risk analysts
- CFOs and financial controllers
- Vendor management teams
- Enterprise risk managers
- Operations managers
- Contract administrators
- Business continuity planners

🚀 **Problem Statement**

Manual supplier monitoring is reactive and time-consuming, often missing early warning signs of financial distress that could disrupt your supply chain. This template solves the challenge of proactive supplier health surveillance by automatically monitoring financial indicators, news sentiment, and market conditions to predict supplier risks before they impact your business operations.

🔧 **How it Works**

This workflow automatically monitors your critical suppliers' financial health using AI-powered web scraping, analyzes multiple risk factors, identifies alternative suppliers when needed, and sends intelligent alerts through multiple channels to ensure your procurement team can act quickly on emerging risks.

Key Components

- Weekly Health Check Scheduler - Automated trigger based on supplier criticality levels
- Supplier Database Loader - Dynamic supplier portfolio management with risk-based monitoring frequency
- ScrapeGraphAI Website Analyzer - AI-powered extraction of financial health indicators from company websites
- Financial News Scraper - Intelligent monitoring of financial news and sentiment analysis
- Advanced Risk Scorer - Industry-adjusted risk calculation with failure probability modeling
- Alternative Supplier Finder - Automated identification and ranking of backup suppliers
- Multi-Channel Alert System - Email, Slack, and API notifications with escalation rules

📊 **Risk Analysis Specifications**

The template performs comprehensive financial health analysis with the following parameters (a worked scoring sketch appears at the end of this template):

| Risk Factor | Weight | Score Impact | Description |
|-------------|--------|--------------|-------------|
| Financial Issues | 40% | +0-24 points | Revenue decline, debt levels, cash flow problems |
| Operational Risks | 30% | +0-18 points | Management changes, restructuring, capacity issues |
| Market Risks | 20% | +0-12 points | Industry disruption, regulatory changes, competition |
| Reputational Risks | 10% | +0-6 points | Negative news, legal issues, public sentiment |

Industry Risk Multipliers:
- Technology: 1.1x (Higher volatility)
- Manufacturing: 1.0x (Baseline)
- Energy: 1.2x (Regulatory risks)
- Financial: 1.3x (Market sensitivity)
- Logistics: 0.9x (Generally stable)

Risk Levels & Actions:
- **Critical Risk**: Score ≥ 75 (CEO/CFO escalation, immediate transition planning)
- **High Risk**: Score ≥ 55 (Procurement director escalation, backup activation)
- **Medium Risk**: Score ≥ 35 (Manager review, increased monitoring)
- **Low Risk**: Score < 35 (Standard monitoring)

🏢 **Supplier Management Features**

| Feature | Critical Suppliers | High Priority | Medium Priority |
|---------|-------------------|---------------|-----------------|
| Monitoring Frequency | Weekly | Bi-weekly | Monthly |
| Risk Threshold | 35+ points | 40+ points | 50+ points |
| Alert Recipients | C-Level + Directors | Directors + Managers | Managers only |
| Alternative Suppliers | 3+ pre-qualified | 2+ identified | 1+ researched |
| Transition Timeline | 24-48 hours | 1-2 weeks | 1-3 months |

🛠️ **Setup Instructions**

Estimated setup time: 25-30 minutes

Prerequisites

- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Gmail account for email alerts (or alternative email service)
- Slack workspace with webhook or bot token
- Supplier database or CRM system API access
- Basic understanding of procurement processes

Step-by-Step Configuration

1. Configure ScrapeGraphAI Credentials
   - Sign up for a ScrapeGraphAI API account
   - Navigate to Credentials in your n8n instance
   - Add new ScrapeGraphAI API credentials with your API key
   - Test the connection to ensure proper functionality
2. Set up Email Integration
   - Add Gmail OAuth2 credentials in n8n
   - Configure sender email and authentication
   - Test email delivery with a sample message
   - Set up email templates for different risk levels
3. Configure Slack Integration
   - Create a Slack webhook URL or bot token
   - Add Slack credentials to n8n
   - Configure target channels for different alert types
   - Customize Slack message formatting and buttons
4. Load Supplier Database
   - Update the "Supplier Database Loader" node with your supplier data
   - Configure supplier categories, contract values, and criticality levels
   - Set monitoring frequencies based on supplier importance
   - Add supplier website URLs and contact information
5. Customize Risk Parameters
   - Adjust industry risk multipliers for your business context
   - Modify risk scoring thresholds based on risk tolerance
   - Configure economic factor adjustments
   - Set failure probability calculation parameters
6. Configure Alternative Supplier Database
   - Populate the alternative supplier database in the "Alternative Supplier Finder" node
   - Add supplier ratings, capacities, and specialties
   - Configure geographic coverage and certification requirements
   - Set suitability scoring parameters
7. Set up Procurement System Integration
   - Configure the procurement system webhook endpoint
   - Add API authentication credentials
   - Test webhook payload delivery
   - Set up automated data synchronization
8. Test and Validate
   - Run test scenarios with sample supplier data
   - Verify ScrapeGraphAI extraction accuracy
   - Check risk scoring calculations and thresholds
   - Confirm all alert channels are working properly
   - Test alternative supplier recommendations

🔄 **Workflow Customization Options**

Modify Risk Analysis
- Add custom risk indicators specific to your industry
- Implement sector-specific economic adjustments
- Configure contract-specific risk factors
- Add ESG (Environmental, Social, Governance) scoring

Extend Data Sources
- Integrate credit rating agency APIs (Dun & Bradstreet, Experian)
- Add financial database connections (Bloomberg, Reuters)
- Include social media sentiment analysis
- Connect to government regulatory databases

Enhance Alternative Supplier Management
- Add automated supplier qualification workflows
- Implement dynamic pricing comparison
- Create supplier performance scorecards
- Add geographic risk assessment

Advanced Analytics
- Implement predictive failure modeling
- Add supplier portfolio optimization
- Create supply chain risk heatmaps
- Generate automated compliance reports

📈 **Use Cases**

- **Supply Chain Risk Management**: Proactive monitoring of supplier financial stability
- **Procurement Optimization**: Data-driven supplier selection and management
- **Business Continuity Planning**: Automated backup supplier identification
- **Financial Risk Assessment**: Early warning system for supplier defaults
- **Contract Management**: Risk-based contract renewal and negotiation
- **Vendor Diversification**: Strategic supplier portfolio management

🚨 **Important Notes**

- Respect ScrapeGraphAI API rate limits and terms of service
- Implement appropriate delays between supplier assessments
- Keep all API credentials secure and rotate them regularly
- Monitor API usage to manage costs effectively
- Ensure compliance with data privacy regulations (GDPR, CCPA)
- Regularly update supplier databases and contact information
- Review and adjust risk parameters based on market conditions
- Maintain confidentiality of supplier financial information

🔧 **Troubleshooting**

Common Issues:
- ScrapeGraphAI extraction errors: Check API key validity and rate limits
- Email delivery failures: Verify Gmail credentials and permissions
- Slack notification failures: Check webhook URL and channel permissions
- False positive alerts: Adjust risk scoring thresholds and industry multipliers
- Missing supplier data: Verify website URLs and accessibility
- Alternative supplier errors: Check supplier database completeness

Monitoring Best Practices:
- Set up workflow execution monitoring and error alerts
- Regularly review and update supplier information
- Monitor API usage and costs across all integrations
- Validate risk scoring accuracy with historical data
- Test disaster recovery and backup procedures

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Procurement best practices and industry standards
- Financial risk assessment methodologies
- Supply chain management resources and tools
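Below is a worked sketch of the risk score implied by the table above: component scores are capped by their weights (financial 0-24, operational 0-18, market 0-12, reputational 0-6), then scaled by the industry multiplier. The component inputs shown are assumed sample values, not the template's actual scraper output.

```javascript
// Industry multipliers and thresholds taken directly from the tables above.
const INDUSTRY_MULTIPLIERS = {
  technology: 1.1, manufacturing: 1.0, energy: 1.2, financial: 1.3, logistics: 0.9,
};

function riskLevel(score) {
  if (score >= 75) return "critical"; // CEO/CFO escalation
  if (score >= 55) return "high";     // procurement director escalation
  if (score >= 35) return "medium";   // manager review
  return "low";                       // standard monitoring
}

// Example supplier with assumed raw component scores:
const supplier = { industry: "energy", financial: 20, operational: 12, market: 8, reputational: 4 };
const base = supplier.financial + supplier.operational + supplier.market + supplier.reputational; // 44
const score = Math.round(base * (INDUSTRY_MULTIPLIERS[supplier.industry] ?? 1.0)); // 44 * 1.2 ≈ 53

return [{ json: { score, level: riskLevel(score) } }]; // → { score: 53, level: "medium" }
```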
by Naveen Choudhary
**Who is this for?**

Marketing, content, and enablement teams that need a quick, human-readable summary of every new video published by the YouTube channels they care about—without leaving Slack.

**What problem does this workflow solve?**

Manually checking multiple channels, skimming long videos, and pasting the highlights into Slack wastes time. This template automates the whole loop: detect a fresh upload from your selected channels → pull subtitles → distill the key take-aways with GPT-4o-mini → drop a neatly-formatted digest in Slack.

**What this workflow does**

1. A Schedule Trigger fires every 10 min, then grabs a list of YouTube RSS feeds from a Google Sheet.
2. HTTP + XML nodes fetch and parse each feed; only brand-new videos continue (see the freshness-check sketch below).
3. The YouTube API fetches the title/description, and RapidAPI grabs the English subtitles.
4. Code nodes build an AI payload; OpenAI returns a JSON summary + article.
5. A formatter turns that JSON into Slack Block Kit, and Slack posts it.
6. Processed links are appended back to the “Video Links” sheet to prevent dupes.

**Setup**

1. Make a copy of this Google Sheet and connect a Google Sheets OAuth2 credential with edit rights.
2. Slack App: create → add chat:write, channels:read, app_mention; enable Event Subscriptions; install and store the Bot OAuth token in an n8n Slack credential.
3. RapidAPI key for https://yt-api.p.rapidapi.com/subtitles (300 free calls/mo) → save as HTTP Header Auth.
4. OpenAI key → save in an OpenAI credential.
5. Add your RSS feed URLs to the “RSS Feed URLs” tab; press Execute Workflow.

**How to customise**

- Adjust the schedule interval or freshness window in “If newly published”.
- Swap the OpenAI model or prompt for shorter/longer digests.
- Point the Slack node at a different channel or DM.
- Extend the AI payload to include thumbnails or engagement stats.

**Use-case ideas**

- **Product marketing**: Instantly brief sales & CS teams when a competitor uploads a feature demo.
- **Internal learning hub**: Auto-summarise conference talks and share bullet-point notes with engineers.
- **Social media managers**: Get ready-to-post captions and key moments for re-purposing across platforms.
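A minimal sketch of the freshness check behind "If newly published": keep only videos published within the last run window (10 minutes here, matching the Schedule Trigger). The field names on the parsed RSS items are assumptions; adjust them to your XML node's output.

```javascript
// Keep only items whose publish time falls inside the schedule window.
const WINDOW_MS = 10 * 60 * 1000; // match the Schedule Trigger interval
const now = Date.now();

return $input.all().filter((item) => {
  // `published` / `pubDate` are assumed RSS field names — check your feed's parsed shape.
  const publishedAt = new Date(item.json.published ?? item.json.pubDate).getTime();
  return Number.isFinite(publishedAt) && now - publishedAt <= WINDOW_MS;
});
```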
by Aditya Gaur
**Who is this template for?**

This template is designed for teams who need to automate data retrieval from SharePoint lists using n8n. It is ideal for users who want to authenticate via OAuth and then use the token to access SharePoint API endpoints, pulling list data directly into n8n.

**How it works**

The template first generates an OAuth token using the Microsoft OAuth API. This token is then used to authenticate requests to the SharePoint List API, allowing the workflow to fetch data from a specified SharePoint list. By following the n8n workflow, the user can configure the necessary credentials and endpoints to automate SharePoint data access securely.

**Setup steps**

- Step 1: Replace {tenant_id}, {client_id}, and {client_secret} with your Azure AD details for OAuth authentication (a token-request sketch follows these steps).
- Step 2: Specify the SharePoint list API endpoint in the template (under the "SharePoint List Fetch" node).
- Step 3: Configure the SharePoint list URL and make adjustments for specific data fields if necessary.
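A minimal sketch of Step 1, requesting the token via the client credentials grant. The placeholders are your Azure AD values; the scope shown targets the SharePoint REST API (adjust per your tenant's app permissions, as some tenants require certificate-based app-only auth for SharePoint).

```javascript
// Request an OAuth token from Microsoft identity platform (client credentials grant).
const tokenResponse = await fetch(
  "https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
  {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "{client_id}",
      client_secret: "{client_secret}",
      scope: "https://YOUR_TENANT.sharepoint.com/.default", // assumed tenant name
    }),
  }
);
const { access_token } = await tokenResponse.json();
// Pass access_token as a Bearer header to the SharePoint List Fetch request, e.g.:
// GET https://YOUR_TENANT.sharepoint.com/sites/YOUR_SITE/_api/web/lists/getbytitle('YOUR_LIST')/items
```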
by Hubschrauber
**Overview**

This template integrates an IOT multi-button switch (meant for controlling a dimmable light) with Spotify playback functions, via MQTT messages. This isn't likely to work without some tinkering, but should be a good head start on receiving/routing IOT/MQTT messages and hooking up to a Spotify-like API.

**Requirements**

- An IOT device capable of generating events that can be delivered as MQTT messages through an MQTT Broker, e.g. an Ikea Strybar remote
- An MQTT Broker to which n8n can connect and consume messages, e.g. Zigbee2MQTT in HomeAssistant
- A Spotify developer-account (which provides access to API functions via OAuth2 authorization)
- A Spotify user-account (which provides access to Spotify streamed content, user settings, etc.)

**Setup**

1. Create an MQTT Credential item in n8n and assign it to the MQTT Trigger node.
2. Modify the MQTT trigger node to match the topic for your IOT device messages.
3. Modify the switch/router nodes to map to the message text from your IOT button (e.g. arrow_left_click, brightness_up_click, etc.).
4. Create a Spotify developer-account (or use the login for a user-account).
5. Create an "App" in the developer-account to represent the n8n workflow.
   - Chicken/Egg ALERT: The n8n Spotify Credentials dialog box will display the "OAuth Redirect URL" required to create the App in Spotify, but the n8n Credential item itself cannot be created until AFTER the App has been created.
6. Create a Spotify Credentials item in n8n.
   - Open the Settings on the Spotify App to find the required Client ID and Client Secret information.
   - ALERT: Save this before proceeding to the Connect step.
7. Connect the n8n Spotify Credential item to the Spotify user-account.
   - ALERT: Expect n8n to open a separate OAuth2 window on authorization.spotify.com here, which may require a login to the Spotify user-account.
8. Open each of the HTTP and Spotify nodes, one by one, and re-assign to your Spotify Credential (try not to miss any). (Then, probably, upvote this feature request: https://community.n8n.io/t/select-credentials-via-expression/5150)
9. Modify the variable values in the Globals node to match your own environment:
   - target_spotify_playback_device_name - The name of a playback device available to the Spotify user-account
   - favorite_playlist_name - The name of a playlist to start when one of the button actions is indicated in the MQTT message. Used in the example "Custom Function 2" sequence.

**Notes**

- You're on your own for getting the multi-button remote switch talking to MQTT, figuring out what the exact MQTT topic name is, and mapping the message parts to the workflow (actions, etc.).
- The next / previous actions are wired up to not transfer control to the target device. This alternative routing just illustrates a different behavior than the remaining actions/functions, which include activation of the target device when required.
- Some of the Spotify API interactions use the Spotify node in n8n, but many of the available Spotify API functions are limited or not implemented at all in the Spotify node. So, in other cases, a regular HTTP node is used with the Spotify OAuth2 API credential instead. By modifying one of the examples included in the template, it should be possible to call nearly anything the Spotify API has to offer (see the sketch at the end of this section).

**Spotify+n8n OAuth Mini-Tutorial**

Definitions

- The developer-account is the Spotify login for creating a spotify-app which will be associated with a client id and client secret.
- The user-account is the Spotify login that has permission to stream songs, set up playback devices, etc.
- ++A spotify-login allows access to a Spotify user-account, or a Spotify developer-account, OR BOTH++
- The spotify-app, which has a client id and client secret, is an object created in the developer-account.
- The app-implementation (in this case, an ++n8n workflow++) uses the spotify-app's credentials (client id / client secret) to call Spotify API endpoints on behalf of a user-account.

Using One Spotify Login as Both User and Developer

When an n8n Spotify-node or HTTP-node (i.e. an app-implementation) calls a Spotify API endpoint, the Credentials item may be using the client id and client secret from a spotify-app, which was created in a developer-account that is ++one and the same spotify-login as the user-account++. However, it helps to remind yourself that from the Spotify API server's perspective, the developer-account + spotify-app, and the user-account, are ++two independent entities++.

n8n Spotify-OAuth2-API Credential Authorization Process

The 2 layers/steps in the process of authorizing an n8n Spotify-OAuth2-API credential to make API calls are:
1. n8n must identify itself to Spotify as the app-implementation associated with the developer-account/spotify-app by sending the app's credentials (client id and client secret) to Spotify. The Client ID and Client Secret are supplied in the n8n Spotify OAuth2 Credentials UI/dialog-box.
2. Separately, n8n must obtain an authorization token from Spotify to represent the permissions granted by the user to execute actions (call API endpoints) on behalf of the user (i.e. access things that belong to the user-account). This authorization for the user-account access is obtained when the "Connect" or "Reconnect" button is clicked in the n8n Spotify Credentials UI/dialog-box (which pops up a separate authorization UI/browser-window managed by Spotify).

The Authorization for a given spotify-app stays "registered" in the user-account until revoked. See: https://support.spotify.com/us/article/spotify-on-other-apps/ (direct link: https://www.spotify.com/account/apps/).

More than one user-account can be authorized for a given spotify-app. A particular n8n Spotify-OAuth2-API credential item appears to cache an authorization token for the user-account that was most recently authorized. Up to 25 users can be allowed access to a spotify-app in Developer-Mode, but any user-account other than the one associated with the developer-account must be added by email address at https://developer.spotify.com/dashboard/{{app-credential-id}}/users

ALERT: IF the browser running the n8n UI is ALSO logged into a Spotify account, and the spotify-app is already authorized for that Spotify account, the "reconnect" button in the Spotify-OAuth2-API credential dialog may automatically grab a token for that logged-in user-account, offering no opportunity to select a different user-account. This can be managed somewhat by using "incognito" browser windows for n8n, Spotify, or both.

References

- n8n Spotify Credentials Docs
- Spotify Authorization Docs
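As a sketch of one of the HTTP-node style calls mentioned in the Notes: starting a playlist on a named device via the Web API's start/resume playback endpoint (one of the functions the n8n Spotify node doesn't cover). The device id and playlist id are placeholders; the device id would come from GET /v1/me/player/devices, matched against target_spotify_playback_device_name.

```javascript
// Start a playlist on a specific device via the Spotify Web API.
await fetch("https://api.spotify.com/v1/me/player/play?device_id=YOUR_DEVICE_ID", {
  method: "PUT",
  headers: {
    // In the workflow, the Spotify OAuth2 API credential injects this header for you.
    Authorization: "Bearer YOUR_ACCESS_TOKEN",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    context_uri: "spotify:playlist:YOUR_PLAYLIST_ID", // resolved from favorite_playlist_name
  }),
});
```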
by David Ashby
Complete MCP server exposing 3 Compliance API operations to AI agents.

⚡ **Quick Setup**

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Credentials: Add Compliance API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 **How it Works**

This workflow converts the Compliance API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (sketched at the end of this section)
• Native Integration: Returns responses directly to the AI agent

📋 **Available Operations (3 total)**

🔧 Listing_Violation (1 endpoint)
• GET /listing_violation: Get Violation Summary Counts

🔧 Listing_Violation_Summary (1 endpoint)
• GET /listing_violation_summary: This call returns listing violation counts for a seller

🔧 Suppress_Listing_Violation (1 endpoint)
• POST /suppress_listing_violation: Suppress Listing Violation

🤖 **AI Integration**

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Compliance API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 **Usage Examples**

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ **Benefits**

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
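As a sketch of what those $fromAI() placeholders look like in practice: inside an HTTP Request node's parameter field, an n8n expression like the one below lets the calling AI agent supply the value at request time. The parameter name and description here are illustrative, not copied from the template's nodes.

```
{{ $fromAI('compliance_type', 'The compliance type to query', 'string') }}
```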
by David Ashby
Complete MCP server exposing 2 topupsapi API operations to AI agents.

⚡ **Quick Setup**

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Credentials: Add topupsapi credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 **How it Works**

This workflow converts the topupsapi API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://polls.apiblueprint.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 **Available Operations (2 total)**

🔧 Questions (2 endpoints)
• GET /questions: Create Question 1
• POST /questions: Create a New Question

🤖 **AI Integration**

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native topupsapi API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 **Usage Examples**

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ **Benefits**

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.