by moosa
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## 🚀 Overview
This workflow enables a powerful AI-driven virtual assistant that dynamically responds to website queries using webhook input, Pinecone vector search, and OpenAI agents — all smartly routed based on the source website.

## 🔧 How It Works
**Webhook Trigger**
The workflow starts with a Webhook node that receives query parameters:
- `query`: The user's question
- `userId`: Unique user identifier
- `site`: Website identifier (e.g., `test_site`)
- `page`: Page identifier (e.g., `homepage`, `pricing`)

**Smart Routing**
A Switch node directs the request to the correct AI agent based on the `site` value. Each AI agent uses:
- An OpenAI GPT-4/3.5 model
- A Pinecone vector store for context-aware answers
- SQL-based memory for consistent multi-turn conversation

**Contextual AI Agent**
Each agent is customized per website using:
- Site-specific Pinecone namespaces
- Predefined system prompts to stay in scope
- Webhook context including `page`, `site`, and `userId`

**Final Response**
The response is sent back to the originating website using the Respond to Webhook node.

## 🧠 Use Case
Ideal for multi-site platforms that want to serve tailored AI chat experiences per domain or page — whether it's support, content discovery, or interactive agents.

## ✅ Highlights
- 🧠 Vector search using Pinecone for contextual responses
- 🔀 Website-aware logic with Switch node routing
- 🔐 No hardcoded API keys
- 🧩 Modular agents for scalable multi-site support
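The routing step above can be sketched as a plain function. This is an illustrative sketch only: the agent names and the `docs_site` entry are hypothetical, and in the actual workflow this logic lives in the Switch node rather than code.

```javascript
// Sketch of what the Switch node does: validate the webhook parameters
// and pick the AI agent for the requesting site. Agent names here are
// placeholders, not the template's actual node names.
const AGENTS = {
  test_site: 'Test Site Agent',
  docs_site: 'Docs Site Agent', // hypothetical second site
};

function routeRequest(params) {
  const { query, userId, site, page } = params;
  if (!query || !userId || !site) {
    throw new Error('Missing required webhook parameter');
  }
  const agent = AGENTS[site];
  if (!agent) throw new Error(`No agent configured for site: ${site}`);
  // Context forwarded to the agent; page falls back to 'homepage'
  return { agent, context: { query, userId, site, page: page || 'homepage' } };
}
```

Keeping the per-site configuration in one map like this is what makes the setup modular: adding a new website is one new entry plus a Pinecone namespace.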
by 飯盛 正幹
# Analyze Furusato Nozei trends from Google News to Slack

This workflow acts as a specialized market analyst for Japan's "Furusato Nozei" (Hometown Tax) system. It automates the process of monitoring related news, validating keyword popularity via search trends, and delivering a concise, strategic report to Slack. By combining RSS feeds, AI agents, and real-time search data, this template helps marketers and municipal researchers stay on top of the highly competitive Hometown Tax market without manual searching.

## 👥 Who is this for?
- **Municipal Government Planners:** To track trending return gifts and competitor strategies.
- **E-commerce Marketers:** To identify high-demand keywords for Furusato Nozei portals.
- **Content Creators:** To find trending topics for blogs or social media regarding tax deductions.
- **Market Researchers:** To monitor the seasonality and shifting interests in the Hometown Tax sector.

## ⚙️ How it works
1. **News Ingestion:** The workflow triggers on a schedule and fetches the latest "Furusato Nozei" articles from Google News via RSS.
2. **AI Analysis & Extraction:** An AI Agent (using OpenRouter) summarizes the news cluster and identifies the most viable search keyword (e.g., "Scallops," "Travel Vouchers," or specific municipalities).
3. **Data Validation:** The workflow queries the Google Trends API (via SerpApi) to retrieve search volume history for the extracted keyword in Japan.
4. **Strategic Reporting:** A second AI Agent analyzes the search trend data alongside the keyword to generate a market insight report.
5. **Delivery:** The final report is formatted and sent directly to a Slack channel.

## 🛠️ Requirements
To use this workflow, you will need:
- **n8n** (version 1.0 or later recommended)
- **OpenRouter API key** (or you can swap the model nodes for OpenAI/Anthropic)
- **SerpApi key** (required to fetch Google Trends data programmatically)
- **Slack account** (with permissions to post to a channel)

## 🚀 How to set up
1. **Configure Credentials:**
   - Add your OpenRouter API key to the Chat Model nodes.
   - Add your SerpApi key to the Google Trends API node.
   - Connect your Slack account in the Send a message node.
2. **Check the RSS Feed:** The RSS Read node is pre-configured for "Furusato Nozei" (ふるさと納税). You can leave this as is.
3. **Regional Settings:** The workflow is pre-set for Japan (`jp` / `ja`). If you need to change this, check the Workflow Configuration and Google Trends API nodes.
4. **Schedule:** Enable the Schedule Trigger node to run at your preferred time (the default is 9:00 AM JST).

## 🎨 How to customize
- **Change the Topic:** While this template is optimized for Furusato Nozei, you can change the RSS feed URL to track other Japanese market trends (e.g., NISA, Inbound Tourism).
- **Swap AI Models:** The template uses OpenRouter, but you can easily replace the Chat Model nodes with OpenAI (GPT-4) or Anthropic (Claude), depending on your preference.
- **Adjust AI Prompts:** The AI prompts are currently in Japanese to match the content. You can modify the system instructions in the AI Agent nodes if you prefer English reports.
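The Data Validation step boils down to a single SerpApi request. The sketch below builds that request URL; the parameter names follow SerpApi's `google_trends` engine, but treat them as assumptions and verify them against the Google Trends API node's configuration.

```javascript
// Build the SerpApi Google Trends request for the keyword extracted by
// the first AI Agent. Parameter names are assumptions based on SerpApi's
// google_trends engine documentation.
function buildTrendsUrl(keyword, apiKey) {
  const params = new URLSearchParams({
    engine: 'google_trends',
    q: keyword,              // e.g., a return gift or municipality name
    geo: 'JP',               // the workflow's pre-set region: Japan
    hl: 'ja',                // pre-set language
    data_type: 'TIMESERIES', // search volume history over time
    api_key: apiKey,
  });
  return `https://serpapi.com/search.json?${params.toString()}`;
}
```

Changing the `geo` and `hl` values here mirrors the Regional Settings step above if you retarget the workflow at another country.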
by Tristan V
## Who is this for?
Businesses and developers who want to automate customer support or engagement on Facebook Messenger using AI-powered responses.

## What does it do?
Creates an intelligent Facebook Messenger chatbot that:
- Responds to messages using OpenAI (gpt-4o-mini)
- Batches rapid-fire messages into a single AI request
- Maintains conversation history (50 messages per user)
- Shows professional UX feedback (seen indicators, typing bubbles)

## How it works
1. **Webhook Verification:** Handles Facebook's GET verification request
2. **Message Reception:** Receives incoming messages via POST webhook
3. **Message Batching:** Waits 3 seconds to collect multiple quick messages
4. **AI Processing:** Sends the combined message to OpenAI with conversation context
5. **Response Delivery:** Formats and sends the AI response back to Messenger

## Setup
1. Configure a Facebook Graph API credential with your Page Access Token
2. Configure an OpenAI API credential with your API key
3. Set your verify token in the "Is Token Valid?" node
4. Register the webhook URL in the Facebook Developer Console

## Key Features
- **Message Batching:** Combines "Hey" + "Can you help" + "with my order?" into one request
- **Conversation Memory:** Remembers context from previous messages
- **Echo Filtering:** Prevents responding to your own messages
- **Response Formatting:** Cleans markdown for Messenger's 2,000-character limit
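The batching step can be sketched as a small function: messages arriving within the window are merged into one prompt before the OpenAI call. The message shape (`text`, `timestamp`) is illustrative, not the workflow's exact payload.

```javascript
// Merge rapid-fire messages from the same user into one prompt.
// Messages outside the batching window are left for the next batch.
function batchMessages(messages, windowMs = 3000) {
  if (messages.length === 0) return null;
  const start = messages[0].timestamp;
  return messages
    .filter((m) => m.timestamp - start <= windowMs) // within the 3s window
    .map((m) => m.text)
    .join(' '); // one combined prompt for a single AI request
}
```

Batching this way means three quick messages cost one OpenAI request instead of three, and the model sees the user's full thought at once.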
by Takumi Oku
## Who is this for
- **Space Enthusiasts & Music Lovers:** Discover new music paired with stunning cosmic visuals.
- **Community Managers:** Keep specific Slack channels lively with engaging, creative daily content.
- **n8n Learners:** Learn how to chain image analysis (vision), logic, and API integrations (Spotify/Slack).

## How it works
1. **Schedule:** The workflow runs every night at 10 PM.
2. **Mood Logic:** It checks the day of the week to adjust the energy level (e.g., higher energy for Friday nights, calmer vibes for Mondays).
3. **Visual Analysis:** OpenAI (GPT-4o) analyzes the NASA APOD image to determine its color palette, mood, and subject matter, converting these into musical parameters (valence, energy).
4. **Curation:** Spotify searches for a track that matches these specific parameters.
5. **Creative Writing:** OpenAI generates a short poem or caption linking the image to the song.
6. **Delivery:** The image, track link, and poem are posted to Slack, and the track is automatically saved to a designated Spotify playlist.

## Requirements
- **NASA API key** (free)
- **OpenAI API key** (must have access to the GPT-4o model)
- **Spotify Developer credentials** (Client ID and Client Secret)
- **Slack** workspace and bot token

## How to set up
1. Set up your credentials for NASA, OpenAI, Spotify, and Slack in n8n.
2. Create a specific playlist in Spotify and copy its Playlist ID.
3. Copy the Channel ID from the Slack channel where you want to post.
4. Paste these IDs into the respective nodes (marked with `<PLACEHOLDER>`) or use the Set Fields node to manage them globally.
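The Mood Logic step can be illustrated with a tiny day-of-week mapping. The specific energy/valence numbers below are assumptions for the sketch, not the template's actual values; both parameters are on Spotify's 0–1 scale.

```javascript
// Map the day of the week to target musical parameters. Values are
// illustrative: higher energy for weekend nights, calmer for Mondays.
function moodForDay(dayIndex) { // 0 = Sunday … 6 = Saturday
  if (dayIndex === 5 || dayIndex === 6) {
    return { energy: 0.8, valence: 0.7 }; // Friday/Saturday night energy
  }
  if (dayIndex === 1) {
    return { energy: 0.3, valence: 0.5 }; // calmer vibes for Mondays
  }
  return { energy: 0.5, valence: 0.6 };   // neutral midweek default
}
```

The vision step then nudges these baseline values according to the APOD image's palette and mood before the Spotify search runs.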
by PDF Vector
## Overview
Researchers and academic institutions need efficient ways to process and analyze large volumes of research papers and academic documents, including scanned PDFs and image-based materials (JPG, PNG). Manual review of academic literature is time-consuming and makes it difficult to identify trends, track citations, and synthesize findings across multiple papers. This workflow automates the extraction and analysis of research papers and scanned documents using OCR technology, creating a searchable knowledge base of academic insights from both digital and image-based sources.

## What You Can Do
- Extract key information from research papers automatically, including methodologies, findings, and citations
- Build a searchable database of academic insights from both digital and image-based sources
- Track citations and identify research trends across multiple papers
- Synthesize findings from large volumes of academic literature efficiently

## Who It's For
Research institutions, university libraries, R&D departments, academic researchers, literature review teams, and organizations tracking scientific developments in their field.

## The Problem It Solves
Literature reviews require reading hundreds of papers to identify relevant findings and methodologies. This template automates the extraction of key information from research papers, including methodologies, findings, and citations. It builds a searchable database that helps researchers quickly find relevant studies and identify research gaps.

## Setup Instructions
1. Install the PDF Vector community node with academic features
2. Configure the PDF Vector API with academic search enabled
3. Configure Google Drive credentials for document access
4. Set up a database for storing extracted research data
5. Configure citation tracking preferences
6. Set up automated paper ingestion from sources
7. Configure summary generation parameters

## Key Features
- Google Drive integration for research paper retrieval (PDFs, JPGs, PNGs)
- OCR processing for scanned documents and images
- Automatic extraction of paper metadata and structure from any format
- Methodology and findings summarization from PDFs and images
- Citation network analysis and metrics
- Multi-paper trend identification
- Searchable research database creation
- Integration with academic search engines

## Customization Options
- Add field-specific extraction templates
- Configure automated paper discovery from arXiv, PubMed, etc.
- Implement citation alert systems
- Create research trend visualizations
- Add collaboration features for research teams
- Build API endpoints for research queries
- Integrate with reference management tools

## Implementation Details
The workflow uses PDF Vector's academic features to understand research paper structure and extract meaningful insights. It processes papers from various sources, identifies key contributions, and creates structured summaries. The system tracks citations to measure impact and identifies emerging research trends by analyzing multiple papers in a field.

> **Note:** This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
by Naveen Choudhary
Automatically gather hundreds of real customer reviews from five major platforms in one run using the Thordata API and proxy — Trustpilot, Capterra, Chrome Web Store, TrustRadius, and Product Hunt — then let GPT-4.1 perform deep collective sentiment analysis, uncover common praises & complaints, flag critical issues, assess churn risk, and deliver actionable recommendations straight to your inbox as a stunning executive HTML report.

## Who's it for
- Product managers & founders
- Growth and marketing teams
- Customer success & support leads
- Agencies delivering competitor or product review reports

## How it works
1. Submit product URLs via form, webhook, or use the defaults
2. Smart, Cloudflare-safe scraping with automatic pagination
3. A universal parser standardizes every review format
4. Global deduplication using deterministic unique IDs
5. GPT-4.1 analyzes all reviews collectively (not one by one)
6. A responsive HTML email with sentiment badges, stats, and recommendations

## Requirements
- Thordata API key (the free tier works), set as an HTTP Header Auth credential
- OpenAI API key
- Gmail account (or replace with any email node)

## How to set up
1. Add your Thordata and OpenAI credentials
2. Connect Gmail
3. Click "Execute Workflow" to instantly test with Thordata's own reviews

## How to customize
- Edit the default product in the "Prepare Review Sources" node
- Modify the AI prompt or email design anytime
- Add more sources or change the output format easily

Zero browser automation · Rate-limit safe · Fully deduplicated · Plug-and-play in minutes.
by IranServer.com
# Automated Blog Content Generation from Google Trends to WordPress

This n8n workflow automatically generates SEO-friendly blog content based on trending topics from Google Trends and publishes it to WordPress. Perfect for content creators, bloggers, and digital marketers who want to stay on top of trending topics with minimal manual effort.

## Who's it for
- **Content creators** who need fresh, trending topic ideas
- **Bloggers** looking to automate their content pipeline
- **Digital marketers** wanting to capitalize on trending searches
- **WordPress site owners** seeking automated content generation
- **SEO professionals** who want to target trending keywords

## How it works
The workflow operates on a scheduled basis (daily at 8:45 PM by default) and follows this process:
1. **Trend Discovery:** Fetches the latest trending searches from Google Trends for a specific country (Iran by default)
2. **Content Research:** Performs Google searches on the top 3 trending topics to gather detailed information
3. **AI Content Generation:** Uses OpenAI's GPT-4o model to create SEO-friendly blog posts based on the trending topics and search results
4. **Structured Output:** Ensures the generated content has a proper title and content structure
5. **Auto-Publishing:** Automatically creates draft posts in WordPress

The AI is specifically prompted to create engaging, SEO-optimized content without revealing the automated sources, ensuring natural-sounding blog posts.

## How to set up
1. **Install the required community node:** `n8n-nodes-serpapi` for Google Trends and Search functionality
2. **Configure credentials:**
   - SerpApi: Sign up at serpapi.com and add your API key
   - OpenAI: Add your OpenAI API key for GPT-4o access
   - WordPress: Configure your WordPress site credentials
3. **Customize the country code:** Change the "Country" field in the "Edit Fields" node (currently set to "IR" for Iran)
4. **Adjust the schedule:** Modify the "Schedule Trigger" to run at your preferred time
5. **Test the workflow:** Run it manually first to ensure all connections work properly

## Requirements
- **SerpApi account** (for Google Trends and Search data)
- **OpenAI API access** (for content generation using GPT-4o)
- **WordPress site** with API access enabled

## How to customize the workflow
- **Change the target country:** Modify the country code in the "Edit Fields" node to target different regions
- **Adjust content quantity:** Change the limit in the "Limit" node to process more or fewer trending topics
- **Modify the AI prompt:** Edit the prompt in the "Basic LLM Chain" node to change the writing style or focus
- **Schedule frequency:** Adjust the "Schedule Trigger" for different posting frequencies
- **Content status:** Change from "draft" to "publish" in the WordPress node for immediate publishing
- **Add content filtering:** Insert additional nodes to filter topics by category or keywords
by AI/ML API | D1m7asis
# 📲 AI Multi-Model Telegram Chatbot (n8n + AIMLAPI)

This n8n workflow enables Telegram users to interact with multiple AI models dynamically using `#model_id` commands. It also supports a `/models` command to list all available models. Each user has a daily usage limit, tracked via Google Sheets.

## 🚀 Key Features
- **Dynamic Model Selection:** Users choose models on the fly via `#model_id` (e.g., `#openai/gpt-4o`).
- **`/models` Command:** Lists all available models grouped by provider.
- **Daily Limit Per User:** Enforced using Google Sheets.
- **Prompt Parsing:** Extracts the model and message from user input.
- **Logging:** Logs every request and result to Google Sheets for usage tracking.
- **Seamless Telegram Delivery:** Responses are sent directly back to the chat.

## 🛠 Setup Guide
1. **📲 Create a Telegram Bot**
   - Go to @BotFather.
   - Use `/newbot` and set a name and username.
   - Copy the generated API token.
2. **🔐 Add Telegram Credentials to n8n**
   - Go to n8n > Credentials > Telegram API.
   - Create a new credential with the BotFather token.
3. **📗 Google Sheets Setup**
   - Create a Google Sheet named `Sheet1`.
   - Add columns: `user_id | date | query | result`.
   - Share the sheet with your Service Account or OAuth email (depending on the auth method).
4. **🔌 Connect AIMLAPI**
   - Get your API key from AIMLAPI.
   - In n8n > Credentials, add AI/ML API with API Key: `your_key_here`.
5. **⚙️ Customize Limits & Enhancements**
   - Adjust daily limits in the Set Daily Limit node.
   - Optional: add NSFW content filtering, implement alias commands, extend with `/help`, `/usage`, and `/history`, or add inline button UX (advanced).

## 💡 How It Works
**Command examples:**
- Start a chat with a specific model: `#openai/gpt-4o Write a motivational quote.`
- Request the list of available models: `/models`

**Workflow logic:**
1. Receives a Telegram message.
2. A Switch node checks whether the message is `/models` or a prompt.
3. For `/models`, it fetches and sends a grouped list of models.
4. For prompts, it checks usage limits, parses the `#model_id` and prompt text, dynamically routes the request to the chosen model, and sends the AI's response back to the user.
5. Logs the query and result to Google Sheets.
6. If the daily limit is exceeded, it sends a limit-exceeded message instead.

## 🧪 Testing & Debugging Tips
- Test via a separate Telegram chat.
- Use Console/Set nodes to debug payloads.
- Always test commands by messaging the bot (not via "Execute Node").
- Validate the edge cases: missing `#model_id`, invalid `model_id`, and limit-exceeded handling.
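The prompt-parsing step can be sketched as a single regex: peel the `#model_id` off the front of the message and keep the rest as the prompt. The fallback default model used here is an assumption for illustration; the real workflow may instead treat a missing `#model_id` as an error case, as noted in the testing tips.

```javascript
// Split a Telegram message into { model, prompt }. A message like
// "#openai/gpt-4o Write a motivational quote." selects the model
// explicitly; otherwise fall back to a (hypothetical) default model.
function parseCommand(text, defaultModel = 'openai/gpt-4o') {
  const match = text.match(/^#(\S+)\s+([\s\S]+)/);
  if (match) {
    return { model: match[1], prompt: match[2].trim() };
  }
  // No #model_id prefix: route the whole message to the default model
  return { model: defaultModel, prompt: text.trim() };
}
```

This also covers the "missing `#model_id`" validation case: a bare `#openai/gpt-4o` with no prompt text fails the regex (no text after the ID), so it falls through to the default branch rather than producing an empty prompt for the selected model.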
by Supira Inc.
## 💡 How It Works
This workflow automatically detects new YouTube uploads, retrieves their transcripts, summarizes them in Japanese using GPT-4o mini, and posts the results to a selected Slack channel. It's ideal for teams who follow multiple creators, internal training playlists, or corporate webinars and want concise Japanese summaries in Slack without manual work.

Here's the flow at a glance:
1. **YouTube RSS Trigger:** Monitors a specific channel's RSS feed.
2. **HTTP Request via RapidAPI:** Fetches the video transcript (supports both English and Japanese).
3. **Code Node:** Merges segmented transcript text into one clean string.
4. **OpenAI (GPT-4o-mini):** Generates a natural-sounding, three-line Japanese summary.
5. **Slack Message:** Posts the title, link, and generated summary to #youtube-summary.

## ⚙️ Requirements
- n8n (v1.60 or later)
- RapidAPI account + youtube-transcript3 API key
- OpenAI API key (GPT-4o-mini recommended)
- Slack workspace with an OAuth connection

## 🧩 Setup Instructions
1. Replace `YOUR_RAPIDAPI_KEY_HERE` with your own RapidAPI key.
2. Add your OpenAI credential under Credentials → OpenAI.
3. Set your target Slack channel (e.g., #youtube-summary).
4. Enter the YouTube channel ID in the RSS Trigger node.
5. Activate the workflow and test with a recent video.

## 🎛️ Customization Tips
- Modify the OpenAI prompt to change the summary length or tone.
- Duplicate the RSS Trigger for multiple channels, then merge before summarization.
- Localize Slack messages using Japanese or English templates.

## 🚀 Use Case
Perfect for marketing teams, content curators, and knowledge managers who want to stay updated on YouTube content in Japanese without leaving Slack.
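The Code node in step 3 does roughly the following: transcript APIs typically return an array of timed segments, which get joined into one clean string before summarization. The `text` field name is an assumption about the youtube-transcript3 response shape.

```javascript
// Merge segmented transcript text into one clean string for the
// summarization prompt. The segment shape ({ text: '…' }) is an
// assumption about the transcript API's response.
function mergeTranscript(segments) {
  return segments
    .map((s) => (s.text || '').trim()) // tolerate missing text fields
    .filter((t) => t.length > 0)       // drop empty segments
    .join(' ')
    .replace(/\s+/g, ' ');             // collapse whitespace left by segment breaks
}
```

Merging first matters for quality: GPT-4o mini summarizes a continuous passage far better than a list of two-second fragments.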
by Robert Breen
Automatically research new leads in your target area, structure the results with AI, and append them to Google Sheets — all orchestrated in n8n.

## ✅ What this template does
- Uses Perplexity to research businesses (coffee shops in this example) with company name + email
- Cleans and structures the output into proper JSON using OpenAI
- Appends the new leads directly to Google Sheets, skipping duplicates

> Trigger: Manual — "Start Workflow"

## 👤 Who's it for
- **Sales & marketing teams** who need to prospect local businesses
- **Agencies** running outreach campaigns
- **Freelancers** and consultants looking to automate lead research

## ⚙️ How it works
1. **Set Location:** Define your target area (e.g., Hershey, PA)
2. **Get Current Leads:** Pull existing data from your Google Sheet to avoid duplicates
3. **Research Leads:** Query Perplexity for 20 businesses, excluding already-scraped ones
4. **Write JSON:** OpenAI converts the Perplexity output into structured Company/Email arrays
5. **Split & Merge:** Align Companies with Emails row by row
6. **Send Leads to Google Sheets:** Append or update leads in your sheet

## 🛠️ Setup instructions
Follow these sticky-note setup steps (already included in the workflow):

1. **Connect Google Sheets (OAuth2)**
   - In n8n → Credentials → New → Google Sheets (OAuth2)
   - Sign in with your Google account and grant access
   - In the Google Sheets node, select your Spreadsheet and Worksheet
   - Example sheet: https://docs.google.com/spreadsheets/d/1MnaU8hSi8PleDNVcNnyJ5CgmDYJSUTsr7X5HIwa-MLk/edit#gid=0
2. **Connect Perplexity (API key)**
   - Sign in at https://www.perplexity.ai/account
   - Generate an API key: https://docs.perplexity.ai/guides/getting-started
   - In n8n → Credentials → New → Perplexity API, paste your key
3. **Connect OpenAI (API key)**
   - In n8n → Credentials → New → OpenAI API
   - Paste your OpenAI API key
   - In the OpenAI Chat Model node, select your credential and a vision-capable model (e.g., gpt-4o-mini, gpt-4o)

## 🔧 Requirements
- A free Google account
- An OpenAI API key (https://platform.openai.com)
- A Perplexity API key (https://docs.perplexity.ai)
- An n8n self-hosted or cloud instance

## 🎨 How to customize
- Change the search area in the Set Location node
- Modify the Perplexity system prompt to target different business types (e.g., gyms, salons, restaurants)
- Expand the Google Sheet schema to include more fields (phone, website, etc.)

## 📬 Contact
Need help customizing this (e.g., filtering by campaign, sending reports by email, or formatting your Google Sheet)?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
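The Split & Merge step described above amounts to a row-wise zip of the two arrays that the Write JSON step produces. A minimal sketch, where the `Company`/`Email` field names are assumptions about the sheet schema:

```javascript
// Align the Company and Email arrays row by row, producing one object
// per lead ready to append to Google Sheets. Unmatched tail entries
// (an email with no company, or vice versa) are dropped.
function zipLeads(companies, emails) {
  const rows = [];
  const n = Math.min(companies.length, emails.length);
  for (let i = 0; i < n; i++) {
    rows.push({ Company: companies[i], Email: emails[i] });
  }
  return rows;
}
```

Truncating to the shorter array is a deliberately conservative choice for a sketch: it avoids pairing an email with the wrong company when the AI returns arrays of unequal length.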
by Robert Breen
## 🧑💻 Description
This workflow integrates Slack with an OpenAI chat agent to create a fully interactive chatbot inside your Slack workspace. It works in a bidirectional loop:
1. A user sends a message in Slack.
2. The workflow captures the message and logs it back into Slack (so you can monitor what's being passed to the agent).
3. The message is sent to an OpenAI-powered agent (e.g., GPT-4o).
4. The agent generates a response.
5. The response is formatted and posted back to Slack in the same channel or DM thread.

This allows you to monitor, test, and interact with the agent directly from Slack.

## 📌 Use Cases
- **Team Support Bot:** Provide quick AI-generated answers to FAQs in Slack.
- **E-commerce Example:** The default prompt makes the bot act like a store assistant, but you can swap in your own domain knowledge.
- **Conversation Monitoring:** Log both user and agent messages in Slack for visibility and review.
- **Custom AI Agents:** Extend with RAG, external APIs, or workflow automations for specialized tasks.

## ⚙️ Setup Instructions
1. **OpenAI Setup**
   - Sign up at OpenAI.
   - Generate an API key from the API Keys page.
   - In n8n → Credentials → New → OpenAI → paste your key and save.
   - In the OpenAI Chat node, select your credential and configure the system prompt. Example included: "You are an ecommerce bot. Help the user as if you were working for a mock store." You can edit this prompt to fit your use case (support bot, HR assistant, knowledge retriever, etc.).
2. **Slack Setup**
   - Go to Slack API Apps → click Create New App.
   - Under OAuth & Permissions, add the following scopes:
     - Read: `channels:history`, `groups:history`, `im:history`, `mpim:history`, `channels:read`, `groups:read`, `users:read`
     - Write: `chat:write`
   - Install the app to your workspace, then copy the Bot User OAuth Token.
   - In n8n → Credentials → New → Slack OAuth2 API → paste the token and save.
   - In the Slack nodes (e.g., Send User Message in Slack, Send Agent's Response in Slack), select your credential and specify the Channel ID or User ID to send/receive messages.

## 🎛️ Customization Guidance
- **Change Agent Behavior:** Update the system message in the Chat Agent node.
- **Filter Channels:** Limit listening to a specific channel by adjusting the Slack node's Channel ID.
- **Format Responses:** The Format Response node shows how to structure agent replies before posting back to Slack.
- **Extend Workflows:** Add integrations with databases, CRMs, or APIs for dynamic, data-driven responses.

## 🔄 Workflow Flow (Simplified)
Slack User Message → Send User Message in Slack → Chat Agent → Format Response → Send Agent Response in Slack

## 📬 Contact
Need help customizing this workflow (e.g., multi-channel listening, advanced AI logic, or external integrations)?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
by Meelioo
## How it works
This workflow creates an intelligent document assistant called "Mookie" that can answer questions based on your uploaded documents. Here's how it operates:
- **Document Ingestion:** The system can automatically load PDF files from Google Drive or accept PDFs uploaded directly through Telegram, then processes and stores them in a PostgreSQL vector database using Mistral embeddings.
- **Smart Retrieval:** When users ask questions via Telegram or a web chat interface, the AI agent searches the stored documents for relevant information using vector similarity matching.
- **Contextual Responses:** Using GPT-4 and the retrieved document context, Mookie provides accurate answers based solely on the ingested documents, avoiding hallucination by refusing to answer questions not covered in the stored materials.
- **Memory & Conversation:** The system maintains conversation history for each user, allowing natural follow-up questions and contextual discussions.

## Set up steps
Estimated setup time: 30-45 minutes. You'll need to configure several external services and credentials:
1. Set up a PostgreSQL database with the PGVector extension for document storage.
2. Create accounts and API keys for Azure OpenAI (GPT-4), Mistral Cloud (embeddings), and Google Drive access, or connect your own LLMs if you don't have these credentials.
3. Configure a Telegram bot and obtain API credentials for chat functionality.
4. Update the webhook URLs throughout the workflow to match your n8n instance.
5. Test the document ingestion pipeline with sample PDFs.
6. Verify that the chat interfaces (both Telegram and web) are responding correctly.

> The workflow includes approval mechanisms for PDF ingestion and handles both automated bulk processing from Google Drive and real-time document uploads through Telegram. Read the sticky notes provided in the template for clear instructions.
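The vector similarity matching at the heart of the Smart Retrieval step can be shown directly. In the workflow the comparison happens inside PGVector, but the underlying math is just cosine similarity between the question's embedding and each stored chunk's embedding; this sketch uses tiny illustrative vectors in place of real Mistral embeddings.

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored chunks by similarity to the query and keep the top k.
// In the workflow, PGVector performs this ranking in the database.
function topK(queryVec, chunks, k = 3) {
  return chunks
    .map((c) => ({ ...c, score: cosineSimilarity(queryVec, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The top-ranked chunks become the context passed to GPT-4, which is what lets Mookie ground its answers in the ingested documents and refuse questions the documents don't cover.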