by isaWOW
**Description**

Automatically guide students through personalized learning sessions using AI-powered intent classification, book-based knowledge retrieval (RAG), and full session logging — all without writing a single line of code.

**What this workflow does**

Nathan is an intelligent AI tutor chatbot that detects what the student is trying to do — greeting, asking a question, answering, introducing a topic, or going off-topic — and responds accordingly using the right teaching strategy every time.

Key features:
- **Smart intent classification** — DeepSeek LLM identifies 5 intents: *greeting, topic, answer, question, random*
- **Dynamic system prompts** — fetches the right teaching prompt from Google Sheets based on the classified intent
- **RAG-powered answers** — GPT-4o-mini retrieves relevant content from a Pinecone book vector store to give accurate, book-grounded responses
- **Full session memory** — maintains conversation context with a sliding-window buffer for natural multi-turn dialogue
- **Automatic Q&A logging** — every exchange is saved to Google Sheets for teacher review and auditing
- **Public chat interface** — students can access Nathan directly via browser without any login

**How it works**

**Step 1 — User sends a message**
The public chat trigger receives the student's input and simultaneously fires two paths: one to the Intent Classifier and one to the Merge node for later combining.

**Step 2 — DeepSeek classifies intent**
The Intent Classifier agent (powered by DeepSeek LLM + sliding-window memory) reads the message and outputs exactly one word: greeting / topic / answer / question / random.

**Step 3 — Fetch matching system prompt**
The classified intent is used to look up the correct system prompt from the pmt tab in Google Sheets (filtered by the Output column). Each intent maps to a unique pedagogical strategy.

**Step 4 — Merge and aggregate**
The fetched prompt and original user input are merged and aggregated into a single payload, ready to be passed to the AI teacher.
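The classifier contract described above (one lowercase word that must match the pmt tab's Output column) can be sketched as follows. This is an illustrative sketch, not the actual n8n node code; the function names and the fallback-to-random behavior are assumptions.

```python
# Illustrative sketch of the Intent Classifier contract.
VALID_INTENTS = {"greeting", "topic", "answer", "question", "random"}

CLASSIFIER_SYSTEM_PROMPT = (
    "You are an intent classifier for a tutoring chatbot. "
    "Classify the student's message as one of: greeting, topic, answer, question, random. "
    "Respond with only one word from this list."
)

def normalize_intent(raw_reply: str) -> str:
    """Strip whitespace/punctuation and lowercase the model output so it
    matches the lowercase values in the pmt tab's Output column."""
    intent = raw_reply.strip().strip(".!").lower()
    # Fall back to 'random' if the model returned anything unexpected,
    # so the Google Sheets lookup never silently comes back empty.
    return intent if intent in VALID_INTENTS else "random"

print(normalize_intent("Greeting"))  # → greeting
```

Normalizing like this guards against the two troubleshooting cases noted later: capitalized output and extra text around the keyword.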
**Step 5 — AI Teacher Agent generates response (RAG)**
GPT-4o-mini receives the system prompt and full session memory (last 10 turns), and uses the Pinecone book vector store as a retrieval tool (topK=3). It generates a pedagogically appropriate, book-grounded response.

**Step 6 — Log and return**
The session ID, user message, and AI response are appended to the Preservation tab in Google Sheets. The final response is formatted and returned to the student in the chat interface.

**Setup requirements**

Services you'll need:
- n8n instance (self-hosted or n8n Cloud)
- DeepSeek API account
- OpenAI API account (for GPT-4o-mini and embeddings)
- Pinecone account (free tier works)
- Google account with Sheets access

Estimated setup time: 20–30 minutes

**Step-by-step setup**

1. **Add credentials in n8n**
Go to Settings → Credentials and add:
- DeepSeek API key → connect to the DeepSeek LLM node
- OpenAI API key → connect to both the GPT-4o-mini LLM and OpenAI Embeddings nodes
- Google Sheets OAuth2 → connect to the Fetch System Prompt and Log Q&A to Sheets nodes
- Pinecone API key → connect to the Book Knowledge Base node

2. **Set up your Google Sheet**
Create a new Google Sheet with two tabs.

Tab 1 — pmt (prompt library):

| Output | Prompt |
| --------- | ---------------------------------- |
| greeting | You are Nathan, a friendly tutor… |
| topic | The student wants to learn about… |
| answer | The student has just answered… |
| question | The student is asking a question… |
| random | Gently redirect the conversation… |

Tab 2 — Preservation (Q&A log):

| Session | User Msg | AI Response |
| ------- | -------- | ----------- |
| (auto-filled) | (auto-filled) | (auto-filled) |

3. **Replace placeholder Sheet IDs**
In the workflow, find all nodes that contain YOUR_GOOGLE_SHEET_ID and replace it with your actual Google Sheets document ID (found in the URL: docs.google.com/spreadsheets/d/YOUR_ID_HERE). Also replace YOUR_PRESERVATION_SHEET_GID with the gid value of your Preservation tab.

4. **Set up the Pinecone vector store**
- Create a Pinecone index named rag-vector-db-book-quiz (or rename the node to match your own index)
- Upload and embed your book/study material using the OpenAI Embeddings model
- The node is pre-configured to retrieve topK=3 chunks per query

5. **Activate and test**
Enable the workflow, open the public chat URL, and send a test message like: "Hi, I want to learn about Newton's laws"

You should see:
- The intent classified as topic
- The matching prompt fetched from Sheets
- GPT-4o-mini respond with a book-grounded explanation
- A new row added to the Preservation sheet

**Customization options**

- **Swap the classifier model** — Replace DeepSeek with any chat LLM (GPT-4o-mini, Gemini, Claude) in the Intent Classifier node; the prompt is model-agnostic.
- **Add new intent categories** — Add a new row to the pmt tab with your intent keyword and system prompt, then extend the classifier's system message to recognize the new category.
- **Adjust retrieval depth** — Change topK in the Pinecone node from 3 to 5 or 10 to retrieve more book chunks per query; useful for complex or detailed topics.
- **Change memory length** — The Classifier Memory and Teacher Session Memory both use a context window of 10 turns. Lower this for faster responses, increase it for longer multi-turn sessions.
- **Embed multiple books** — Upload chapters from different books into the same Pinecone index. The agent automatically retrieves the most relevant chunks regardless of source.

**Troubleshooting**

- **Intent classifier returns unexpected output** — Make sure the DeepSeek system prompt ends with the exact instruction: "Respond with only one word from this list." Any extra text in the response will break the Google Sheets lookup.
- **Google Sheets lookup returns no results** — Verify that the Output column in the pmt tab contains lowercase values matching exactly what DeepSeek outputs (e.g., greeting, not Greeting).
- **Pinecone returns empty results** — Confirm your index name matches the one in the node, your documents are embedded using the same OpenAI Embeddings model, and the index is not empty.
- **Session memory not persisting across turns** — The session key is pulled from $('Chat Trigger').item.json.sessionId. Ensure you are using the same browser session (do not open in incognito) and that the chat trigger is set to public mode.
- **Preservation sheet not updating** — Check that the Preservation tab GID is correct and that your Google Sheets OAuth credentials have edit access to the document.

**Support**

Need help setting this up or want a custom version for your school or platform?
📧 Email: info@isawow.com
🌐 Website: https://isawow.com/
by Moe Ahad
**How it works**
1. The user enters the name of a city for which current weather information will be gathered
2. Custom Python code processes the weather data and generates a custom email about the weather
3. An AI agent further customizes the email and adds a related joke about the weather
4. The recipient gets the custom email for the city

**Set up instructions**
1. Enter a city to get the weather data
2. Add the OpenWeather API and replace <your_API_key> with your actual API key
3. Add your OpenAI API key in the OpenAI Chat Model node
4. Add your Gmail credentials and specify a recipient for the custom email
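A minimal sketch of the kind of processing the custom Python step performs. Field names follow the OpenWeatherMap current-weather JSON (`main.temp`, `weather[0].description`); the email wording is illustrative, not the template's exact text.

```python
# Sketch of the weather-to-email step (illustrative, not the template's code).
def compose_weather_email(city: str, data: dict) -> str:
    temp_c = data["main"]["temp"]            # assumes units=metric in the API call
    description = data["weather"][0]["description"]
    return (
        f"Subject: Today's weather in {city}\n\n"
        f"Hi! It is currently {temp_c:.0f}°C in {city} "
        f"with {description}. Have a great day!"
    )

sample = {"main": {"temp": 21.4}, "weather": [{"description": "scattered clouds"}]}
print(compose_weather_email("Berlin", sample))
```

In the real workflow, the AI agent then rewrites this draft and appends a weather-related joke before Gmail sends it.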
by Supira Inc.
💡 **How It Works**

This workflow automatically detects new YouTube uploads, retrieves their transcripts, summarizes them in Japanese using GPT-4o mini, and posts the results to a selected Slack channel. It's ideal for teams who follow multiple creators, internal training playlists, or corporate webinars and want concise Japanese summaries in Slack without manual work.

Here's the flow at a glance:
1. YouTube RSS Trigger — monitors a specific channel's RSS feed.
2. HTTP Request via RapidAPI — fetches the video transcript (supports both English & Japanese).
3. Code Node — merges segmented transcript text into one clean string.
4. OpenAI (GPT-4o-mini) — generates a natural-sounding, 3-line Japanese summary.
5. Slack Message — posts the title, link, and generated summary to #youtube-summary.

⚙️ **Requirements**
- n8n (v1.60 or later)
- RapidAPI account + youtube-transcript3 API key
- OpenAI API key (GPT-4o-mini recommended)
- Slack workspace with OAuth connection

🧩 **Setup Instructions**
1. Replace YOUR_RAPIDAPI_KEY_HERE with your own RapidAPI key.
2. Add your OpenAI credential under Credentials → OpenAI.
3. Set your target Slack channel (e.g., #youtube-summary).
4. Enter the YouTube channel ID in the RSS Trigger node.
5. Activate the workflow and test with a recent video.

🎛️ **Customization Tips**
- Modify the OpenAI prompt to change summary length or tone.
- Duplicate the RSS Trigger for multiple channels → merge before summarization.
- Localize Slack messages using Japanese or English templates.

🚀 **Use Case**
Perfect for marketing teams, content curators, and knowledge managers who want to stay updated on YouTube content in Japanese without leaving Slack.
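The Code node's job, flattening the transcript API's list of timed segments into one clean string, can be sketched like this. The segment field name (`text`) is an assumption about the youtube-transcript3 response shape.

```python
# Sketch of the transcript-merging Code node (illustrative).
def merge_transcript(segments: list[dict]) -> str:
    # Join segment texts, trim stray whitespace, and drop empty chunks.
    parts = [seg.get("text", "").strip() for seg in segments]
    return " ".join(p for p in parts if p)

segments = [
    {"text": "Hello everyone,"},
    {"text": " welcome back to the channel."},
    {"text": ""},
    {"text": "Today we cover n8n."},
]
print(merge_transcript(segments))
# → Hello everyone, welcome back to the channel. Today we cover n8n.
```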
by 荒城直也
**Weather Monitoring Across Multiple Cities with OpenWeatherMap, GPT-4o-mini, and Discord**

This workflow provides an automated, intelligent solution for global weather monitoring. It goes beyond simple data fetching by calculating a custom "Comfort Index" and using AI to provide human-like briefings and activity recommendations. Whether you are managing remote teams or planning travel, this template centralizes complex environmental data into actionable insights.

**Who's it for**
- **Remote Team Leads:** Keep an eye on environmental conditions for team members across different time zones.
- **Frequent Travelers & Event Planners:** Monitor weather risks and comfort levels for multiple destinations simultaneously.
- **Smart Home/Life Enthusiasts:** Receive daily morning briefings on air quality and weather alerts directly in Discord.

**How it works**
1. **Schedule Trigger:** The workflow runs every 6 hours (customizable) to ensure data is up to date.
2. **Data Collection:** It loops through a list of cities, fetching current weather, 5-day forecasts, and Air Quality Index (AQI) data via the OpenWeatherMap node and HTTP Request node.
3. **Smart Processing:** A Code node calculates a "Comfort Index" (based on temperature and humidity) and flags specific alerts (e.g., extreme heat, high winds, or poor AQI).
4. **AI Analysis:** The OpenAI node (using GPT-4o-mini) analyzes the aggregated data to compare cities and recommend the best location for outdoor activities.
5. **Conditional Routing:** An If node checks for active weather alerts. Urgent alerts are routed to a specific Discord notification, while routine briefings are sent normally.
6. **Archiving:** All processed data is appended to Google Sheets for historical tracking and future analysis.

**How to set up**
1. **Credentials:** Connect your OpenWeatherMap, OpenAI, Discord (Webhook), and Google Sheets accounts.
2. **Locations:** Open the 'Set Monitoring Locations' node and edit the JSON array with the cities, latitudes, and longitudes you wish to track.
3. **Google Sheets:** Configure the 'Log to Google Sheets' node with your specific Spreadsheet ID and Sheet Name.
4. **Discord:** Ensure your Webhook URL is correctly pasted into the Discord nodes.

**Requirements**
- **OpenWeatherMap API Key** (free tier is sufficient)
- **OpenAI API Key** (configured for GPT-4o-mini)
- **Discord Webhook URL**
- **Google Sheet** with headers ready for logging

**How to customize**
- **Adjust Alert Thresholds:** Modify the logic in the 'Process and Analyze Data' Code node to change what triggers a "High Wind" or "Extreme Heat" alert.
- **Refine AI Persona:** Edit the System Prompt in the 'AI Weather Analysis' node to change the tone or focus of the weather briefing.
- **Change Frequency:** Adjust the Schedule Trigger to run once a day or every hour depending on your needs.
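The template computes a custom "Comfort Index" from temperature and humidity, but its exact formula isn't published in the description, so the scoring function below is a hypothetical stand-in; the alert thresholds are likewise examples of the kind of logic the 'Process and Analyze Data' Code node holds.

```python
# Hypothetical Comfort Index plus alert-flagging logic (illustrative only).
def comfort_index(temp_c: float, humidity_pct: float) -> float:
    """Hypothetical 0-100 score: best near 21 °C and 45% humidity."""
    score = 100 - abs(temp_c - 21) * 3 - abs(humidity_pct - 45) * 0.5
    return max(0.0, min(100.0, score))

def alert_flags(temp_c: float, wind_ms: float, aqi: int) -> list[str]:
    # Example thresholds only; tune them in the Code node.
    flags = []
    if temp_c >= 35:
        flags.append("extreme_heat")
    if wind_ms >= 15:
        flags.append("high_wind")
    if aqi >= 4:  # OpenWeatherMap's AQI scale runs 1 (good) to 5 (very poor)
        flags.append("poor_aqi")
    return flags

print(comfort_index(21, 45))   # → 100.0
print(alert_flags(38, 4, 5))   # → ['extreme_heat', 'poor_aqi']
```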
by Maksudur Rahman
**Who's it for**

This workflow is for content creators, newsletter editors, and AI enthusiasts who want to automate the heavy lifting of news gathering. It acts as an autonomous research agent that monitors industry sources and drafts high-quality summaries for your review.

**How it works**

This workflow serves as a "human-in-the-loop" publishing agent:
1. **Ingestion & Normalization:** It monitors 15+ sources including RSS feeds (TechCrunch), Reddit (r/OpenAI), and company blogs (Anthropic, Google). It normalizes these diverse inputs into a standard format.
2. **Filtering & Curating:** Using OpenAI (GPT-4o), it filters out noise to identify only high-impact stories. It then selects the top 4 stories based on relevance to a tech-savvy audience.
3. **Drafting:** It writes a complete newsletter, including a catchy subject line, an intro hook, deep-dive summaries, and a "quick hits" list. It even generates viral short-form video scripts based on the news.
4. **Slack Approval:** The draft is sent to Slack. You can approve it immediately or reply with feedback (e.g., "Make the tone punchier"), prompting the AI to revise the draft before generating the final file.

**How to set up**
1. **Credentials:** Connect your OpenAI, Anthropic, Google Sheets, and Slack accounts in n8n.
2. **Google Sheets:** Create a sheet with columns for Title, URL, Source, Published, and Content. Paste the Sheet ID into the "Log to Google Sheets" and "Get_Stories" nodes.
3. **Slack:** Update the Slack nodes with your specific Channel ID where you want to receive drafts.

**Requirements**
- **n8n version:** 1.0+ (requires LangChain nodes)
- **LLM API keys:** OpenAI and Anthropic
- **Google Sheets:** for logging processed history
- **Slack:** for the approval interface

**How to customize**
- **Change sources:** Edit the RSS Trigger nodes to track Finance, SaaS, or Crypto news instead of AI.
- **Adjust tone:** Open the stories_prompt node to change the persona of the AI editor (e.g., from "Professional" to "Witty").
- **Publishing:** Connect the final output to a CMS like WordPress or Ghost to publish automatically upon approval.
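The normalization step maps items from very different sources (RSS, Reddit, blogs) onto the standard record the Google Sheet expects (Title, URL, Source, Published, Content). A sketch, with the raw-item field names being assumptions about typical feed shapes:

```python
# Illustrative normalizer for heterogeneous feed items.
def normalize_item(raw: dict, source: str) -> dict:
    return {
        "Title": raw.get("title", "").strip(),
        "URL": raw.get("link") or raw.get("url", ""),
        "Source": source,
        "Published": raw.get("pubDate") or raw.get("created_utc", ""),
        "Content": raw.get("contentSnippet") or raw.get("selftext", ""),
    }

rss_item = {"title": "New model released ", "link": "https://example.com/a",
            "pubDate": "2025-01-01", "contentSnippet": "Summary here"}
print(normalize_item(rss_item, "TechCrunch")["Title"])  # → New model released
```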
by Robert Breen
🧑‍💻 **Description**

This workflow integrates Slack with an OpenAI Chat Agent to create a fully interactive chatbot inside your Slack workspace. It works in a bidirectional loop:
1. A user sends a message in Slack.
2. The workflow captures the message and logs it back into Slack (so you can monitor what's being passed into the agent).
3. The message is sent to an OpenAI-powered agent (e.g., GPT-4o).
4. The agent generates a response.
5. The response is formatted and posted back to Slack in the same channel or DM thread.

This allows you to monitor, test, and interact with the agent directly from Slack.

📌 **Use Cases**
- **Team Support Bot:** Provide quick AI-generated answers to FAQs in Slack.
- **E-commerce Example:** The default prompt makes the bot act like a store assistant, but you can swap in your own domain knowledge.
- **Conversation Monitoring:** Log both user and agent messages in Slack for visibility and review.
- **Custom AI Agents:** Extend with RAG, external APIs, or workflow automations for specialized tasks.

⚙️ **Setup Instructions**

1️⃣ **OpenAI Setup**
1. Sign up at OpenAI.
2. Generate an API key from the API Keys page.
3. In n8n → Credentials → New → OpenAI → paste your key and save.
4. In the OpenAI Chat node, select your credential and configure the system prompt. Example included: "You are an ecommerce bot. Help the user as if you were working for a mock store." You can edit this prompt to fit your use case (support bot, HR assistant, knowledge retriever, etc.).

2️⃣ **Slack Setup**
1. Go to Slack API Apps → click Create New App.
2. Under OAuth & Permissions, add the following scopes:
   - Read: channels:history, groups:history, im:history, mpim:history, channels:read, groups:read, users:read
   - Write: chat:write
3. Install the app to your workspace → copy the Bot User OAuth Token.
4. In n8n → Credentials → New → Slack OAuth2 API → paste the token and save.
5. In the Slack nodes (e.g., Send User Message in Slack, Send Agent's Response in Slack), select your credential and specify the Channel ID or User ID to send/receive messages.

🎛️ **Customization Guidance**
- **Change Agent Behavior:** Update the system message in the Chat Agent node.
- **Filter Channels:** Limit listening to a specific channel by adjusting the Slack node's Channel ID.
- **Format Responses:** The Format Response node shows how to structure agent replies before posting back to Slack.
- **Extend Workflows:** Add integrations with databases, CRMs, or APIs for dynamic data-driven responses.

🔄 **Workflow Flow (Simplified)**

Slack User Message → Send User Message in Slack → Chat Agent → Format Response → Send Agent Response in Slack

📬 **Contact**

Need help customizing this workflow (e.g., multi-channel listening, advanced AI logic, or external integrations)?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
by Anna Bui
🎥 **AI Content Generator: Transcript to Video & Image**

Transform meeting transcripts into engaging multi-format content with AI-powered automation. Perfect for educators, consultants, and content creators who record sessions and want to repurpose them into social media posts, videos, and images without manual work.

**How it works**
1. A chat interface triggers the AI orchestrator when you request content creation
2. Fetches your most recent meeting transcript from Fathom
3. AI analyzes the transcript and extracts key insights and breakthrough moments
4. Generates written post content and creates a Google Doc automatically
5. Creates detailed video generation prompts and sends them to a video API (Luma/Runway)
6. Generates image prompts and creates social media graphics via DALL-E
7. Returns all assets: written content, video URL, and image file ready to use

**How to use**
1. Connect your Fathom account to retrieve meeting transcripts
2. Set up the three required subworkflows: Text to Video, Text to Image, and Transcript to Content
3. Configure your OpenAI credentials for AI processing
4. Simply chat: "Create content from my latest session - video and image"
5. Review and customize the generated content as needed

**Requirements**
- Fathom account with recorded meetings or sessions
- OpenAI API account (GPT-4 recommended for best results)
- Google Docs access for content storage
- Video generation API (Luma AI or Runway ML) for video creation
- Three subworkflows must be created separately (see setup notes)

**Good to know**
- Video generation typically costs $0.50–$2.00 per video depending on your provider
- The workflow automatically processes the most recent 7 days of Fathom transcripts
- AI agents use ~5,000–10,000 tokens per complete content generation
- Subworkflows need to be set up once before using this main workflow
- Videos take 2–5 minutes to generate after the prompt is created

Need Help? Join the Discord or ask in the Forum! Happy Creating! 🚀
by Meelioo
**How it works**

This workflow creates an intelligent document assistant called "Mookie" that can answer questions based on your uploaded documents. Here's how it operates:
- **Document Ingestion:** The system can automatically load PDF files from Google Drive or accept PDFs uploaded directly through Telegram, then processes and stores them in a PostgreSQL vector database using Mistral embeddings
- **Smart Retrieval:** When users ask questions via Telegram or a web chat interface, the AI agent searches through the stored documents to find relevant information using vector similarity matching
- **Contextual Responses:** Using GPT-4 and the retrieved document context, Mookie provides accurate answers based solely on the ingested documents, avoiding hallucination by refusing to answer questions not covered in the stored materials
- **Memory & Conversation:** The system maintains conversation history for each user, allowing for natural follow-up questions and contextual discussions

**Set up steps**

Estimated setup time: 30–45 minutes. You'll need to configure several external services and credentials:
1. Set up a PostgreSQL database with the PGVector extension for document storage
2. Create accounts and API keys for Azure OpenAI (GPT-4), Mistral Cloud (embeddings), and Google Drive access (or connect your own LLMs if you don't have these credentials)
3. Configure a Telegram bot and obtain API credentials for chat functionality
4. Update webhook URLs throughout the workflow to match your n8n instance
5. Test the document ingestion pipeline with sample PDFs
6. Verify the chat interfaces (both Telegram and web) are responding correctly

> The workflow includes approval mechanisms for PDF ingestion and handles both automated bulk processing from Google Drive and real-time document uploads through Telegram. Read the sticky notes provided in the template for clear instructions.
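The "vector similarity matching" step above boils down to comparing the question's embedding against the stored chunk embeddings; pgvector does this in SQL with a distance operator, but the same idea in plain Python looks like this (toy 2-D vectors for illustration):

```python
import math

# Toy illustration of similarity search over (text, embedding) chunks.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec: list[float], chunks: list[tuple], k: int = 3) -> list[str]:
    """chunks: list of (text, embedding). Returns the k most similar texts."""
    ranked = sorted(chunks, key=lambda c: cosine_similarity(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

chunks = [("about cats", [1.0, 0.0]), ("about dogs", [0.0, 1.0]), ("cats and dogs", [0.7, 0.7])]
print(top_k([1.0, 0.1], chunks, k=2))  # → ['about cats', 'cats and dogs']
```

In the real workflow the embeddings come from Mistral and the search runs inside PostgreSQL, so only the retrieved texts ever reach GPT-4.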
by Cheng Siong Chin
**How It Works**

This workflow automates comprehensive enterprise risk assessment and mitigation planning for organizations managing complex operational, financial, and compliance risks. Designed for risk managers, internal audit teams, and executive leadership, it solves the challenge of continuously evaluating multi-dimensional risks, validating threat severity, and coordinating appropriate mitigation strategies across diverse business functions.

The system triggers on-demand or scheduled assessments, generates sample credential data for testing, and deploys a Coordination Agent to orchestrate specialized risk evaluations through parallel AI agents: Credential Validation verifies identity risks, Credential Verification confirms data accuracy, and Risk Assessment evaluates threat levels. It then routes findings by severity (critical/high/medium/low) and merges outputs into consolidated reports.

By combining multi-agent risk analysis with intelligent prioritization and unified reporting, organizations achieve 360-degree risk visibility, reduce assessment cycles from weeks to hours, ensure consistent evaluation frameworks, and enable proactive mitigation before risks materialize into losses.
**Setup Steps**
1. Connect the Manual Trigger for on-demand assessments, or configure the Schedule Trigger for routine evaluations
2. Configure risk data sources
3. Add AI model API keys to the Coordination Agent and all specialized agents
4. Define risk scoring criteria and severity thresholds in the agent prompts, aligned with your company's risk appetite
5. Configure routing conditions for each risk level with appropriate handling workflows
6. Set up the reporting output format and distribution channels for consolidated risk reports

**Prerequisites**
Enterprise risk management system access, AI service accounts

**Use Cases**
Cybersecurity risk assessments, fraud risk evaluations, third-party vendor risk reviews

**Customization**
Modify agent prompts for industry-specific risk frameworks (NIST, ISO 31000, COSO)

**Benefits**
Reduces risk assessment time from weeks to hours, provides 360-degree risk visibility
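The severity routing described above can be sketched as a simple bucketing step that runs before the merge into the consolidated report. The field names and the fall-back rule for unknown severities are illustrative assumptions, not the template's actual logic.

```python
# Illustrative severity router for agent findings.
SEVERITY_ORDER = ["critical", "high", "medium", "low"]

def route_findings(findings: list[dict]) -> dict:
    routes = {level: [] for level in SEVERITY_ORDER}
    for f in findings:
        level = f.get("severity", "low").lower()
        # Unknown severities fall back to the "low" bucket rather than
        # being dropped, so nothing disappears from the report.
        routes.get(level, routes["low"]).append(f)
    return routes

findings = [
    {"id": 1, "severity": "critical", "risk": "exposed credentials"},
    {"id": 2, "severity": "medium", "risk": "stale vendor review"},
    {"id": 3, "severity": "unknown", "risk": "unclassified"},
]
routed = route_findings(findings)
print([f["id"] for f in routed["critical"]])  # → [1]
print([f["id"] for f in routed["low"]])       # → [3]
```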
by Olivier
This template qualifies and segments B2B prospects in ProspectPro using live web data and AI. It retrieves website content and search snippets, processes them with an LLM, and updates the prospect record in ProspectPro with qualification labels and tags. The workflow ensures each prospect is processed once and can be reused as a sub-flow or direct trigger.

✨ **Features**
- Automatically qualify B2B companies based on website and search content
- Flexible business logic: qualify and segment prospects by your own criteria
- Updates ProspectPro records with labels and tags
- Live data retrieval via Bedrijfsdata.nl RAG API nodes
- Easy customization through a flexible AI setup
- Extendable and modular: use as a trigger workflow or callable sub-flow

⚙ **Requirements**
- n8n instance or cloud workspace
- Bedrijfsdata.nl Verified Community Node installed
- Bedrijfsdata.nl developer account (14-day free trial, 500 credits)
- ProspectPro Verified Community Node installed
- ProspectPro account & API credentials (14-day free trial)
- OpenAI API credentials (or another LLM)

🔧 **Setup Instructions**
1. Import the template and set your credentials (Bedrijfsdata.nl, ProspectPro, OpenAI).
2. Connect to a trigger (e.g., ProspectPro "New website visitor") or call as a sub-workflow.
3. Adjust the qualification logic in the Qualify & Tag Prospect node to match your ICP.
4. Optional: extend tags, integrate with Slack/CRM, or add error logging.

🔐 **Security Notes**
- Prevents re-processing of the same prospect using tags
- Error branches included for invalid input or API failures
- LLM output validated via a structured parser

🧪 **Testing**
1. Run with a ProspectPro ID of a company with a known domain
2. Check the execution history and ProspectPro for enrichment results
3. Verify the updated tags and qualification label in ProspectPro

📌 **About Bedrijfsdata.nl**

Bedrijfsdata.nl operates the most comprehensive company database in the Netherlands. With real-time data on 3.7M+ businesses and AI-ready APIs, they help Dutch SMEs enrich CRM, workflows, and marketing automation.
- Website: https://www.bedrijfsdata.nl
- Developer Platform: https://developers.bedrijfsdata.nl
- API docs: docs.bedrijfsdata.nl
- Support: https://www.bedrijfsdata.nl/klantenservice
- Support hours: Monday–Friday, 09:00–17:00 CET

📌 **About ProspectPro**

ProspectPro is a B2B prospecting platform for Dutch B2B SMEs. It helps sales teams identify prospects, identify website visitors, and more.
- Website: https://www.prospectpro.nl
- Platform: https://mijn.prospectpro.nl
- API docs: https://www.docs.bedrijfsdata.nl
- Support: https://www.prospectpro.nl/klantenservice
- Support hours: Monday–Friday, 09:00–17:00 CET
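The "LLM output validated via a structured parser" step works roughly like this: parse the model's JSON reply and reject anything outside the expected schema, so malformed output falls into the error branch instead of corrupting the prospect record. The label set and field names below are illustrative assumptions, not ProspectPro's actual schema.

```python
import json

# Illustrative structured-parser validation for the LLM's qualification output.
ALLOWED_LABELS = {"qualified", "not_qualified", "needs_review"}

def parse_qualification(raw: str) -> dict:
    data = json.loads(raw)  # raises on malformed JSON → routed to the error branch
    if data.get("label") not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label: {data.get('label')}")
    if not isinstance(data.get("tags"), list):
        raise ValueError("tags must be a list")
    return {"label": data["label"], "tags": [str(t) for t in data["tags"]]}

print(parse_qualification('{"label": "qualified", "tags": ["saas", "nl"]}'))
```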
by Robin
💬 **Chat with Your Finances on Telegram**

Ask questions like "How much did I spend on food last month?" and get instant answers from your financial data — directly in Telegram. This workflow connects your Google Sheets expense log to an AI-powered query engine that understands natural language, resolves ambiguous categories and person names, and sends back a clean, formatted summary in Telegram. No spreadsheets. No dashboards. Just chat with your financial data.

⚡ **How it works**

Simply send a message like:
- How much did we spend on groceries last month?
- Show me a breakdown by category for this week.

The workflow automatically:
1. Parses your intent using GPT-4.1-nano
2. Resolves categories and person names via mapping tables
3. Filters and aggregates expense data from Google Sheets
4. Returns a formatted summary directly in Telegram

If a category or person is unknown, the workflow uses AI to suggest the closest match and asks the user to confirm via Telegram inline buttons. Confirmed aliases are saved automatically, making the system self-learning over time.

✨ **Key Features**
- **Natural language queries** in English and German
- **AI intent parsing** with GPT-4.1-nano (time range, person, category, filters)
- **Self-learning entity resolution** for categories and names
- **Interactive disambiguation** via Telegram inline buttons
- **Relative date support** (this week, last month, this year)
- **Group-by breakdowns** by category or person
- **Shared expense filtering** via a common_only flag
- **Multi-user support** via a Chat ID allowlist

📋 **Requirements**

To run this workflow you need:
- **Telegram Bot** (created via @BotFather)
- **OpenAI API key**
- **Four Google Sheets**

Required sheets:
- expenses
- expense_categories
- categories_mapping
- person_mapping

⏱ **Setup Time**

Approx. 15 minutes. All required configuration steps are documented directly inside the workflow using "Action Required" notes in each workflow layer.
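The "suggest the closest match" step for unknown categories can be sketched with simple fuzzy matching. The real workflow uses AI plus the mapping tables; `difflib` here only illustrates the idea, and the category list is made up.

```python
import difflib

# Illustrative closest-match suggestion for an unknown category alias.
KNOWN_CATEGORIES = ["groceries", "restaurants", "transport", "utilities"]

def suggest_category(user_input: str):
    matches = difflib.get_close_matches(user_input.lower(), KNOWN_CATEGORIES, n=1, cutoff=0.6)
    # The suggestion would be shown as a Telegram inline button; once the
    # user confirms, the alias is appended to the categories_mapping sheet.
    return matches[0] if matches else None

print(suggest_category("grocerys"))   # → groceries
print(suggest_category("xyzzy"))      # → None
```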
by Jinash Rouniyar
**PROBLEM**

Thousands of MCP servers exist and many are updated daily, making server selection difficult for LLMs. Current approaches require manually downloading and configuring servers, limiting flexibility. When multiple servers are pre-configured, LLMs get overwhelmed and confused about which server to use for specific tasks. This template enables dynamic server selection from a live PulseMCP directory of 5000+ servers.

**How it works**
1. A user query goes to an LLM that decides whether to use MCP servers to fulfill the query and provides reasoning for its decision.
2. Next, we fetch MCP servers from the PulseMCP API and format them as documents for reranking.
3. Finally, we use Contextual AI's reranker to score and rank all MCP servers based on our query and instructions.

**How to set up**
1. Sign up for a free trial of Contextual AI to obtain your CONTEXTUALAI_API_KEY.
2. Click the Variables option in the left panel and add a new environment variable, CONTEXTUALAI_API_KEY.
3. For the baseline model we use GPT-4.1 mini; you can find your OpenAI API key in the OpenAI dashboard.

**How to customize the workflow**
- We use a chat trigger to initiate the workflow. Feel free to replace it with a webhook or another trigger as required.
- We use OpenAI's GPT-4.1 mini as the baseline model and reranker prompt generator. You can swap out this section to use the LLM of your choice.
- We fetch 5000 MCP servers from the PulseMCP directory as a baseline number; feel free to adjust this parameter as required.
- We use Contextual AI's ctxl-rerank-v2-instruct-multilingual reranker model, which can be swapped with any of the following rerankers:
  1. ctxl-rerank-v2-instruct-multilingual
  2. ctxl-rerank-v2-instruct-multilingual-mini
  3. ctxl-rerank-v1-instruct

You can check out the Contextual AI blog to learn more about rerankers.

**Good to know**
- Contextual AI reranker (with full MCP docs): ~$0.035/query (≈$0.035 for reranking plus ~$0.0001 for OpenAI instruction generation)
- OpenAI baseline: ~$0.017/query
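The rerank-and-select step boils down to sorting servers by the relevance scores the reranker returns per document and keeping the top few. The server names and score values below are made up for illustration; the actual scores come from the Contextual AI reranker response.

```python
# Illustrative top-k selection over reranker relevance scores.
def select_top_servers(servers: list[str], scores: list[float], k: int = 3) -> list[str]:
    ranked = sorted(zip(servers, scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]

servers = ["filesystem-mcp", "github-mcp", "weather-mcp", "sqlite-mcp"]
scores = [0.12, 0.91, 0.05, 0.64]  # hypothetical per-document relevance scores
print(select_top_servers(servers, scores, k=2))  # → ['github-mcp', 'sqlite-mcp']
```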