by Guillaume Duvernay
Build a powerful AI chatbot that provides precise answers from your own company's knowledge base. This template provides a smart AI agent that connects to Lookio, a platform where you can easily upload your documents (from Notion, Jira, Slack, etc.) to create a dedicated knowledge source.

What makes this agent "smart" is its efficiency. It's configured to handle simple greetings and small talk on its own, only using its powerful (and paid) knowledge retrieval tool when a user asks a genuine question. This cost-saving logic makes it perfect for building production-ready internal helpdesks, customer support bots, or any application where you need accurate, source-based answers.

## Who is this for?

- **Customer support teams:** Build internal bots that help agents find answers instantly from your support documentation and knowledge bases.
- **Product & engineering teams:** Create a chatbot that can answer technical questions based on your product documentation or internal wikis.
- **HR departments:** Deploy an internal assistant that can answer employee questions based on company handbooks, policies, and procedures.
- **Any business with a knowledge base:** Provide an interactive, conversational way for employees or customers to access information locked away in your documents.

## What problem does this solve?

- **Provides accurate, grounded answers:** Ensures the AI agent's responses are based on your trusted, private documents, not the open internet, which prevents factual errors and "hallucinations."
- **Makes your knowledge accessible:** Transforms your static documents and knowledge bases into an interactive, 24/7 conversational resource.
- **Optimizes for cost and efficiency:** The agent is intelligent enough to handle simple small talk without making unnecessary API calls to your knowledge base, saving you credits and money.
- **Simplifies RAG setup:** Provides a ready-to-use template for a common RAG (Retrieval-Augmented Generation) pattern, with the complexities of document management and retrieval handled by the Lookio platform.

## How it works

1. **Build your knowledge base in Lookio:** The process starts on the Lookio platform. You upload your documents (from Notion, Jira, PDFs, etc.) and create an "assistant," which becomes your secure, queryable knowledge base.
2. **A user asks a question:** The n8n workflow begins when a user sends a message via the Chat Trigger.
3. **The agent makes a decision:** The AI Knowledge Agent, guided by its system prompt, analyzes the user's message. If it's a simple greeting like "hi," it responds directly. If it's a substantive question that requires specific knowledge, it decides to use its "Query knowledge base" tool.
4. **Query the Lookio knowledge base:** The agent passes the user's question to the HTTP Request Tool. This tool securely calls the Lookio API with your specific Assistant ID and API key.
5. **Deliver the fact-based answer:** Lookio searches your documents, synthesizes a precise answer, and sends it back to the workflow. The n8n agent then presents this answer to the user in the chat interface.

## Architectural approaches to RAG in n8n with Lookio

From a workflow perspective, integrating RAG natively in n8n involves orchestrating multiple nodes for data handling, embedding, and vector searches. This method provides high visibility and control over each step. An alternative architectural pattern is to use an external RAG service like Lookio, which consolidates these steps into a single HTTP Request node. This simplifies the workflow's structure by abstracting the multi-stage RAG process into one API endpoint.

## Setup

1. **Set up your Lookio assistant (prerequisite):** First, go to Lookio, sign up (you get 50 free credits), create an assistant with your documents, and from your settings, copy your API Key and Assistant ID.
2. **Configure the Lookio tool:** In the **Query knowledge base** (HTTP Request Tool) node, replace the `<your-assistant-id>` placeholder with your actual Assistant ID and the `<your-lookio-api-key>` placeholder with your actual API Key.
3. **Connect your AI model:** In the **OpenAI Chat Model** node, connect your AI provider credentials.
4. **Activate the workflow.** Your smart knowledge base agent is now live and ready to chat!

## Taking it further

- **Adjust retrieval quality:** In the **Query knowledge base** node, you can change the `query_mode` from `flash` (fastest) to `deep` for higher-quality but slightly slower answers, depending on your needs.
- **Add more tools:** Enhance your agent by giving it other tools, like a web search for when the internal knowledge base doesn't have an answer, or a calculator for performing computations.
- **Deploy it anywhere:** Swap the **Chat Trigger** for a **Slack** or **Discord** trigger to deploy your agent right where your team works.
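To make the tool configuration concrete, here is a minimal sketch of the request the HTTP Request Tool node assembles. The endpoint URL and all field names except `query_mode` (with its `flash`/`deep` values, which the template documents) are assumptions for illustration only — check Lookio's API documentation for the actual contract.

```javascript
// Sketch of the "Query knowledge base" tool's request.
// NOTE: the URL and the assistant_id/query field names are hypothetical;
// only query_mode ("flash" vs "deep") is documented by this template.
function buildLookioRequest(question, assistantId, apiKey, queryMode = "flash") {
  return {
    url: "https://api.lookio.example/v1/query", // hypothetical endpoint
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: {
      assistant_id: assistantId,
      query: question,
      query_mode: queryMode, // "flash" = fastest, "deep" = higher quality
    },
  };
}
```

Switching `queryMode` to `"deep"` is the one-line change described under "Taking it further."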
by Open Paws
## Who is this for

This workflow is designed for researchers, investigators, and analysts who need to:

- Build comprehensive profiles of organizations from public sources
- Research court cases, legislation, and government documents related to companies
- Verify company information across multiple authoritative databases
- Conduct due diligence or competitive intelligence research

It's ideal for animal advocacy organizations researching factory farms and slaughterhouses, investigative journalists exposing animal cruelty, legal teams building cases against animal agriculture companies, and activists conducting corporate campaigns.

## What it does

This multi-phase OSINT agent systematically researches organizations:

1. **Discovery phase:** Searches multiple databases to find all relevant records:
   - CourtListener for federal and state court cases
   - LegiScan for related legislation across all states
   - DocumentCloud for government documents and reports
   - Serper for web articles, news, and academic papers
2. **Verification phase:** Confirms discovered records actually relate to the target company (not companies with similar names)
3. **Prioritization phase:** Scores and selects the most relevant items for deep analysis
4. **Retrieval phase:** Fetches the full text of selected court opinions, bills, and documents
5. **Analysis phase:** Synthesizes findings into strategic insights
6. **Final verification phase:** Checks the finished report against sources for accuracy

The workflow prevents false positives by verifying company name matches, domain connections, and jurisdiction consistency.
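The verification phase above is performed by an LLM in the workflow itself, but the kinds of deterministic signals it is asked to weigh — name matches and domain connections — can be sketched in plain code. The record and target shapes below are illustrative, not the workflow's actual data model.

```javascript
// Illustrative sketch of false-positive filtering: strip legal suffixes
// and punctuation, then check name containment or a domain hit.
function normalizeName(name) {
  return name
    .toLowerCase()
    .replace(/[.,]/g, "")
    .replace(/\b(inc|llc|corp|co|ltd)\b/g, "")
    .trim();
}

function matchesCompany(record, target) {
  const nameHit = normalizeName(record.partyName || "").includes(
    normalizeName(target.companyName)
  );
  const domainHit = (record.urls || []).some((u) => u.includes(target.companyDomain));
  return nameHit || domainHit;
}
```

This is why "Tyson Foods, Inc." matches a target of "Tyson Foods" while a similarly spelled but unrelated party does not.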
## How to set up

1. Import the workflow into your n8n instance
2. Configure the required API credentials:
   - CourtListener API for court case searches
   - LegiScan API for legislation searches
   - Serper API for web searches
   - Jina AI API for article content extraction
   - OpenRouter API for AI analysis
3. Test with a well-known company to verify API connections
4. Activate the workflow

## Example usage

```json
{
  "companyName": "Tyson Foods",
  "companyDomain": "tysonfoods.com",
  "reportGoal": "Identify environmental violations, labor disputes, and regulatory actions in the past 5 years"
}
```

## Requirements

- CourtListener API key (free tier available)
- LegiScan API key
- Serper API key
- Jina AI API key
- OpenRouter API key

## How to customize

- **Add data sources:** Integrate SEC filings, USDA inspection reports, EPA violations databases, or OSHA records
- **Adjust verification criteria:** Modify the company matching logic for subsidiaries or DBAs (useful for tracking complex corporate structures in animal agriculture)
- **Focus research scope:** Limit searches to specific jurisdictions or time periods relevant to your campaign
- **Change output format:** Customize the final report structure for campaign briefings or legal filings
- **Add export options:** Connect to document generation tools for formatted reports to share with coalition partners
by Kevin Meneses
## What this workflow does

This workflow automates end-to-end stock analysis using real market data and AI:

1. Reads a list of stock tickers from Google Sheets
2. Fetches fundamental data (valuation, growth, profitability) and OHLCV price data from EODHD APIs
3. Computes key technical indicators (RSI, SMA 20/50/200, volatility, support & resistance)
4. Uses an AI model to generate:
   - Buy / Watch / Sell recommendation
   - Entry price, stop-loss, and take-profit levels
   - Investment thesis, pros & cons
   - Fundamental quality score (1–10)
5. Stores the final structured analysis back into Google Sheets

This creates a repeatable, no-code stock analysis pipeline ready for decision-making or dashboards.

## Data source

Market data is powered by EODHD APIs.

## How to configure this workflow

### 1. Google Sheets (input)

Create a sheet with a column called `ticker` (e.g. MSFT, AAPL, AMZN). Each row represents one stock to analyze.

### 2. EODHD APIs

Create an EODHD account, get your API token, and add it to the HTTP Request nodes as `api_token=YOUR_API_KEY`.

### 3. AI model

Configure your AI provider (OpenAI or a compatible model). The AI receives the fundamentals, technical indicators, and growth potential score, and returns structured JSON with recommendations and trade levels.

### 4. Google Sheets (output)

Results are appended to a Signals tab with:

- Signal (BUY / WATCH / SELL)
- Entry, Stop Loss, Take Profit
- Fundamental score (1–10)
- Investment thesis and risk notes
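Two of the indicators computed from the EODHD closes can be sketched in a few lines, the kind of logic that fits an n8n Code node. Note this RSI uses simple averages over the lookback window rather than Wilder's smoothing, so values can differ slightly from charting platforms.

```javascript
// Simple moving average over the last `period` closes.
function sma(closes, period) {
  if (closes.length < period) return null;
  const window = closes.slice(-period);
  return window.reduce((a, b) => a + b, 0) / period;
}

// RSI with simple (non-Wilder) averaging of gains and losses.
function rsi(closes, period = 14) {
  if (closes.length < period + 1) return null;
  let gains = 0, losses = 0;
  for (let i = closes.length - period; i < closes.length; i++) {
    const change = closes[i] - closes[i - 1];
    if (change > 0) gains += change; else losses -= change;
  }
  if (losses === 0) return 100; // all gains in the window
  const rs = gains / losses; // avgGain/avgLoss; the period divisor cancels
  return 100 - 100 / (1 + rs);
}
```

The workflow computes SMA at 20, 50, and 200 periods by calling the same function with different windows.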
by Joseph
# Reddit Lead Generation Automation (Batch Processing Version)

## Overview

Automatically find potential customers on Reddit who are actively looking for solutions like your product. This workflow analyzes your product website, generates targeted keywords, searches Reddit for relevant conversations, and filters them using AI to give you only the most qualified leads.

## What this workflow does

1. **Analyzes your product** – Takes your product URL and uses Firecrawl to understand what your product does and who it's for
2. **Generates smart keywords** – Uses AI to create 10 targeted keyword phrases based on problems your product solves
3. **Searches Reddit** – Finds 10 recent conversations for each keyword (100 total posts)
4. **Filters with AI** – Scores each conversation 1–10 and keeps only genuine leads (score 7+)
5. **Outputs a clean report** – Delivers a formatted markdown report with all qualified leads, sorted by relevance

## Perfect for

- Finding your first customers
- Product validation and market research
- Community management and engagement
- B2B/B2C lead generation
- Content creators looking for audience feedback
- Anyone wanting to find relevant Reddit discussions at scale

## How to use

1. Set up credentials:
   - Firecrawl API key
   - Reddit OAuth2 API credentials
   - AI provider (Gemini, OpenAI, or Claude)
2. Activate the workflow
3. Trigger it via the Form Trigger node
4. Get results: the workflow returns a complete markdown report with:
   - Total qualified leads found
   - Conversation titles and content
   - Subreddit links
   - Engagement metrics (upvotes, comments)
   - Lead scores and reasoning
   - Direct links to posts

## Key features

- ✅ **100% automated** – No manual keyword research or scrolling through Reddit
- ✅ **AI-powered filtering** – Only get conversations where people genuinely need your solution
- ✅ **Comprehensive data** – See engagement metrics, post content, and direct links
- ✅ **Customizable** – Adjust filtering threshold, keyword count, posts per keyword
- ✅ **Time-saving** – Processes 100 posts in ~2 minutes vs hours of manual work
- ✅ **Smart scoring** – AI explains why each conversation is a good lead

## Requirements

APIs/services:

- n8n (self-hosted or cloud)
- Firecrawl API (500 free credits/month)
- Reddit Developer Account (free)
- AI provider: Gemini (recommended, generous free tier), OpenAI, or Claude

Credentials to set up:

- Firecrawl API Key
- Reddit OAuth2 API
- Google Gemini / OpenAI / Anthropic Claude

## Customization options

Adjust search parameters:

- Change the Reddit search timeframe (month/week/day)
- Modify the number of posts per keyword (default: 10)
- Add or remove keywords (default: 10)

Modify AI filtering:

- Adjust the score threshold (default: 7+)
- Customize the filtering criteria in the prompt
- Change the AI model for a different quality/cost balance

Schedule automation:

- Add a Schedule Trigger node to run daily/weekly
- Automatically email results
- Store leads in a database

## Tips for best results

- Start with known products to test the workflow (e.g., notion.so, slack.com)
- Review generated keywords after the first run and adjust the AI prompt if needed
- Lower the score threshold to 6 if you're getting too few results
- Focus on problem-based keywords rather than product names
- Check multiple subreddits by analyzing where your leads appear

## Use cases

- **SaaS founders:** Find people asking for tools in your category
- **Content creators:** Discover what your audience is discussing
- **Market researchers:** Validate product ideas and pain points
- **Community managers:** Monitor brand mentions and competitor discussions
- **Sales teams:** Generate warm leads from genuine product inquiries

## Version information

This is the batch processing version – it runs completely within n8n and outputs all results at once. Perfect for:

- Manual trigger workflows
- Scheduled automation
- One-time research projects
- Learning and testing

For a frontend-integrated version with progressive loading and real-time updates, check out my creator profile.

Tags: reddit, lead generation, automation, AI filtering, web scraping, market research, sales automation, keyword research, firecrawl, gemini
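The final filtering step described above — keep only conversations scored at or above the threshold, sorted for the report — can be sketched like this. The post shape (title, url, score, reason) is illustrative; the real items combine Reddit's API fields with the AI's score and reasoning.

```javascript
// Keep AI-scored posts at or above the threshold (default 7),
// sorted best-first, rendered as markdown report lines.
function filterLeads(posts, threshold = 7) {
  return posts
    .filter((p) => p.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .map((p) => `- [${p.title}](${p.url}) — score ${p.score}/10: ${p.reason}`);
}
```

Lowering `threshold` to 6, as suggested in the tips, is a one-argument change.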
by Konstantin
Name: AI Chatbot for Max Messenger with Voice Recognition (GigaChat + Sber)

## How it works

This workflow powers an intelligent, conversational AI bot for Max messenger that can understand and respond to both text and voice messages. The bot uses GigaChat AI with built-in memory, allowing it to remember the conversation history for each unique user and answer follow-up questions. Voice messages are transcribed using Sber SmartSpeech. It's a complete solution for creating an engaging, automated assistant within your Max bot, using Russian AI services.

## Step-by-step

1. **Max Trigger:** The workflow starts when the **Max Trigger** node receives a new message sent to your Max bot.
2. **Access control:** The **Check User** node verifies the sender's user ID against an allowed list. This prevents unauthorized users from accessing your bot.
3. **Access denied response:** If the user is not authorized, the **Access Denied** node sends a polite rejection message.
4. **Message type routing:** The **Text/Attachment** (Switch) node checks whether the message contains plain text or has attachments (voice, photo, file).
5. **Attachment processing:** If an attachment is detected, the **Download Attachment** (HTTP Request) node retrieves it, and the **Attachment Router** (Switch) node determines its type (voice, photo, or file).
6. **Voice transcription:** For voice messages, the workflow gets a Sber access token via **Get Access Token** (HTTP Request), merges it with the audio file, and sends it to **Get Response** (HTTP Request), which uses the Sber SmartSpeech API to transcribe the audio to text.
7. **Input unification:** The **Voice to Prompt** node converts transcribed text into a prompt, while **Text to Prompt** does the same for plain text messages. Both paths merge at the **Combine** node.
8. **AI agent processing:** The unified prompt is passed to the **AI Agent**, powered by the **GigaChat Model** and using **Simple Memory** to retain the last 10 messages per user (using the Max user_id as the session key).
9. **Response delivery:** The AI-generated response is sent back to the user via the **Send Message** node.

## Set up steps

Estimated set up time: 15 minutes

1. **Get Max bot credentials:** Visit https://business.max.ru/ to create a bot and obtain API credentials. Add these credentials to the Max Trigger, Send Message, and Access Denied nodes.
2. **Add GigaChat credentials:** Register for GigaChat API access and add your credentials to the GigaChat Model node.
3. **Add Sber credentials:** Obtain Sber SmartSpeech API credentials and add them to the Get Access Token and Get Response nodes (HTTP Header Auth).
4. **Configure access control:** Open the Check User node and change the user_id value (currently 50488534) to your own Max user ID. This ensures only you can use the bot during testing.
5. **Customize bot personality:** Open the AI Agent node and edit the system message to change the bot's name and behavior, and add your own contact information or links.
6. **Test the bot:** Activate the workflow and send a text or voice message to your Max bot to verify it responds correctly.

## Notes

This workflow is specifically designed for Russian-speaking users and uses Russian AI services (GigaChat and Sber SmartSpeech) as alternatives to OpenAI. Make sure you have valid API access to both services before setting up this workflow.
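The Check User node's allow-list logic can be sketched in a few lines. The ID 50488534 is the template's placeholder; the `sender.user_id` payload shape is an assumption for illustration — inspect the actual Max Trigger output in your instance.

```javascript
// Sketch of the Check User access-control node: only IDs on the
// allow-list may talk to the bot. 50488534 is the template's placeholder;
// the message shape is assumed, not Max's documented schema.
const ALLOWED_USER_IDS = [50488534];

function isAllowed(message) {
  return ALLOWED_USER_IDS.includes(message.sender.user_id);
}
```

Adding teammates later is just a matter of appending their Max user IDs to the array.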
by Avkash Kakdiya
## How it works

This workflow turns a single planning row in Google Sheets into a fully structured content engine. It generates weighted content pillars, builds a rule-based posting calendar, and then creates publish-ready social posts using AI. The workflow strictly controls format routing, CTA rules, and execution order. All outputs are written back to Google Sheets for easy review and execution.

## Step-by-step

**Step 1: Input capture & pillar generation**

- **Google Sheets Trigger** – Detects new or updated planning rows.
- **Get row(s) in sheet** – Fetches brand, platform, scheduling, and promotion inputs.
- **Message a model** – Calculates calendar metrics and generates platform-specific content pillars.
- **Code in JavaScript** – Validates AI output and enforces 100% weight distribution.
- **Append row in sheet** – Stores finalized content pillars in the pillars sheet.

**Step 2: Calendar generation & routing**

- **Message a model7** – Generates a full day-by-day content calendar from the pillars.
- **Code in JavaScript7** – Normalizes calendar data into a sheet-compatible structure.
- **Append row in sheet6** – Saves calendar entries with dates, formats, CTAs, and status.
- **Switch By Format** – Routes items based on Video vs Non-Video formats.

**Step 3: Post creation & final storage**

- **Loop Over Items** – Processes each calendar entry one at a time.
- **Message a model6** – Creates complete hooks, captions, CTAs, and hashtags.
- **Code in JavaScript6** – Formats AI output for final storage.
- **Append row in sheet7** – Stores publish-ready posts in the final sheet.
- **Wait** – Controls pacing to avoid API rate limits.

## Why use this?

- Eliminates manual content planning and ideation.
- Enforces a strategic content mix and CTA discipline.
- Produces platform-ready posts automatically.
- Keeps all planning, calendars, and content in Google Sheets.
- Scales content operations without extra overhead.
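The "enforces 100% weight distribution" step in the first Code in JavaScript node can be sketched as follows — a hedged illustration, not the template's exact code; the pillar shape is assumed. Rescale the AI's weights to total exactly 100 and hand any rounding remainder to the heaviest pillar:

```javascript
// Rescale pillar weights so they always sum to exactly 100.
// Pillar shape ({ name, weight }) is illustrative.
function enforceWeights(pillars) {
  const total = pillars.reduce((s, p) => s + p.weight, 0);
  const scaled = pillars.map((p) => ({
    ...p,
    weight: Math.round((p.weight / total) * 100),
  }));
  // Give any rounding remainder (positive or negative) to the heaviest pillar.
  const diff = 100 - scaled.reduce((s, p) => s + p.weight, 0);
  scaled.sort((a, b) => b.weight - a.weight)[0].weight += diff;
  return scaled;
}
```

Validating at this point means every downstream calendar calculation can trust that the mix adds up.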
by Guido X Jansen
# AI Council: Multi-Model Consensus with Peer Review

Inspired by Andrej Karpathy's LLM Council, but rebuilt in n8n. This workflow creates a "council" of AI models that independently answer your question, then peer-review each other's responses before a final arbiter synthesizes the best answer.

## Who is this for?

- Anyone preparing for an upcoming meeting with different people who wants to anticipate their different views, or to find "blind spots" in their own view on a subject
- Researchers wanting more robust AI-generated answers
- Developers exploring multi-model architectures
- Anyone seeking higher-quality responses through AI consensus, potentially with faster/cheaper models
- Teams evaluating different LLM capabilities side-by-side

## How it works

1. **Ask a question** – Submit your query via the Chat Trigger
2. **Individual answers** – Four different models (Gemini, Llama, Gemma, Mistral) independently generate responses
3. **Peer review** – Each model reviews ALL answers, identifying pros, cons, and an overall assessment
4. **Final synthesis** – DeepSeek R1 analyzes all peer reviews and produces a refined, consensus-based final answer

## Setup instructions

Prerequisite: access to an LLM provider (e.g. an OpenRouter account with API credits).

1. Create OpenRouter credentials in n8n: go to Settings → Credentials → Add Credential, select "OpenRouter", and paste your API key.
2. Connect all model nodes to your OpenRouter credential. This example uses Gemini, Llama, Gemma, Mistral, and DeepSeek, but you can use whatever you want. You can also use the same models but change their parameters; play around to find out what suits you best.
3. Activate the workflow and open the Chat interface to test.

## Customization ideas

- Add as many answer and review models as you want. Note that each AI node executes in series, so each one adds to the total duration.
- Swap models via OpenRouter's model selector (e.g., use Claude, GPT-4, etc.)
- Adjust the peer review prompt to represent a certain persona or apply domain-specific evaluation criteria
- Add memory nodes for multi-turn conversations
- Connect to Slack/Discord instead of the Chat Trigger
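One way the peer-review step can assemble its prompt is sketched below: each council member sees every answer and is asked for pros, cons, and an overall assessment. The prompt wording here is illustrative, not the template's exact text.

```javascript
// Sketch of a peer-review prompt builder: list all council answers
// (numbered, not attributed to a model) plus the review instruction.
function buildReviewPrompt(question, answers) {
  const listed = answers
    .map((a, i) => `Answer ${i + 1}:\n${a.text}`)
    .join("\n\n");
  return (
    `Question: ${question}\n\n${listed}\n\n` +
    `For each answer, list its pros, cons, and an overall assessment.`
  );
}
```

Keeping answers numbered rather than attributed to a model is a deliberate choice — reviewers judge content, not reputation.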
by Kevin Armbruster
# Automatically Add Travel Time Blockers Before Appointments

This bot automatically adds travel time blockers to your calendar, so you never arrive late to an appointment again.

## How it works

- **Trigger:** The workflow is initiated daily at 7 AM by a Schedule Trigger.
- **AI Agent:** An AI Agent node orchestrates the main logic.
- **Fetch events:** It uses the get_calendar_events tool to retrieve all events scheduled for the current day.
- **Identify events with location:** It then filters these events to identify those that have a specified location.
- **Check for existing travel time blockers:** For each event with a location, it checks whether a travel time blocker already exists. Events that do **not** have such a blocker are marked for processing.
- **Calculate travel time:** Using the Google Directions API, it determines how long it takes to get to the location of the event. The starting location is by default your **Home Address**, unless there is a previous event within 2 hours before the event, in which case it uses the location of that previous event.
- **Create travel time blocker:** Finally, it uses the create_calendar_event tool to create the travel time blocker with a duration equal to the calculated travel time plus a 10-minute buffer.

## Set up steps

1. Set variables:
   - Home address
   - Blocker name
   - Mode of transportation
2. Connect your LLM provider
3. Connect your Google Calendar
4. Connect your Google Directions API
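The origin-selection and buffer rules above can be sketched as plain functions — an illustration of the agent's logic, not the template's actual code. Times are epoch milliseconds and the event shape is assumed.

```javascript
// Pick the travel origin: the previous located event if it ends within
// 2 hours of the appointment's start, otherwise the home address.
const TWO_HOURS = 2 * 60 * 60 * 1000;

function pickOrigin(event, todaysEvents, homeAddress) {
  const prev = todaysEvents
    .filter(
      (e) =>
        e.location &&
        e.end <= event.start &&
        event.start - e.end <= TWO_HOURS
    )
    .sort((a, b) => b.end - a.end)[0]; // latest qualifying event
  return prev ? prev.location : homeAddress;
}

// Blocker duration = Directions travel time + 10-minute buffer.
function blockerMinutes(travelMinutes) {
  return travelMinutes + 10;
}
```

So a 25-minute drive becomes a 35-minute blocker, starting from wherever you realistically are.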
by Denis
## What this workflow does

A complete Airtable database management system using MCP (Model Context Protocol) for AI agents. Create bases and tables with complex field types, manage records, and maintain state with Redis storage.

## Setup steps

1. Add your Airtable Personal Access Token to credentials
2. Configure the Redis connection for ID storage
3. Get your workspace ID from Airtable (it starts with `wsp...`)
4. Connect to the MCP Server Trigger
5. Configure your AI agent with the provided instructions

## Key features

- Create new Airtable bases and custom tables
- Support for all field types (date, number, select, etc.)
- Full CRUD operations on records
- Rename tables and fields
- Store base/workspace IDs to avoid repeated requests
- Generic operations work with ANY Airtable structure

## Included operations

- create_base, create_custom_table, add_field
- get_table_ids, get_existing_records
- update_record, rename_table, rename_fields
- delete_record
- get/set base_id and workspace_id (Redis storage)

## Notes

Check the sticky notes in the workflow for ID locations and field type requirements.
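The "store IDs to avoid repeated requests" idea boils down to a read-through cache. The sketch below uses a `Map` as a stand-in for Redis to keep it self-contained; in the workflow the get/set operations run against your Redis connection, and the fetcher stands in for a real Airtable lookup.

```javascript
// Read-through cache for base_id / workspace_id: return the cached value
// if present, otherwise fetch once and remember it.
function getCachedId(store, key, fetchFromAirtable) {
  if (store.has(key)) return store.get(key);
  const fresh = fetchFromAirtable(key);
  store.set(key, fresh);
  return fresh;
}
```

Every operation after the first reads the ID from the store instead of making another Airtable API call.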
by higashiyama
# AI Team Morale Monitor

## Who's it for

For team leads, HR, and managers who want to monitor the emotional tone and morale of their teams based on message sentiment.

## How it works

1. **Trigger:** Runs every Monday at 9 AM.
2. **Config:** Defines your Teams and Slack channels.
3. **Fetch:** Gathers messages for the week.
4. **AI analysis:** Evaluates tone and stress levels.
5. **Aggregate:** Computes team sentiment averages.
6. **Report:** Creates a readable morale summary.
7. **Slack post:** Sends the report to your workspace.

## How to set up

1. Connect Microsoft Teams and Slack credentials.
2. Enter your Team and Channel IDs in the Workflow Configuration node.
3. Adjust the schedule if desired.

## Requirements

- Microsoft Teams and Slack access.
- Gemini (or OpenAI) API credentials set in the AI nodes.

## How to customize

- Modify the AI prompts for different insight depth.
- Replace Gemini with other LLMs if preferred.
- Change the posting platform or format.

Note: This workflow uses only linguistic data – no personal identifiers or private metadata.
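The Aggregate step can be sketched as a simple per-team average. The message shape and the -1..1 sentiment scale are assumptions for illustration; the real scores come from the AI analysis node.

```javascript
// Average per-message sentiment scores into one morale number per team.
// Message shape ({ team, sentiment }) is illustrative.
function teamAverages(messages) {
  const sums = {};
  for (const m of messages) {
    sums[m.team] = sums[m.team] || { total: 0, count: 0 };
    sums[m.team].total += m.sentiment;
    sums[m.team].count += 1;
  }
  return Object.fromEntries(
    Object.entries(sums).map(([team, s]) => [team, s.total / s.count])
  );
}
```

The resulting per-team numbers feed the readable morale summary posted to Slack.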
by Rahul Joshi
## 📊 Description

This workflow automatically creates a daily market intelligence brief for your stock portfolio. Instead of checking prices, news, and social media separately, it brings everything together into one clear update.

On a scheduled basis, the workflow reads your stock list from Google Sheets and processes each stock individually. It fetches the latest stock price data, recent market news, and investor sentiment from public sources. All this information is then analyzed by AI to identify what truly matters, filtering out noise and repeated information. The AI generates a concise market summary that highlights overall sentiment, key drivers, risks, and one actionable insight for the day.

The final result is sent directly to Slack as a clean, easy-to-read message, helping you stay informed without manual effort. This workflow is ideal for anyone who wants a clear daily view of market conditions without spending hours monitoring multiple platforms.

## ⚙️ What This Workflow Does

- Runs automatically on a daily schedule
- Reads stock symbols from Google Sheets
- Fetches the latest stock price data
- Collects recent market news
- Gathers investor sentiment from public discussions
- Uses AI to summarize market-moving signals
- Sends one actionable daily brief to Slack

## ✅ Key Benefits

- Saves time by automating market monitoring
- Reduces noise and highlights what actually matters
- Combines prices, news, and sentiment in one place
- Provides clear daily insights instead of raw data
- Easy to adjust for different portfolios or schedules

## 🧩 Features

- Scheduled daily execution
- Portfolio-based stock tracking
- Market news collection via RSS
- Social sentiment analysis from Reddit
- AI-driven market intelligence summary
- Structured output for alerts or reporting
- Slack integration for daily delivery

## 🔐 Requirements

To use this workflow, you will need:

- Alpha Vantage API key for stock price data
- OpenAI account for AI analysis
- Google Sheets access for portfolio input
- Slack account for message delivery
- n8n instance (cloud or self-hosted)

## 🎯 Target Audience

- Stock investors
- Portfolio managers
- Market analysts
- Finance teams
- Founders and operators tracking markets
- Automation builders in finance
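For the price-fetch step, Alpha Vantage's `GLOBAL_QUOTE` function returns the latest quote for a symbol; the sketch below builds the request URL one portfolio symbol at a time, as an HTTP Request node would. Whether this template uses `GLOBAL_QUOTE` specifically is an assumption — it is simply Alpha Vantage's standard latest-price endpoint.

```javascript
// Build the Alpha Vantage latest-quote URL for one symbol.
// GLOBAL_QUOTE is the service's standard "latest price" function.
function quoteUrl(symbol, apiKey) {
  const params = new URLSearchParams({
    function: "GLOBAL_QUOTE",
    symbol,
    apikey: apiKey,
  });
  return `https://www.alphavantage.co/query?${params.toString()}`;
}
```

In n8n, the symbol would come from the current Google Sheets item and the key from your stored credential.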
by Guillaume Duvernay
Stop duplicating your work! This template demonstrates a powerful design pattern for handling multiple triggers (e.g., Form, Webhook, Sub-workflow) within a single, unified workflow. By using a "normalize and consolidate" technique, your core logic becomes independent of the trigger that started it, making your automations cleaner, more scalable, and far easier to maintain.

## Who is this for?

- **n8n developers & architects:** Build robust, enterprise-grade workflows that are easy to maintain.
- **Automation specialists:** Integrate the same core process with multiple external systems without repeating yourself.
- **Anyone who values clean design:** Apply the DRY (Don't Repeat Yourself) principle to your automations.

## What problem does this solve?

- **Reduces duplication:** Avoids creating near-identical workflows for each trigger source.
- **Simplifies maintenance:** Update your core logic in one place, not across multiple workflows.
- **Improves scalability:** Easily add new triggers without altering the core processing logic.
- **Enhances readability:** A clear separation of data intake from core logic makes workflows easier to understand.

## How it works (the "normalize & consolidate" pattern)

1. **Trigger:** The workflow starts from one of several possible entry points, each with a unique data structure.
2. **Normalize:** Each trigger path immediately flows into a dedicated Set node. This node acts as an adapter, reformatting the unique data into a standardized schema with consistent key names (e.g., mapping `body.feedback` to `feedback`).
3. **Consolidate:** All "normalize" nodes connect to a single Set node. This node uses the generic `{{ $json.key_name }}` expression to accept the standardized data from any branch. From here, the workflow is a single, unified path.

## Setup

This template is a blueprint. To adapt it:

1. Replace the triggers with your own.
2. Normalize your data: after each trigger, use a Set node to map its unique output to your common schema.
3. Connect to the consolidator: link all your "normalize" nodes to the Consolidate trigger data node.
4. Build your core logic after the consolidation point, referencing the unified data.

## Taking it further

- **Merge any branches:** Use this pattern to merge any parallel branches in a workflow, not just triggers.
- **Create robust error handling:** Unify "success" and "error" paths before a final notification step to report on the outcome.
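The normalize-and-consolidate pattern can be sketched with the Set nodes written as plain adapter functions: each trigger's payload is mapped to one common schema so everything downstream reads the same keys. The `body.feedback` webhook shape comes from the template; the form field names are illustrative.

```javascript
// Each adapter maps one trigger's unique payload onto the common schema
// { feedback, email, source } — the downstream logic reads only these keys.
function fromWebhook(payload) {
  return {
    feedback: payload.body.feedback,
    email: payload.body.email,
    source: "webhook",
  };
}

function fromForm(payload) {
  // Hypothetical form field labels, as a Form Trigger might emit them.
  return {
    feedback: payload["Your feedback"],
    email: payload["Email address"],
    source: "form",
  };
}
```

Adding a third trigger later means writing one more adapter; nothing after the consolidation point changes.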