by Pawan
Who's it for?
This template is perfect for educational institutions, coaching centers (like UPSC, GMAT, or specialized technical training), internal corporate knowledge bases, and SaaS companies that need to provide instant, accurate, source-grounded answers based on proprietary documents. It's designed for users who want to leverage Google Gemini's powerful reasoning while ensuring its answers are strictly factual and based only on their verified knowledge repository.

How it works / What it does
This workflow establishes a Retrieval-Augmented Generation (RAG) pipeline to build a secure, fact-based AI agent. It operates in two main phases:
1. Knowledge Ingestion: When a new document (e.g., a PDF, lecture notes, or a policy manual) is uploaded via a form or Google Drive, the Embeddings Google Gemini node converts the content into numerical vectors. These vectors are then stored in a secure MongoDB Atlas Vector Store, creating a private knowledge base.
2. AI Query & Response: A user asks a question via Telegram. The AI Agent uses the question to perform a semantic search on the MongoDB Vector Store, retrieving the most relevant, source-specific passages. It then feeds this retrieved context to the Google Gemini Chat Model to generate a precise, factual answer, which is sent back to the user on Telegram.
This process ensures the agent never "hallucinates" or falls back on general internet knowledge, making the responses accurate and trustworthy.

Requirements
To use this template, you will need the following accounts and credentials:
- n8n account
- Google Gemini API key: for generating vector embeddings and powering the AI Agent.
- MongoDB Atlas cluster: a free-tier cluster is sufficient, configured with a Vector Search index.
- Telegram bot: a bot created via BotFather and a Chat ID where the bot will listen for and send messages.
- Google Drive credentials (if using the Google Drive ingestion path).

How to set up
- **Set up MongoDB Atlas:** Create a free cluster and a database, then create a Vector Search index on your collection to enable efficient searching (a minimal index definition is sketched at the end of this description).
- **Configure the ingestion path:** Set up the Webhook trigger for your "On form submission" path or connect your Google Drive credentials. Configure the Embeddings Google Gemini node with your API key, and connect the MongoDB Atlas Vector Store node with your database credentials, collection name, and index name.
- **Configure the chat path:** Set up the Telegram Trigger with your bot token to listen for incoming messages. Configure the Google Gemini Chat Model with your API key, and connect the MongoDB Atlas Vector Store 1 node as a tool within the AI Agent. Ensure it points to the same vector store as the ingestion path.
- **Final step:** Configure the Send a text message node with your Telegram bot token and the Chat ID.

How to customize the workflow
- **Change the knowledge source:** Replace the Google Drive nodes with nodes for Notion, SharePoint, Zendesk, or another document source.
- **Change the chat platform:** Replace the Telegram nodes with a Slack, Discord, or WhatsApp Cloud trigger and response node.
- **Refine the agent's persona:** Open the AI Agent node and edit the System Instruction to give the bot a specific role (e.g., "You are a senior UPSC coach. Answer questions politely and cite sources.").

💡 Example Use Case
A UPSC/JEE/NEET coaching institute uploads NCERT summaries and previous-year notes to Google Drive. Students ask questions in the Telegram group, and the bot instantly replies with contextually accurate answers from the uploaded materials.
The same agent can generate daily quizzes or concise notes from this curated content automatically.
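For the Vector Search index mentioned in the setup, a minimal Atlas Vector Search index definition could look like the JSON below. The `embedding` path and the dimension count are assumptions: they must match the field name the vector store node writes and the Gemini embedding model you choose (e.g., 768 dimensions for text-embedding-004).

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 768,
      "similarity": "cosine"
    }
  ]
}
```

Create it as an Atlas Vector Search index (JSON editor) on the collection the ingestion path writes to, and reference its name in both vector store nodes.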
by Peliqan
How it works
This template is an end-to-end demo of a chatbot that uses business data from multiple sources (e.g. Notion, Chargebee, HubSpot) with RAG + SQL. Peliqan.io is used as a "cache" of all business data: Peliqan uses one-click ELT to sync all your business data into its built-in data warehouse, allowing for fast and accurate RAG and "Text-to-SQL" queries.

The workflow writes source data to Supabase as a vector store, for RAG searches by the chatbot. The source URL (e.g. the URL of a Notion page) is added in metadata. The AI Agent decides, for each question, whether to use RAG, Text-to-SQL, or a combination of both.

Text-to-SQL is performed via the Peliqan node, added as a tool to the AI Agent. The user's natural-language question is converted to an SQL query by the AI Agent; the query is executed by Peliqan.io on the source data, and the result is interpreted by the AI Agent. RAG is typically used to answer knowledge questions, often on unstructured data (Notion pages, Google Drive etc.). Text-to-SQL is typically used to answer analytical questions, for example "Show a list of customers with the number of open support tickets, and add customer revenue based on invoiced amounts" (see the SQL sketch below).

Preconditions
- You signed up for a Peliqan.io free trial account.
- You have one or more data sources, e.g. a CRM, ERP, accounting software, files, Notion, Google Drive etc.

Set up steps
1. Sign up for a free trial on peliqan.io: https://peliqan.io
2. Add one or more sources in Peliqan (e.g. HubSpot, Pipedrive...).
3. Copy your Peliqan API key under settings and use it here to add a Peliqan connection.
4. Run the "RAG" workflow to feed Supabase, changing the name of the table in the Peliqan node "Get table data".
5. Update the list of tables and columns that can be used for SQL in the System Message of the AI Agent.

Visit https://peliqan.io/n8n for more information.

Disclaimer: This template contains a community node and therefore only works for n8n self-hosted users.
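To make the Text-to-SQL idea concrete, here is the kind of query the agent might generate for the example question above. The schema (customers, tickets, and invoices tables and their columns) is entirely hypothetical; your actual table names come from the sources you sync into Peliqan and from the list you put in the System Message.

```sql
-- Hypothetical warehouse schema: customers, tickets, invoices.
-- Correlated subqueries avoid row multiplication from joining two
-- one-to-many relations at once.
SELECT
  c.name,
  (SELECT COUNT(*)
     FROM tickets t
    WHERE t.customer_id = c.id
      AND t.status = 'open')             AS open_tickets,
  (SELECT COALESCE(SUM(i.amount), 0)
     FROM invoices i
    WHERE i.customer_id = c.id)          AS invoiced_revenue
FROM customers c
ORDER BY invoiced_revenue DESC;
```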
by moosa
Daily Tech & Startup Digest: Notion-Powered News Curation

Description
This n8n workflow automates the curation of a daily tech and startup news digest from articles stored in a Notion database. It filters articles from the past 24 hours, refines them using keyword matching and LLM classification, aggregates them into a single Markdown digest with categorized summaries, and publishes the result as a Notion page. Designed for manual testing or daily scheduled runs, it includes sticky notes (as required by the n8n creator page) to document each step clearly. This original workflow is for educational purposes, showcasing Notion integration, AI classification, and Markdown-to-Notion conversion.

Data in Notion

Workflow Overview

Triggers
- **Manual Trigger**: Tests the workflow (When clicking "Execute workflow").
- **Schedule Trigger**: Runs daily at 8 PM (Schedule Trigger, disabled by default).

Article Filtering
- **Fetch Articles**: Queries the Notion database (Get many database pages) for articles from the last 24 hours using a date filter.
- **Keyword Filtering**: JavaScript code (Code in JavaScript) filters articles containing tech/startup keywords (e.g., "tech," "AI," "startup") in the title, summary, or full text (see the sketch at the end of this section).
- **LLM Classification**: Uses OpenAI's gpt-4.1-mini (OpenAI Chat Model) with a text classifier (Text Classifier) to categorize articles as "Tech/Startup" or "Other," keeping only relevant ones.

Digest Creation
- **Aggregate Articles**: Combines filtered articles into a single object (Code in JavaScript1) for processing.
- **Generate Digest**: An AI agent (AI Agent) with OpenAI's gpt-4.1-mini (OpenAI Chat Model1) creates a Markdown digest with an intro paragraph, categorized article summaries (e.g., AI & Developer Tools, Startups & Funding), clickable links, and a closing note.

Notion Publishing
- **Format for Notion**: JavaScript code (Code in JavaScript2) converts the Markdown digest into a Notion-compatible JSON payload, supporting headings, bulleted lists, and links, with a title like "Tech & Startup Daily Digest – YYYY-MM-DD".
- **Create Notion Page**: Sends the payload via HTTP request (HTTP Request) to the Notion API to create a new page.

Credentials
Uses Notion API and OpenAI API credentials.

Notes
- This workflow is for educational purposes, demonstrating Notion database querying, AI classification, and Markdown-to-Notion publishing.
- Enable and adjust the schedule trigger (e.g., 8 PM daily) for production use to create daily digests.
- Set up Notion and OpenAI API credentials in n8n before running.
- The date filter can be modified (e.g., hours instead of days) to adjust the article selection window.
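As a rough illustration of the keyword-filtering step, a Code node in "run once for all items" mode could look like this sketch. The keyword list is abbreviated, and the title/summary/fullText field names are assumptions about how the Notion properties are mapped.

```javascript
// Sketch of the keyword-filter Code node (field names are assumptions).
const KEYWORDS = ['tech', 'ai', 'startup', 'funding', 'developer'];

return $input.all().filter(item => {
  const { title = '', summary = '', fullText = '' } = item.json;
  const haystack = `${title} ${summary} ${fullText}`.toLowerCase();
  // Keep the article if any keyword appears in title, summary, or full text.
  return KEYWORDS.some(kw => haystack.includes(kw));
});
```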
by Interlock GTM
Summary
Turns a plain name + email into a fully enriched HubSpot contact by matching the person in Apollo, pulling their latest LinkedIn activity, summarising the findings with GPT-4o, and upserting the clean data into HubSpot.

Key use-cases
- SDRs enriching inbound demo requests before routing
- RevOps teams keeping executive records fresh
- Marketers building highly segmented email audiences

Inputs

| Field | Type   | Example         |
|-------|--------|-----------------|
| name  | string | "Jane Doe"      |
| email | string | "jane@acme.com" |

Required credentials

| Service                                    | Node                                | Notes                                                |
|--------------------------------------------|-------------------------------------|------------------------------------------------------|
| Apollo.io API key                          | HTTP Request → "Enrich with Apollo" | Set in header x-api-key                              |
| RapidAPI key (Fresh-LinkedIn-Profile-Data) | "Get recent posts"                  | Header x-rapidapi-key                                |
| OpenAI                                     | 3 LangChain nodes                   | Supply an API key; default model gpt-4o-mini         |
| HubSpot OAuth2                             | "Enrich in HubSpot"                 | Add/create any custom contact properties referenced  |

High-level flow
1. Trigger: runs when another workflow passes name & email.
2. Clean: a JS Code node normalises & deduplicates emails (sketch below).
3. Apollo match: queries /people/match; skips if no person is found.
4. LinkedIn fetch: grabs up to 3 original posts from the last 30 days.
5. AI summary chain: OpenAI with structured/auto-fixing parsers produces a strict JSON block with job title, location, summaries, etc.
6. HubSpot upsert: maps every key (plus five custom properties) into the contact record.

Sticky notes annotate the canvas; error-prone bits have retry logic.
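A minimal sketch of the Clean step, assuming items arrive with name and email fields: it trims, lowercases, and drops duplicate or empty emails before the Apollo match.

```javascript
// Sketch of the clean/dedup Code node (field names are assumptions).
const seen = new Set();

return $input.all().filter(item => {
  const email = (item.json.email || '').trim().toLowerCase();
  if (!email || seen.has(email)) return false; // drop empties and duplicates
  seen.add(email);
  item.json.email = email; // keep the normalised form for Apollo matching
  return true;
});
```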
by Vadim
What it does
This workflow is an AI agent in the form of a Telegram bot. Its main purpose is to capture contact information and store it in a CRM. The agent supports multi-modal inputs and can extract contact details from text messages, voice recordings, and images (like photos of business cards). The bot guides the user through data collection via a natural conversation, asks clarifying questions for missing information, and summarizes the extracted data for confirmation before saving. It also checks for duplicate contacts by email and gives users the choice to either create a new contact or update an existing one.

For simplicity, this example uses a Google Sheets document to store collected contacts. It can easily be replaced by a real CRM like HubSpot, Pipedrive, Monday, etc.

How to use the bot
- Send contact details via text or voice, or upload a photo of a business card.
- The bot will show the extracted information and ask questions when needed.
- Once the bot confirms saving of the current contact, you can send the next one.
- Use the /new command at any moment to discard the previous conversation and start from scratch.

Requirements
- A Telegram bot access token
- Google Gemini API key
- Google Sheets credentials

How to set up
1. Create a new Telegram bot (see the n8n docs and the Telegram Bot API docs for details).
2. Take the webhook URL from the Telegram Trigger node (WEBHOOK_URL) and your bot's access token (TOKEN) and run:
   curl -X POST "https://api.telegram.org/bot{TOKEN}/setWebhook?url={WEBHOOK_URL}"
3. Create a new Google Sheets document with "Full name", "Email", "Phone", "Company", "Job title" and "Meeting notes" columns (an illustrative record shape is shown below).
4. Configure parameters in the parameters node: set the ID of the Google Sheets document and the sheet name ("Sheet1" by default).
5. Configure Google Sheets credentials for the AI Agent's tools: Search for contact, Create new contact, and Update existing contact.
6. Add the Google Gemini API key for the models ("AI Agent", "Transcribe audio", "Analyze image" nodes).
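For reference, a contact extracted by the agent maps one-to-one onto the sheet columns above. This record is purely illustrative (all values invented) and shows the shape the Create new contact tool appends as a row:

```json
{
  "Full name": "Jane Doe",
  "Email": "jane.doe@example.com",
  "Phone": "+1 555 010 2030",
  "Company": "Acme Corp",
  "Job title": "Head of Sales",
  "Meeting notes": "Met at the conference; interested in a Q3 pilot."
}
```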
by AppUnits AI
Generate Invoices for Customers with Jotform, Xero and Slack

This workflow automates the entire process of receiving a product/service order, checking or creating a customer in Xero, generating an invoice, emailing it, and notifying the sales team (for example, via Slack), all triggered by a form submission (via Jotform).

How It Works
1. Receive Submission: Triggered when a user submits a form. Collects data like customer details, the selected product/service, etc.
2. Check If Customer Exists: Searches Xero to determine if the customer already exists.
   - ✅ If the customer exists: update the customer details.
   - ❌ If the customer doesn't exist: create a new customer in Xero.
3. Create The Invoice: Generates a new invoice for the customer using the selected item (see the payload sketch below).
4. Send The Invoice: Automatically sends the invoice via email to the customer.
5. Notify The Team: Notifies the sales team (for example, via Slack) about the new invoice.

Who Can Benefit from This Workflow?
- Freelancers
- Service providers
- Consultants & coaches
- Small businesses
- E-commerce or custom product sellers

Requirements
- Jotform webhook setup, more info here
- Xero credentials, more info here
- Make sure that the product/service values in Jotform are exactly the same as your item Code in your Xero account
- Email setup: update the email node (Send email)
- LLM model credentials
- Slack credentials, more info here
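As a rough sketch, the invoice creation step submits a payload along these lines to Xero's Invoices endpoint. All values here are placeholders; in particular, ItemCode must equal the item Code in your Xero account, which is the exact-match requirement noted above.

```json
{
  "Type": "ACCREC",
  "Contact": { "ContactID": "<id-from-the-customer-lookup-step>" },
  "LineItems": [
    {
      "Description": "Selected product/service from the Jotform submission",
      "Quantity": 1,
      "ItemCode": "YOUR-XERO-ITEM-CODE"
    }
  ],
  "Status": "AUTHORISED"
}
```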
by Basil Irfan
Transform YouTube Videos to Social Media Content with Vizard AI and GPT-4.1

Overview
This n8n template fetches new YouTube videos, enriches them with Vizard AI metadata, generates social-media captions using GPT-4.1, logs everything to Google Sheets, and notifies you by email. It's a turnkey solution for content creators and marketers who need an end-to-end automated pipeline from video publishing to post scheduling.

Setup Instructions
1. Import the Template: In n8n, click Import from JSON, paste this workflow, and save.
2. Configure Credentials:
   - Vizard AI: Create an HTTP Request credential named Vizard API and set your VIZARDAI_API_KEY.
   - OpenAI: Add a new OpenAI credential for GPT-4.1.
   - Google Sheets: Create a Google Sheets OAuth2 credential with read/write access, or just sign in if you're on cloud hosting.
   - Gmail: Add a Gmail OAuth2 credential for email notifications, or just sign in if you're on cloud hosting.
3. Adjust Limits: In the Limit Videos node, set maxItems to control batch size.

Google Sheets Column Structure

| Column           | Description                                         |
| ---------------- | --------------------------------------------------- |
| videoId          | Unique YouTube video identifier                     |
| projectId        | Vizard AI project ID returned                       |
| videoUrl         | Original YouTube video URL                          |
| title            | Video title                                         |
| transcript       | Transcribed text from Vizard AI                     |
| viralScore       | Vizard AI's viral-score metric                      |
| viralReason      | Explanation for the viral score                     |
| generatedCaption | GPT-4.1-generated caption in JSON { "caption": "" } |
| clipEditorUrl    | URL to Vizard's clip editor                         |

Workflow Steps
1. Read YouTube RSS Feed (Read YouTube RSS Feed)
2. Limit Videos (Limit Videos to N)
3. Send to Vizard (Create Vizard Project & Retrieve Vizard Metadata)
4. Split Items for Processing (Iterate Each Video)
5. Generate Captions (Generate Social Media Captions; a parsing sketch is shown below)
6. Append Row in Sheet (Log to Google Sheets)
7. Send Notification (Email Summary)

Customization Tips
- **Alternate caption styles**: Modify the AI prompt for tone, length, or brand voice.
- **Localization**: Extend prompts for other languages.
- **Notification channels**: Swap Gmail for Slack, Teams, or SMS via webhook nodes.
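Since generatedCaption is logged as a JSON string of the form { "caption": "" }, a small Code-node sketch like this can pull the caption out defensively before it is appended to the sheet. The output field name is an assumption about where the model's text lands.

```javascript
// Sketch: safely parse the model's JSON caption (shape from the table above).
const raw = $input.first().json.output ?? '{}'; // field name is an assumption
let caption = '';
try {
  caption = JSON.parse(raw).caption ?? '';
} catch (err) {
  caption = raw; // fall back to raw text if the model returned invalid JSON
}
return [{ json: { caption } }];
```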
by Automate With Marc
## Podcast on Autopilot – Generate Podcast Ideas, Scripts & Audio Automatically with ElevenLabs, GPT-5 and Claude Sonnet 4.0

Bring your solo podcast to life, on full autopilot. This workflow uses GPT-5 and Claude Sonnet to turn a single topic input into a complete podcast episode intro and a ready-to-send audio file.

How it works
1. Start a chat trigger: enter a seed idea or topic (e.g., "habits," "failure," "technology and purpose").
2. The Podcast Idea Agent (GPT-5) instantly crafts a thought-provoking, Rogan- or Bartlett-style episode concept with a clear angle and takeaway.
3. The Podcast Script Agent (Claude 4.0 Sonnet) expands that idea into a natural, engaging 60-second opening monologue ready for recording.
4. Text-to-speech via ElevenLabs automatically converts the script into a high-quality voice track (see the request sketch below).
5. Email automation sends the finished MP3 directly to your inbox.

Perfect for
- Solo creators who want to ideate, script and voice short podcasts effortlessly
- Content teams prototyping daily or weekly audio snippets
- Anyone testing AI-driven storytelling pipelines

Customization tips
- Swap ElevenLabs with your preferred TTS service by editing the HTTP Request node.
- Adjust prompt styles for tone or audience in the Idea and Script Agents.
- Modify the Gmail (or other mail service) node to send audio to any destination (Drive, Slack, Notion, etc.).
- For reuse at scale, add variables for episode number, guest name, or theme category, then just clone and update the trigger node.

Watch the step-by-step tutorial (how to build it yourself): https://www.youtube.com/watch?v=Dan3_W1JoqU

Requirements & disclaimer
- Requires API keys for OpenAI + Anthropic + ElevenLabs (or your chosen TTS).
- You're responsible for managing costs incurred through AI or TTS usage.
- Avoid sharing sensitive or private data as input into prompt flows.
- Designed with modularity so you can turn off or swap any stage (idea → script → voice → email) without breaking the chain.
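For orientation, the ElevenLabs call behind the TTS step boils down to an HTTP request shaped roughly like this Node.js 18+ sketch. The voice ID, model_id value, and environment variable name are assumptions, and in the workflow itself this lives in an HTTP Request node rather than code.

```javascript
// Minimal sketch of an ElevenLabs text-to-speech request (Node.js 18+).
const VOICE_ID = 'your-voice-id'; // placeholder: pick a voice in your ElevenLabs account

const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`, {
  method: 'POST',
  headers: {
    'xi-api-key': process.env.ELEVENLABS_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    text: 'Opening monologue from the Podcast Script Agent goes here.',
    model_id: 'eleven_multilingual_v2', // assumption: any current TTS model works
  }),
});

// The response body is the audio track itself (MP3 by default).
const audio = Buffer.from(await res.arrayBuffer());
```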
by Rahul Joshi
Description
Streamline Facebook Messenger inbox management with an AI-powered categorization and response system. This workflow automatically classifies new messages as Lead, Query, or Spam using GPT-4, routes them for approval via Slack, responds on Facebook once approved, and logs all interactions into Google Sheets for tracking. Perfect for support and marketing teams managing high volumes of inbound DMs.

What This Template Does
1. Trigger: runs hourly to fetch new Facebook Page messages.
2. Extract & Format: collects sender info, timestamps, and message content for analysis.
3. AI Categorization: uses GPT-4 to identify the message type (Lead, Query, Spam) and suggest replies.
4. Slack Approval Flow: sends categorized leads and queries to Slack for quick team approval.
5. Facebook Response: posts AI-suggested replies back to the original sender once approved (see the payload sketch below).
6. Data Logging: records every message, reply, and approval status into Google Sheets for analytics.
7. Error Handling: automatically alerts via Slack if the workflow encounters an error.

Key Benefits
✅ Reduces manual message triage on Facebook Messenger
✅ Ensures consistent and professional customer replies
✅ Provides full visibility via Google Sheets logs
✅ Centralizes team approvals in Slack for faster response times
✅ Leverages GPT-4 for accurate categorization and natural replies

Features
- Hourly Facebook message fetch with the Graph API
- GPT-4 powered text classification and reply suggestion
- Slack-based dual approval flow
- Automated Facebook replies post-approval
- Google Sheets logging for all categorized messages
- Built-in error detection and Slack alerting

Requirements
- Facebook Graph API credentials with page message permissions
- OpenAI API key for GPT-4 processing
- Slack API credentials with chat:write permission
- Google Sheets OAuth2 credentials
- Environment variables: FACEBOOK_PAGE_ID, GOOGLE_SHEET_ID, GOOGLE_SHEET_NAME, SLACK_CHANNEL_ID

Target Audience
- Marketing and lead-generation teams using Facebook Pages
- Customer support teams managing Messenger queries
- Businesses seeking automated lead routing and CRM sync
- Teams leveraging AI for customer engagement optimization

Step-by-Step Setup Instructions
1. Connect Facebook Graph API credentials and set your page ID.
2. Add OpenAI API credentials for GPT-4.
3. Configure the Slack channel ID and credentials.
4. Link your Google Sheet for message logging.
5. Replace environment variable placeholders with your actual IDs.
6. Test the workflow manually before enabling automation.
7. Activate the schedule trigger for ongoing hourly execution.
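Once a reply is approved in Slack, the Facebook response step posts a Messenger Send API payload shaped roughly like this; the PSID placeholder and reply text are illustrative:

```json
{
  "recipient": { "id": "<SENDER_PSID>" },
  "messaging_type": "RESPONSE",
  "message": { "text": "AI-suggested reply approved in Slack goes here." }
}
```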
by Raphael De Carvalho Florencio
What this workflow is (About)
This workflow turns a Telegram bot into an AI-powered lyrics assistant. Users send a command plus a lyrics URL, and the flow downloads, cleans, and analyzes the text, then replies on Telegram with translated lyrics, summaries, vocabulary, poetic devices, or an interpretation, all generated by AI (OpenAI).

What problems it solves
- Centralizes lyrics retrieval + cleanup + AI analysis in one automated flow
- Produces study-ready outputs (translation, vocabulary, figures of speech)
- Saves time for teachers, learners, and music enthusiasts with instant results in chat

Key features
- **AI analysis** using OpenAI (no secrets hardcoded; uses n8n Credentials)
- Line-by-line translation, concise summaries, vocabulary lists
- Poetic/literary device detection and emotional/symbolic interpretation
- Robust ETL (extract, download, sanitize) and error handling
- Clear Sticky Notes documenting routing, ETL, AI prompts, and messaging

Who it's for
- Language learners & teachers
- Musicians, lyricists, and music bloggers
- Anyone studying lyrics for meaning, style, or vocabulary

Input & output
- **Input:** a Telegram command with a public lyrics URL
- **Output:** Telegram messages (Markdown/MarkdownV2), split into chunks if long

How it works
1. **Telegram → Webhook** receives a user message (e.g., /get_lyrics <URL>).
2. **Routing (If/Switch)** detects which command was sent.
3. **Extract URL + Download (HTTP Request)** fetches the lyrics page.
4. **Cleanup (Code)** strips HTML/scripts/styles and normalizes whitespace (see the sketch below).
5. **OpenAI (Chat)** formats the result per command (translation, summary, vocabulary, analysis).
6. **Telegram (Send Message)** returns the final text; long outputs are split into chunks.
7. **Error handling** replies with friendly guidance for unsupported/incomplete commands.

Set up steps
1. Create a Telegram bot with @BotFather and copy the bot token.
2. In n8n, create Credentials → Telegram API and paste your token (no hardcoded keys in nodes).
3. Create Credentials → OpenAI and paste your API key.
4. Import the workflow and set a short webhook path (e.g., /lyrics-bot).
5. Publish the webhook and set it on Telegram:
   https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook?url=https://[YOUR_DOMAIN]/webhook/lyrics-bot
6. (Optional) Restrict update types:
   curl -X POST https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook \
     -H "Content-Type: application/json" \
     -d '{ "url": "https://[YOUR_DOMAIN]/webhook/lyrics-bot", "allowed_updates": ["message"] }'
7. Test by sending /start and then /get_lyrics <PUBLIC_URL> to your bot.
8. If messages are long, ensure MarkdownV2 is used and special characters are escaped.
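A minimal sketch of the Cleanup (Code) step described above; it assumes the downloaded page HTML arrives in a data field, which depends on how the HTTP Request node is configured.

```javascript
// Sketch of the cleanup step: strip scripts/styles/tags, normalise whitespace.
const html = $input.first().json.data ?? ''; // field name is an assumption

const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, '') // drop inline scripts
  .replace(/<style[\s\S]*?<\/style>/gi, '')   // drop inline styles
  .replace(/<[^>]+>/g, ' ')                   // strip remaining tags
  .replace(/&nbsp;/g, ' ')
  .replace(/\s+/g, ' ')                       // collapse whitespace
  .trim();

return [{ json: { text } }];
```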
by Toshiki Hirao
You can turn messy business card photos into organized contact data automatically. With this workflow, you can upload a business card photo to Slack and instantly capture the contact details into Google Sheets using OCR. No more manual typing: each new card is scanned, structured, saved, and confirmed back in Slack, making contact management fast and effortless.

How it works
1. Slack Trigger: the workflow starts when a business card photo is uploaded to Slack.
2. HTTP Request: the uploaded image is fetched from Slack.
3. AI/OCR Parsing: the card image is analyzed by an AI model and structured into contact fields (name, company, email, phone, etc.).
4. Transform Data: the extracted data is cleaned and mapped into the correct format (see the sketch below).
5. Google Sheets: a new row is appended to your designated Google Sheet, creating an organized contact database.
6. Slack Notification: finally, a confirmation message is sent back to Slack to let you know the contact has been successfully saved.

How to use
1. Copy the template into your n8n instance.
2. Connect your Slack account to capture uploaded images.
3. Set up your Google Sheets connection and choose the spreadsheet where contacts should be stored.
4. Adjust the Contact Information extraction node if you want to capture custom fields (e.g., job title, address).
5. Deploy and test: upload a business card image in Slack and confirm it's added to Google Sheets automatically.

Requirements
- n8n running (cloud).
- A Slack account with access to the channel where photos will be uploaded.
- A Google Sheets account with a target sheet prepared for storing contacts.
- AI/OCR capability enabled in your n8n (e.g., OpenAI, Google Vision, or another OCR/LLM provider).
- Basic access rights in both Slack and Google Sheets to read and write data.
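A rough sketch of the Transform Data step, assuming the OCR/LLM output arrives as lowercase fields and the sheet uses the column names below; adjust both sides to your actual schema.

```javascript
// Sketch of the Transform Data step (field and column names are assumptions).
const c = $input.first().json;

return [{
  json: {
    Name: (c.name ?? '').trim(),
    Company: (c.company ?? '').trim(),
    Email: (c.email ?? '').trim().toLowerCase(),
    Phone: (c.phone ?? '').trim(),
  },
}];
```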
by Jainik Sheth
What is this?
This RAG workflow allows you to build a smart chat assistant that can answer user questions based on any collection of documents you provide. It automatically imports and processes files from Google Drive, stores their content in a searchable vector database, and retrieves the most relevant information to generate accurate, context-driven responses. The workflow manages chat sessions and keeps the document database current, making it adaptable for use cases like customer support, internal knowledge bases, or an HR assistant.

How it works
1. Chat RAG Agent
   - Uses OpenAI for responses, referencing only specific data from the vector store (data that is uploaded to the Google Drive folder).
   - Maintains chat history in Postgres using a session key from the chat input.
2. Data Pipeline (File Ingestion)
   - Monitors Google Drive for new/updated files and automatically updates them in the vector store.
   - Downloads, extracts, and processes file content (PDFs, Google Docs).
   - Generates embeddings and stores them in the Supabase vector store for retrieval.
3. Vector Store Cleanup
   - Scheduled and manual routines remove duplicate or outdated entries from the Supabase vector store.
   - Ensures only the latest and unique documents are available for retrieval.
4. File Management
   - Handles folder and file creation, upload, and metadata assignment in Google Drive.
   - Ensures files are organized and linked with their corresponding vector store entries.

Getting Started
1. Create and connect all relevant credentials: Google Drive, Postgres, Supabase, OpenAI.
2. Run the table creation nodes first to set up your database tables in Postgres (a sketch of the Supabase vector table setup is shown below).
3. Upload your documents through Google Drive (or swap it out for a different file storage solution).
4. The agent will process them automatically (chunking text, storing tabular data in Postgres).
5. Start asking questions that leverage the agent's multiple reasoning approaches.

Customization (optional)
This template provides a solid foundation that you can extend by:
- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

Note: if you're using different nodes (e.g., for file storage or the vector store), the integration may vary a little.

Prerequisites
- Google account (Google Drive)
- Supabase account
- OpenAI API
- Postgres account
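If you need a starting point for the Supabase side, the standard pgvector setup looks roughly like this sketch. It assumes 1536-dimensional OpenAI embeddings and the conventional documents table and match_documents function names; adjust the dimension and names to your embedding model and configuration.

```sql
-- Minimal sketch of a Supabase/pgvector setup (1536 dims is an assumption).
create extension if not exists vector;

create table documents (
  id bigserial primary key,
  content text,           -- the document chunk
  metadata jsonb,         -- e.g. source file id / Drive URL
  embedding vector(1536)  -- embedding for the chunk
);

-- Similarity-search helper the vector store queries via RPC.
create function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (id bigint, content text, metadata jsonb, similarity float)
language plpgsql as $$
begin
  return query
  select d.id, d.content, d.metadata,
         1 - (d.embedding <=> query_embedding) as similarity
  from documents d
  where d.metadata @> filter
  order by d.embedding <=> query_embedding
  limit match_count;
end;
$$;
```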