by Udit Rawat
This n8n automation extracts, processes, and stores content from Notion pages in a Pinecone vector store. Here's a breakdown of the workflow:

1. **Notion - Page Added Trigger**: The automation starts by monitoring a specific Notion database for newly added pages. It triggers whenever a new page is created, capturing the page's metadata.
2. **Notion - Retrieve Page Content**: Once triggered, the automation fetches the full content of the newly added Notion page, including blocks like text, images, and videos.
3. **Filter Non-Text Content**: The next step filters out non-text content (such as images and videos), ensuring only textual content is processed.
4. **Summarize - Concatenate Notion's blocks content**: The remaining text content is concatenated into a single block of text for easier processing.
5. **Token Splitter**: The concatenated text is split into manageable tokens, chunks of text that can be used for embedding.
6. **Create metadata and load content**: Metadata such as the page ID, creation time, and title is added to the content, making it easy to reference and track.
7. **Embeddings Google Gemini**: The processed text is passed through a Google Gemini model to generate embeddings, numerical representations of the text that capture its semantic meaning.
8. **Pinecone Vector Store**: Finally, the embeddings, along with the content and metadata, are stored in a Pinecone vector store, making them searchable and ready for use in applications like document retrieval or other natural language processing tasks.

This workflow ensures that every new page added to the Notion database is processed into a format that can be easily searched and used in machine learning applications. The trigger polls every minute to capture new data in near real time, providing an up-to-date, searchable vector database of Notion content.

**Use Case**: This automation converts Notion pages into vector embeddings and stores them in Pinecone for enhanced search and AI-driven insights. It's ideal for teams using Notion for knowledge management, enabling semantic search and context-based content retrieval. For example, employees can easily find relevant information across documents, and data scientists can use AI models to analyze and summarize the content stored in Notion.
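For orientation, steps 3 and 4 could be collapsed into a single n8n Code node. This is a minimal sketch, assuming each incoming item is a Notion block with a `type` field and a plain-text `content` field; the exact field names depend on how the Notion node is configured:

```javascript
// Sketch: keep only text-bearing Notion blocks and join them into one string.
// Field names (`type`, `content`) are assumptions about the Notion node output.
const TEXT_TYPES = [
  'paragraph', 'heading_1', 'heading_2', 'heading_3',
  'bulleted_list_item', 'numbered_list_item', 'quote',
];

const text = $input.all()
  .filter(item => TEXT_TYPES.includes(item.json.type)) // drops images, videos, etc.
  .map(item => item.json.content ?? '')
  .join('\n\n');

return [{ json: { text } }];
```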
by Trung Tran
**Smart Interview Assistant: Tailored Questions Based on CV, JD, and Round**

Watch the demo video below:

**Who's it for**

This workflow is designed for:

- **Recruiters** and **Talent Acquisition Specialists** who want to automate candidate interview prep.
- **Hiring Managers** conducting multiple interviews and needing personalized question sets.
- **Technical Interviewers** who want to save time and be well-prepared with relevant questions.

**How it works / What it does**

The Smart Interview Assistant automates the interview preparation process in a few clicks:

- Accepts: multiple resumes (PDFs), a selected job role, and a chosen interview round.
- Extracts structured data from the candidate's CV and the corresponding Job Description (JD).
- Uses GPT-4 to analyze the candidate profile, the role requirements, and the interview round context.
- Generates tailored interview questions, expected answers, and a summarized interview prep report.
- Sends the report directly to the hiring team via email (SMTP).

**Google Drive Structure**

```
Root Folder/
├── jd/                       # Stores all job descriptions in PDF format
│   ├── Backend_Engineer.pdf
│   ├── Azure_DevOps_Lead.pdf
│   └── ...
└── Positions (Google Sheet)  # Maps Job Role → JD File Link
```

Sample mapping sheet (Positions) columns: Job Role; Job Description File URL (pointing to a PDF in the jd/ folder).

**How to Set Up**

Step 1: Configure API Integrations
- Connect your OpenAI GPT-4 API key.
- Enable Google Cloud APIs: Google Sheets API (to read job roles) and Google Drive API (to access CV and JD files).
- Set up SMTP credentials (for email delivery).

Step 2: Prepare Google Drive & Mapping Sheet
- Create a root folder on Google Drive.
- Inside the root folder, create a folder named /jd/ and upload all job descriptions (PDFs).
- Create a Google Sheet named Positions with the following format:

| Job Role | Job Description File URL |
|-----------------------------|--------------------------------------|
| Azure DevOps Engineer | https://drive.google.com/xxx/jd1.pdf |
| Full-Stack Developer (.NET) | https://drive.google.com/xxx/jd2.pdf |

Step 3: Build the Application Form
Use any form tool (e.g., Typeform, Tally, or custom HTML) that collects:
- Resume file (PDF)
- Job Role (dropdown)
- Interview Round (dropdown)

Step 4: Resume & JD Extraction
- Use Extract from PDF to parse the resume content.
- Retrieve the JD link from the Positions sheet based on the selected Job Role (see the lookup sketch at the end of this section).
- Use Download file to pull the PDF for processing.

Step 5: Analyze with GPT-4
- Run both the resume and the JD through a Profile Analyzer Agent (GPT-4 with JSON output).
- Merge the results.
- Add manual input or mapping for the Interview Round metadata.

Step 6: Generate Interview Report
Use a second GPT-4 agent (e.g., an HR Expert Agent) to:
- Generate 6 to 8 tailored interview questions.
- Include expected answers and rationale.

Step 7: Deliver Final Report
- Format the content as a PDF (optional) or an email body.
- Send the report to the recruiter, hiring manager, or interviewer via SMTP.

**Requirements**

- OpenAI GPT-4 API key
- Google Drive (for resume and JD storage)
- Google Sheet (job role mapping)
- SMTP credentials (host, username, password)
- n8n self-hosted or cloud instance with: PDF parser, Google Sheets node, HTTP download node, and email node

**How to Customize the Workflow**

| Part | Customization Options |
|----------------------------|-------------------------------------------------------------------|
| Form UI | Modify the design, dropdown options, or input validations |
| Job Description Source | Replace Google Sheet with Notion, Airtable, or a database |
| Interview Metadata | Add job level, region, or language preference |
| AI Prompt Tuning | Adjust prompt phrasing or temperature in GPT nodes |
| Report Format | Generate a PDF instead of an email body using the PDF node |
| Delivery Method | Add an internal HR portal webhook or generate a downloadable link |
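The Step 4 lookup could be a small Code node between the Google Sheets node and the download step. This is a sketch under the assumption that the Sheets node emits one item per row with the Positions columns as field names; the node name `Application Form` is hypothetical:

```javascript
// Sketch: find the JD file URL for the job role picked on the form.
// 'Application Form' is a hypothetical node name; adjust to your workflow.
const role = $('Application Form').first().json['Job Role'];

const rows = $input.all(); // one item per Positions sheet row
const row = rows.find(item => item.json['Job Role'] === role);
if (!row) {
  throw new Error(`No JD mapped for role: ${role}`);
}

return [{
  json: {
    role,
    jdUrl: row.json['Job Description File URL'], // fed into the Download file step
  },
}];
```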
by Zacharia Kimotho
This is an example of how to build a Slack bot in a few easy steps. Before you can start, you need to do a few things:

- Create a copy of this workflow.
- Create a Slack bot.
- Create a slash command on Slack and paste the webhook URL into the slash command.

Note: Make sure to configure this webhook with an https:// URL, and don't use the default http://localhost:5678, as Slack will not accept that as a webhook target.

Once the data has been sent to your webhook, the next step is passing it to an AI Agent, which processes the queries we pass to it. To give the bot some memory, be sure to set the Slack token on the memory node; this way it can refer to earlier chats from the history.

The final message is relayed back to Slack as a new message. Since we cannot make Slack wait longer than 3000 ms for a response, we acknowledge immediately and then create a new message that references the input we passed (see the sketch at the end of this section). We can extend this with tools or data sources to make it more tailored to your company.

Usage: To use the Slack bot, go to Slack, type your configured slash command (e.g., /Bob), and send your desired message. This sends the message to your endpoint and returns the processed result as a new message. If you would like help setting this up, feel free to reach out to zacharia@effibotics.com
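One way to respect Slack's 3-second acknowledgement window is to have the Respond to Webhook node return a short ack right away, then deliver the AI answer afterwards. The sketch below builds the delayed-reply payload that an HTTP Request node can POST to the slash command's `response_url`; the node name `Webhook` and the agent output field `output` are assumptions:

```javascript
// Sketch: prepare a delayed Slack reply after the AI Agent has finished.
// The Respond to Webhook node should already have returned something like
// { "response_type": "ephemeral", "text": "Working on it..." } within 3 s.
// 'Webhook' is an assumed node name; `output` is an assumed agent field.
const { text, user_id, response_url } = $('Webhook').first().json.body;

return [{
  json: {
    response_url, // target URL for the HTTP Request node
    payload: {
      response_type: 'in_channel', // visible to everyone in the channel
      text: `<@${user_id}> asked: ${text}\n\n${$input.first().json.output ?? ''}`,
    },
  },
}];
```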
by Agent Circle
This workflow demonstrates how to automate live information gathering, fact-checking, and trend analysis in response to any chat message, using a powerful AI agent, memory, and a real-time search tool.

Use cases are many: it is perfect for researchers needing instant, up-to-date data; support teams providing live, accurate answers; content creators looking to verify facts or find hot topics; and analysts automating regular reports with the freshest information.

**How It Works**

- The workflow is triggered whenever a chat message is received (e.g., a user question, research prompt, or data request).
- The message is sent to the AI Agent, which takes the following steps:
  - First, it queries SerpAPI - Research to gather the latest real-time information and data from the web.
  - Next, it checks the Window Buffer Memory for any related past interactions or contextual information that may be useful.
  - Finally, it sends all collected data and context to the Google Gemini Chat Model, which analyzes the information and generates a comprehensive, intelligent response.
- The AI Agent then delivers the analyzed, up-to-date answer directly in the chat, combining live data, context, and expert analysis.

**How To Set Up**

1. Download and import the workflow into your n8n workspace.
2. Set up API credentials and tool access for the AI Agent:
   - Google Gemini (for chat-based intelligence), connected to the Google Gemini Chat Model node.
   - SerpAPI (for real-time web and search results), connected to the SerpAPI - Research node.
   - Window Buffer Memory (for richer, context-aware conversations), connected to the Window Buffer Memory node.
3. Open the chat in n8n and type the topic or trend you want to research.
4. Send the message and wait for the process to complete.
5. Receive the AI-powered research reply in the chat box.

**Requirements**

- An n8n instance (self-hosted or cloud).
- SerpAPI credentials for live web search and data gathering.
- Window Buffer Memory configured to provide relevant conversation context from history.
- Google Gemini API access to analyze collected data and generate responses.

**How To Customize**

- Choose your preferred AI model: replace Google Gemini with OpenAI ChatGPT or any other chat model as preferred.
- Add or change memory: replace Window Buffer Memory with more advanced memory options for deeper recall.
- Connect your preferred chat platform: easily swap out the default chat integration for Telegram, Slack, or any other compatible messaging platform to trigger and interact with the workflow.

**Need Help?**

If you'd like this workflow customized, or if you're looking to build a tailored AI Agent for your own business, please feel free to reach out to Agent Circle. We're always here to support and help you bring automation ideas to life.

Join our community on different platforms for assistance, inspiration, and tips from others:

- Website: https://www.agentcircle.ai/
- Etsy: https://www.etsy.com/shop/AgentCircle
- Gumroad: http://agentcircle.gumroad.com/
- Discord Global: https://discord.gg/d8SkCzKwnP
- FB Page Global: https://www.facebook.com/agentcircle/
- FB Group Global: https://www.facebook.com/groups/aiagentcircle/
- X: https://x.com/agent_circle
- YouTube: https://www.youtube.com/@agentcircle
- LinkedIn: https://www.linkedin.com/company/agentcircle
by Sateesh
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**AI-Powered LinkedIn Publishing via Telegram**

Transform your LinkedIn presence with this intelligent n8n workflow that converts simple Telegram messages into professional LinkedIn posts through AI-powered content generation and approval workflows.

**Who Is This For?**

- **Content Creators & Influencers** seeking to maintain a consistent LinkedIn presence
- **Marketing Professionals** managing multiple client accounts
- **Business Owners** wanting to automate thought leadership content
- **Social Media Managers** streamlining content workflows
- **Entrepreneurs** maximizing content efficiency while maintaining quality

**Benefits**

- **Time Efficiency**: Reduces content creation time by 80-90%
- **Quality Consistency**: Maintains professional standards across all posts
- **Content Diversity**: Leverages multiple sources for rich, varied content
- **Real-Time Relevance**: Incorporates the latest industry trends and news
- **Approval Control**: Human oversight ensures brand alignment
- **Scalability**: Handles multiple users and high-volume content creation

**Core Features**

Smart Content Classification
- **Multi-Input Processing**: Handles URLs, topics, direct content, or combinations
- **Intelligent Routing**: Automatically determines whether to scrape, search, or generate directly
- **Context Preservation**: Maintains original user intent throughout the process

Advanced Content Gathering
- **Web Scraping**: Firecrawl integration for extracting article content from URLs
- **Real-Time Search**: Brave Search API for the latest industry trends and news
- **Content Synthesis**: Merges multiple sources into coherent, valuable insights

AI-Powered Content Generation
- **Google Gemini Integration**: Creates professional, LinkedIn-optimized posts
- **Platform-Specific Formatting**: Mobile-friendly paragraphs, engaging hooks, strategic CTAs
- **SEO Optimization**: Relevant hashtags and keyword integration
- **Character Management**: Ensures posts stay within LinkedIn's 2800-character limit

Interactive Approval System
- **Telegram Preview**: Rich preview with post analytics and formatting
- **Action Buttons**: Approve, Edit, or Reject with single-click convenience
- **Edit Workflow**: AI-powered rewriting based on user feedback
- **Real-Time Updates**: Instant feedback and status notifications

Comprehensive Content Tracking
- **Google Sheets Integration**: Complete audit trail of all posts and content metrics
- **Content Analytics**: Character counts, hashtag usage, source attribution
- **User Authorization**: Secure access control with authorized user validation
- **Post Management**: Unique ID generation for tracking and reference

**How It Works**

1. Message Reception: Secure Telegram trigger with user validation
2. Content Classification: AI analyzes the input type and extracts actionable elements
3. Dynamic Routing: Intelligent branching based on content requirements:
   - URL Path: web scraping → content extraction → processing
   - Topic Path: web search → latest information gathering → synthesis
   - Direct Path: immediate processing for ready-to-post content
4. Content Synthesis: Merges all gathered information into comprehensive context
5. AI Generation: Creates a LinkedIn-optimized post with professional formatting
6. Interactive Approval: Telegram preview with approval workflow
7. Publishing: Direct LinkedIn posting upon approval
8. Content Logging: Complete tracking in Google Sheets

**Use Cases**

- Daily Industry Updates: Transform news URLs into thought leadership posts
- Content Repurposing: Convert articles and research into LinkedIn insights
- Trend Commentary: Generate posts about trending topics with real-time data
- Educational Content: Create informative posts from technical documentation
- Personal Branding: Maintain a consistent professional presence with minimal effort

**Technical Requirements**

Required community nodes (install these in your n8n instance):
- Brave Search integration: @brave/n8n-nodes-brave-search
- Firecrawl web scraping: @mendable/n8n-nodes-firecrawl
- LangChain AI integration: @n8n/n8n-nodes-langchain

APIs and services required:
- Google Gemini (content generation and classification)
- Firecrawl API (web scraping)
- Brave Search API (real-time search)
- Telegram Bot API (interface and notifications)
- LinkedIn API (content publishing)
- Google Sheets API (content tracking and logging)

**Setup Guide**

1. Telegram Bot Setup
   - Search for @BotFather on Telegram
   - Send /newbot and follow the prompts
   - Copy the bot token
   - Send /setprivacy to BotFather and set it to Disable
2. Google Gemini API
   - Visit Google AI Studio
   - Sign in and click "Get API Key" → "Create API Key"
   - Copy your API key
   - Free tier: 60 requests per minute
3. Firecrawl API
   - Visit Firecrawl.dev
   - Sign up and go to Dashboard → API Keys
   - Copy your API key
   - Free tier: 500 pages/month
4. Brave Search API
   - Visit Brave Search API
   - Sign up and create an application
   - Copy the subscription key
   - Free tier: 1,000 queries/month
5. LinkedIn API
   - Visit LinkedIn Developers
   - Create an app with the required details
   - Request the "Share on LinkedIn" product
   - Copy the Client ID and Client Secret
   - Add the redirect URL: https://your-n8n-domain.com/rest/oauth2-credential/callback
6. Google Sheets API
   - Visit Google Cloud Console
   - Enable the Google Sheets API
   - Create an OAuth 2.0 Client ID
   - Copy the Client ID and Client Secret

**Installation Steps**

Phase 1: Preparation
- Install the required community nodes
- Restart n8n after installation
- Create a Google Sheet for logging
- Set up the Telegram bot

Phase 2: Import and Configure
- Import the workflow JSON in n8n
- Configure all API credentials
- Test each connection

Phase 3: Customization
- Update the authorized user ID in the "Authorized Telegram Users" node (see the sketch at the end of this section)
- Configure the Google Sheets document ID
- Test the Telegram connection

Phase 4: Testing
Test with different input types:
- URL only: https://example.com/article
- Topic only: artificial intelligence trends
- Mixed: AI trends https://example.com/ai-news

**Customization Options**

Content Personalization
- Modify AI prompts to match your brand voice
- Adjust content length and formatting preferences
- Customize hashtag strategies and CTA approaches
- Configure approval workflow steps

Source Integration
- Add additional search engines or content sources
- Integrate with RSS feeds or news APIs
- Connect to internal knowledge bases
- Customize web scraping parameters

**Security Features**

- **User Authorization**: Whitelist-based access control
- **Secure Token Management**: Encrypted API key handling
- **Data Privacy**: Secure processing of scraped content
- **Audit Trail**: Complete logging of all user interactions

**Future Expansion Possibilities**

This workflow serves as a foundation for:

- **Performance Analytics Module**: LinkedIn engagement tracking
- **Content Optimization Engine**: A/B testing and refinement
- **Multi-Platform Publishing**: Expand to Twitter, Facebook, Instagram
- **Advanced Scheduling**: Time-optimized posting
- **Content Series Management**: Automated follow-ups

**Why Choose This Workflow**

This represents a complete LinkedIn content automation solution that maintains quality and personal touch while dramatically reducing time and effort.
Perfect for professionals who want to maximize LinkedIn impact without sacrificing content quality or spending hours on manual creation. Ready to transform your LinkedIn presence? Install this workflow and start automating your professional content creation today!
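The Phase 3 authorization step could be as small as the Code-node sketch below. The placeholder ID is an assumption; the real workflow keeps the whitelist in its "Authorized Telegram Users" node:

```javascript
// Sketch: whitelist check for incoming Telegram updates.
// Replace the placeholder ID with your own numeric Telegram user ID.
const AUTHORIZED_IDS = [123456789];

const update = $input.first().json;
const userId = update.message?.from?.id; // standard Telegram update shape

if (!AUTHORIZED_IDS.includes(userId)) {
  return []; // drop messages from unauthorized users
}

return $input.all();
```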
by ConnectSafely
**Rewrite viral LinkedIn posts in your voice with AI and Telegram approval using Google Gemini**

**Who's it for**

This workflow is designed for LinkedIn creators, personal brand builders, thought leaders, and content marketers who want to consistently create engaging content without starting from scratch. It is perfect for professionals who admire viral posts from others but want to adapt those ideas to their own unique voice and style.

If you're struggling to maintain a consistent posting schedule, looking for content inspiration, or want to repurpose trending ideas while keeping your authentic voice, this automation handles the creative heavy lifting while giving you full control over what gets published.

**How it works**

The workflow transforms viral LinkedIn posts into personalized content that matches your writing style, complete with AI-generated images, all controlled through Telegram. The process flow:

1. Send any LinkedIn post URL to your Telegram bot
2. A security check validates your Telegram user ID
3. ConnectSafely.ai scrapes the original post content and engagement metrics
4. Your custom persona profile is loaded (tone, phrases, formatting preferences)
5. Google Gemini AI rewrites the post to match YOUR voice
6. Gemini generates a professional, on-brand image for the post
7. A preview is sent to Telegram for your review
8. Approve or reject with a simple reply
9. On approval, the post goes live on LinkedIn automatically

**Setup steps**

Step 1: Create a Telegram Bot
1. Open Telegram and search for @BotFather
2. Send /newbot and follow the prompts to create your bot
3. Save the API token provided by BotFather
4. Get your Telegram user ID by messaging @userinfobot

Step 2: Configure Telegram Credentials in n8n
1. Go to Credentials → Add Credential → Telegram API
2. Paste your bot token from BotFather
3. Save the credential
4. Update all Telegram nodes to use this credential

Step 3: Set Up the Security Check
1. Open the Security Check node
2. Replace YOUR_TELEGRAM_USER_ID with your actual Telegram user ID
3. This ensures only YOU can trigger the workflow

Step 4: Configure the ConnectSafely.ai API
1. Sign up at ConnectSafely.ai
2. Navigate to Settings → API Keys in your dashboard
3. Generate a new API key
4. In n8n, go to Credentials → Add Credential → ConnectSafely API
5. Paste your API key and save
6. Connect this credential to the Scrape LinkedIn Post node

Step 5: Configure the Google Gemini API
1. Go to Google AI Studio
2. Create or select a project
3. Generate an API key
4. In n8n, go to Credentials → Add Credential → Google Gemini (PaLM) API
5. Paste your API key and save
6. Connect this credential to both the Google Gemini Chat Model node and the Generate an image node

Step 6: Connect Your LinkedIn Account
1. In n8n, go to Credentials → Add Credential → LinkedIn OAuth2 API
2. Follow the OAuth flow to connect your LinkedIn account
3. Connect this credential to the Create LinkedIn Post node
4. Update the person parameter with your LinkedIn Person ID (URN)

Step 7: Customize Your Persona
1. Open the Load Your Persona node
2. Edit the PERSONA object to match YOUR writing style:
   - Update name with your name
   - Modify expertiseAreas with your topics
   - Adjust commonPhrases with phrases you actually use
   - Set preferredEmojis to your favorites
   - Customize styleNotes to capture your unique voice

Step 8: Activate the Workflow
1. Save your workflow
2. Toggle the workflow to Active
3. Your Telegram bot is now ready to receive LinkedIn URLs

**Customization**

Persona Customization

The Load Your Persona node is where you define your unique voice. Key areas to customize:

| Field | Description | Example |
|-------|-------------|---------|
| tone | Overall communication style | "Professional yet approachable, data-driven" |
| voice | Perspective and personality | "First-person, authentic, vulnerable" |
| formatting | Structure preferences | "Short paragraphs, emoji bullets, line breaks" |
| hooks | Opening style | "Start with contrarian takes or personal stories" |
| expertiseAreas | Your niche topics | ["SaaS growth", "Leadership", "Remote work"] |
| commonPhrases | Signature expressions | ["Here's the truth:", "I learned this the hard way:"] |
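For orientation, a PERSONA object covering the fields above might look like the following. This is a minimal sketch with illustrative values, not the defaults shipped with the workflow:

```javascript
// Sketch of a PERSONA object for the Load Your Persona node.
// Every value below is a placeholder; substitute your own voice.
const PERSONA = {
  name: 'Alex Example',
  tone: 'Professional yet approachable, data-driven',
  voice: 'First-person, authentic, vulnerable',
  formatting: 'Short paragraphs, emoji bullets, line breaks',
  hooks: 'Start with contrarian takes or personal stories',
  expertiseAreas: ['SaaS growth', 'Leadership', 'Remote work'],
  commonPhrases: ["Here's the truth:", 'I learned this the hard way:'],
  preferredEmojis: ['🚀', '💡'],
  styleNotes: 'Direct and concrete, no corporate jargon',
  postLength: 'medium', // "short" | "medium" | "long"
};

return [{ json: { persona: PERSONA } }];
```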
Image Generation

The Create Image Prompt node generates the image prompt. Modify the style parameters to match your brand:

- Current style: modern, clean, corporate, vector art
- Customize: change to photography, illustrations, or abstract visuals

Post Length

In the persona configuration, adjust postLength:

- "short": quick insights (under 500 characters)
- "medium": standard posts (500-1500 characters)
- "long": deep dives (1500-3000 characters)

AI Model Selection

The workflow uses gemini-2.5-pro for text. You can switch to other models in the Google Gemini Chat Model node based on your needs.

**Requirements**

| Requirement | Details |
|-------------|---------|
| n8n Version | 1.0+ recommended |
| Telegram Bot | Created via @BotFather |
| ConnectSafely.ai Account | API key required |
| Google AI Studio Account | Gemini API key required |
| LinkedIn Account | OAuth2 connected in n8n |
| Community Node | n8n-nodes-connectsafely-ai (self-hosted only) |

Note: This workflow uses the ConnectSafely community node, which requires a self-hosted n8n instance.

**Use cases**

- **Content Repurposing**: Transform competitor or industry leader posts into your own perspective
- **Consistent Posting**: Maintain a regular posting schedule without content creation burnout
- **Style Consistency**: Ensure every post matches your established personal brand
- **Trend Riding**: Quickly create content around viral topics while they're still relevant
- **A/B Testing**: Test different approaches by approving or rejecting variations

**Troubleshooting**

Common issues and solutions:

- Issue: Bot not responding to messages. Solution: Verify the Telegram webhook is active and check that the Telegram Trigger node is properly configured.
- Issue: "Profile not found" from ConnectSafely.ai. Solution: Ensure the LinkedIn URL is complete and public. Some posts on private profiles can't be scraped.
- Issue: Image generation fails. Solution: Verify your Gemini API key has access to image generation models. Check quota limits in Google AI Studio.
- Issue: LinkedIn post fails to publish. Solution: Confirm your LinkedIn OAuth2 credentials are valid and haven't expired. Re-authorize if needed.
- Issue: AI generates posts that don't match your style. Solution: Be more specific in your persona configuration. Add more example phrases and detailed style notes.
- Issue: Security check blocks your messages. Solution: Double-check that your Telegram user ID is correctly entered (it must be a number, not a username).

**Documentation & Resources**

Official documentation:
- ConnectSafely.ai Docs: https://connectsafely.ai/docs
- Google Gemini API: https://ai.google.dev/docs
- Telegram Bot API: https://core.telegram.org/bots/api
- LinkedIn API: https://docs.microsoft.com/linkedin/

Support:
- ConnectSafely Support: support@connectsafely.ai
- n8n Community: https://community.n8n.io

Connect with us for the latest automation tips, LinkedIn strategies, and platform updates:
- LinkedIn: linkedin.com/company/connectsafelyai
- YouTube: youtube.com/@ConnectSafelyAI-v2x
- Instagram: instagram.com/connectsafely.ai
- Facebook: facebook.com/connectsafelyai
- X (Twitter): x.com/AiConnectsafely
- Bluesky: connectsafelyai.bsky.social
- Mastodon: mastodon.social/@connectsafely

**Need Custom Workflows?**

Looking to build sophisticated LinkedIn automation workflows tailored to your business needs? Contact our team for custom automation development, strategy consulting, and enterprise solutions. We specialize in:

- Multi-channel engagement workflows
- AI-powered personalization at scale
- Lead scoring and qualification automation
- CRM integration and data synchronization
- Custom reporting and analytics pipelines
by Danielle Gomes
Automatically classify incoming leads based on the sentiment of their message using Google Gemini, store them in Supabase by category, and send tailored WhatsApp messages via the official WhatsApp Cloud API.

**Use Case**

This workflow is ideal for sales, onboarding, and customer support teams who want to:

- Understand the tone and urgency of each lead
- Prioritize hot leads instantly
- Send smart, automatic WhatsApp replies based on user sentiment

**How it works**

1. Capture the lead via a Typeform webhook
2. Clean and structure the data (name, email, message, etc.)
3. Run sentiment analysis using Google Gemini to classify the message as:
   - Positive → Hot Lead
   - Neutral → Warm Lead
   - Negative → Cold Lead
4. Store the lead data in Supabase under the corresponding category
5. Merge the data to unify the flow paths
6. Send a WhatsApp message using the official WhatsApp Cloud API, with a custom reply for each sentiment result

**Tools used**

- Typeform (incoming data)
- Google Gemini (AI-based sentiment classification)
- Supabase (database)
- WhatsApp Cloud API (response automation)

Tags: AI, Sentiment Analysis, Lead Qualification, Supabase, WhatsApp, Gemini, Typeform, CRM, Automation, Customer Engagement
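The branch between steps 3 and 6 could be expressed as a single Code node that maps the classifier's label to a category and a reply template. A minimal sketch, assuming the Gemini step outputs a `sentiment` field and the cleaned lead carries a `name` (both field names are assumptions):

```javascript
// Sketch: turn the sentiment label into a lead category and WhatsApp reply.
// `sentiment` and `name` are assumed field names from the previous nodes.
const lead = $input.first().json;
const sentiment = (lead.sentiment ?? '').toLowerCase();

const routing = {
  positive: { category: 'hot',  reply: `Hi ${lead.name}, great to hear from you! Let's talk today.` },
  neutral:  { category: 'warm', reply: `Hi ${lead.name}, thanks for reaching out. Here is some more info.` },
  negative: { category: 'cold', reply: `Hi ${lead.name}, sorry to hear that. How can we help?` },
};

const match = routing[sentiment] ?? routing.neutral; // default to warm
return [{ json: { ...lead, category: match.category, reply: match.reply } }];
```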
by Solomon
This n8n template demonstrates how to obtain token usage from AI Agents and place the data in a spreadsheet that calculates the estimated cost of the execution.

Obtaining token usage from AI Agents is tricky, because the agent's output doesn't include all the data from tool calls. This workflow taps into the workflow execution metadata to extract the token usage information. It works well with OpenAI, Google, and Anthropic; other LLM providers might need small tweaks.

**How it works**

1. The AI Agent executes and then calls a subworkflow to calculate the token usage (see the sketch below).
2. The data is stored in Google Sheets.
3. The spreadsheet has formulas that calculate the estimated cost of the execution.

**How to use**

- The AI Agent is used as an example. Feel free to replace it with other agents you have.
- Call the subworkflow AFTER all the other branches have finished executing.

**Requirements**

- LLM account (OpenAI, Gemini, ...) for API usage
- Google Drive and Google Sheets credentials
- n8n API key for your instance
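The extraction step could look roughly like this Code node inside the subworkflow. It is a sketch under two assumptions: the subworkflow receives the execution data fetched via the n8n API, and each LLM call reports usage as an object with numeric `promptTokens` and `completionTokens` fields, which is how recent n8n LLM nodes report it:

```javascript
// Sketch: walk the execution data and sum every token-usage object found.
// Assumes usage objects expose numeric promptTokens / completionTokens fields.
function collectUsage(value, found = []) {
  if (value && typeof value === 'object') {
    if (typeof value.promptTokens === 'number' && typeof value.completionTokens === 'number') {
      found.push(value);
    } else {
      for (const child of Object.values(value)) collectUsage(child, found);
    }
  }
  return found;
}

const executionData = $input.first().json; // fetched via the n8n API
const usages = collectUsage(executionData);

const totals = usages.reduce(
  (acc, u) => ({
    promptTokens: acc.promptTokens + u.promptTokens,
    completionTokens: acc.completionTokens + u.completionTokens,
  }),
  { promptTokens: 0, completionTokens: 0 },
);

return [{ json: { ...totals, totalTokens: totals.promptTokens + totals.completionTokens } }];
```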
by Jimleuk
This n8n template demonstrates how to use OpenAI's Responses API with existing LLM and AI Agent nodes. Though I would recommend just waiting for official support, if you're impatient and would like a roundabout way to integrate OpenAI's Responses API into your existing AI workflows, then this template is sure to satisfy!

This approach implements a simple API wrapper for the Responses API using n8n's built-in webhooks. When the base URL is pointed at these webhooks using a custom OpenAI credential, it's possible to intercept the request and remap it for compatibility.

**How it works**

- An OpenAI subnode is attached to our agent but uses a special custom credential whose base_url is changed to point at this template's webhooks.
- When executing a query, the agent's request is forwarded to our mini chat-completion workflow.
- Here, we take the default request and remap its values for an HTTP node that is set to query the Responses API.
- Once a response is received, we remap the output for Langchain compatibility, so the LLM or Agent node can parse it and respond to the user.
- There are two response formats, one for streaming and one for non-streaming responses.

**How to use**

- You must activate this workflow to be able to use the webhooks.
- Create the custom OpenAI credential as instructed.
- Go to your existing AI workflows and swap the LLM node's credential for the custom OpenAI credential. You do not need to copy anything else over to the existing template.

**Requirements**

- OpenAI account for the Responses API

**Customising this workflow**

Feel free to experiment with other LLMs using this same technique! Keep up to date with Responses API announcements and make modifications as required.
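The two remapping steps might look roughly like the sketch below for the non-streaming case. This is an illustration rather than the template's exact code; it assumes the intercepted body is a standard Chat Completions request, and it reads the raw Responses API JSON shape, where the assistant text lives inside an `output` array:

```javascript
// Sketch 1 (before the HTTP node): Chat Completions request -> Responses API request.
const incoming = $input.first().json.body; // { model, messages: [{ role, content }], ... }
const responsesRequest = {
  model: incoming.model,
  input: incoming.messages.map(m => ({ role: m.role, content: m.content })),
};
// The HTTP node POSTs responsesRequest to https://api.openai.com/v1/responses.

// Sketch 2 (after the HTTP node): Responses API result -> Chat Completions shape.
const resp = $input.first().json;
const text = (resp.output ?? [])
  .filter(o => o.type === 'message')
  .flatMap(o => o.content ?? [])
  .filter(c => c.type === 'output_text')
  .map(c => c.text)
  .join('');

return [{
  json: {
    id: resp.id,
    object: 'chat.completion',
    model: resp.model,
    choices: [{ index: 0, message: { role: 'assistant', content: text }, finish_reason: 'stop' }],
    // usage fields would need a similar rename (input_tokens -> prompt_tokens, etc.)
  },
}];
```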
by Mary Newhauser
**RAG over a PDF with Weaviate**

This workflow allows you to upload a PDF file and ask questions about it using the Question and Answer Chain and the Weaviate Vector Store nodes.

**Who it's for**

This workflow is the simplest possible implementation of RAG with Weaviate in n8n. It's intended to act as an extendable template for RAG over your own documents.

**Prerequisites**

- An existing Weaviate cluster. You can view instructions for setting up a local cluster with Docker here or a Weaviate Cloud cluster here.
- API keys to generate embeddings and power chat models. We use OpenAI, but feel free to switch out the models as you like.
- A self-hosted n8n instance. See this video for how to get set up in just three minutes.

**How it works**

- Part 1: Manually upload data. In this example, we manually upload a 100+ page article from arXiv called "A Survey of Large Language Models", but you can replace this with your own more advanced data pipeline if you wish.
- Part 2: Embed and load data into a Weaviate collection. Here, we generate embeddings for the full text of the article and store them in Weaviate.
- Part 3: Perform RAG over the PDF file with Weaviate. In this part of the workflow, you can enter your query by running the Chat node and get a RAG response grounded in context via the Question and Answer Chain node.

**How to run the workflow**

1. Go through the prerequisites: create a Weaviate cluster (local or cloud), download self-hosted n8n, and add your API keys and other credentials.
2. Select the embedding and chat models you'd like to use.
3. Upload a PDF file you want to ask questions about.
4. Execute the rest of the workflow.
by Obsidi8n
This workflow converts any n8n workflow outputs into Markdown notes that are accessible in your Obsidian vault through Google Drive synchronization.

**Setup Requirements**

1. Create a designated folder in Google Drive (Desktop).
2. Create a symbolic link between this folder and a new target folder in your Obsidian vault.
3. Configure the Google Drive n8n node settings.

Send the output of any workflow to the trigger, and the notes will appear in your vault folder.

**Optional Features**

You can use AI agents to:

- Write notes in your preferred format (e.g., Zettelkasten).
- Compose YAML front matter.
- Suggest tags.

**Use Cases**

- Convert RSS feed items to notes.
- Create notes from YouTube video transcripts.
- Transform tasks in Slack messages into Obsidian tasks.

(Each use case requires setting up a corresponding workflow, e.g., an RSS trigger, a YouTube transcriber, or a Slack bot.)
by n8n Team
This template quickly shows how to use RAG in n8n.

**Who is this for?**

This template is for everyone who wants to start giving knowledge to their Agents through RAG.

**Requirements**

A PDF with custom knowledge that you want to provide to your agent.

**Setup**

No setup required. Just hit Execute Workflow, upload your knowledge document, and then start chatting.

**How to customize this to your needs**

- Add custom instructions to your Agent by changing its prompts.
- Add a different way to load knowledge into your vector store, e.g., by looking at some Google Drive files or loading knowledge from a table.
- Exchange the Simple Vector Store nodes with your own production-ready vector store tools.
- Add a more sophisticated way to rank files found in the vector store.

For more information, read our docs on RAG in n8n.