by Tony Paul
## How it works

Download the Google Sheet template, upload it to your own Google Sheets account, and select it in the Google Sheets nodes.

- **Scheduled trigger:** Runs once a day at 8 AM (server time).
- **Fetch product list:** Reads your "master" sheet (product_url + last known price) from Google Sheets.
- **Loop with delay:** Iterates over each row (product) one at a time, inserting a short pause (20 s) between HTTP requests to avoid blocking.
- **Scrape current price:** Loads each product_url and extracts the current price via a simple CSS selector.
- **Compare & normalize:** Compares the newly scraped price against the last_price from your sheet, calculates the percentage change, and tags items where price_changed == true.
- **On price change:**
  - **Send alert:** Formats a Telegram message ("Price Drop" or "Price Hike") and pushes it to your configured chat.
  - **Log history:** Appends a new row to a separate "price_tracking" tab with timestamp, old price, new price, and % change.
- **Update master sheet:** After a 1 min pause, writes the updated current_price back to your "master" sheet so future runs use it as the new baseline.

## Setup steps

**Google Sheets credentials (~5 min)**
- Create a Google Sheets OAuth credential in n8n.
- Copy your sheet's ID and ensure you have two tabs:
  - product_data (columns: product_url, price)
  - price_tracking (columns: timestamp, product_url, last_price, current_price, price_diff_pct, price_changed)
- Paste the sheet ID into both Google Sheets nodes ("Read" and "Append/Update").

**Telegram credentials (~5 min)**
- Create a Telegram bot token via BotFather.
- Copy your chat_id (for your target group or personal chat).
- Add those credentials to n8n and drop them into the "Telegram" node.

**Workflow parameters (~5 min)**
- Verify the schedule in the Schedule Trigger node is set to 08:00 (or adjust to your preferred run time).
- In the Loop Over Items node, confirm "Batch Size" is 1 (to process one URL at a time).
- Adjust the "Delay to avoid Request Blocking" node if your site requires a longer pause (default is 20 s).
- In the "Parse Data From The HTML Page" node, double-check that the CSS selector matches how prices appear on your target site.

Once credentials are in place and your sheet tabs match the expected column names, the flow is ready to activate. Total setup time is under 15 minutes. Detailed notes are embedded as sticky comments throughout the workflow to help you tweak selectors, change timeouts, or adjust sheet names without digging into code.
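For reference, the compare-and-normalize step can be reproduced in an n8n Code node roughly as below. This is a minimal sketch, assuming each item carries `price` (the last known value from the sheet) and `scraped_price` (the freshly scraped text); adjust the field names to your own columns and locale.

```javascript
// Minimal sketch of the "Compare & normalize" step for an n8n Code node
// (run once for all items). Field names are assumptions — match your sheet.
return items.map((item) => {
  const lastPrice = parseFloat(item.json.price);
  // Strip currency symbols and thousands separators; locales with a decimal
  // comma will need a different normalization.
  const currentPrice = parseFloat(String(item.json.scraped_price).replace(/[^0-9.]/g, ''));

  const priceDiffPct =
    Number.isFinite(lastPrice) && lastPrice !== 0
      ? ((currentPrice - lastPrice) / lastPrice) * 100
      : 0;

  return {
    json: {
      ...item.json,
      last_price: lastPrice,
      current_price: currentPrice,
      price_diff_pct: Number(priceDiffPct.toFixed(2)),
      price_changed:
        Number.isFinite(lastPrice) && Number.isFinite(currentPrice) && currentPrice !== lastPrice,
    },
  };
});
```

The resulting `price_diff_pct` and `price_changed` fields then feed the Telegram alert and the price_tracking log.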
by Diego
## What this template does

This workflow reads your Zotero library and extracts metadata from the articles of one collection in your bibliography. You can personalize the output for optimized results.

## How it works

Mainly, follow the instructions in the sticky (Post-it) notes:

1. Go to https://www.zotero.org/settings/security and find your USER ID (it's right under the APPLICATIONS section).
2. On the same page, create a new private key.
3. In the "Collections" node, select Generic Credential Type > Header Auth > Create New Credential using:
   - NAME: Zotero-API-Key
   - VALUE: [Your Private Key]
4. Run your flow to check that it works and open the "Select Collection" node.
5. View the results of the previous node as a TABLE and copy the "KEY" of the collection you want to use.

After that you should have a working flow that reads your bibliography. You can edit or delete the last two nodes (Filter and Edit Fields) to personalize your results.
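For orientation, the calls behind the "Collections" and items retrieval look roughly like the sketch below. It is a hedged example against the public Zotero Web API v3; the user ID, collection key, and API key are placeholders to replace with your own values.

```javascript
// Rough sketch of the Zotero Web API calls behind the workflow (API v3, Node 18+ ES module).
// USER_ID, COLLECTION_KEY and ZOTERO_API_KEY are placeholders.
const USER_ID = '1234567';
const COLLECTION_KEY = 'ABCD1234';
const ZOTERO_API_KEY = 'your-private-key';

const headers = { 'Zotero-API-Key': ZOTERO_API_KEY, 'Zotero-API-Version': '3' };

// List collections (the "Collections" node) — copy the key of the one you want.
const collections = await fetch(
  `https://api.zotero.org/users/${USER_ID}/collections`,
  { headers }
).then((r) => r.json());

// Fetch the items (article metadata) of that collection.
const items = await fetch(
  `https://api.zotero.org/users/${USER_ID}/collections/${COLLECTION_KEY}/items?format=json`,
  { headers }
).then((r) => r.json());

console.log(collections.length, 'collections;', items.length, 'items in the selected collection');
```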
by Joseph LePage
## 🌐 Confluence Page AI Chatbot Workflow

This n8n workflow template enables users to interact with an AI-powered chatbot designed to retrieve, process, and analyze content from Confluence pages. By leveraging Confluence's REST API and an AI agent, the workflow facilitates seamless communication and contextual insights based on Confluence page data.

### 🌟 How the Workflow Works

- **🔗 Input Chat Message**: The workflow begins when a user sends a chat message containing a query or request for information about a specific Confluence page.
- **📄 Data Retrieval**: The workflow uses the Confluence REST API to fetch page details by ID, including its body in the desired format (e.g., storage, view). The retrieved HTML content is converted into Markdown for easier processing.
- **🤖 AI Agent Interaction**: An AI-powered agent processes the Markdown content and provides dynamic responses to user queries. The agent is context-aware, ensuring accurate and relevant answers based on the Confluence page's content.
- **💬 Dynamic Responses**: Users can interact with the chatbot to:
  - Summarize the page's content.
  - Extract specific details or sections.
  - Clarify complex information.
  - Analyze key points or insights.

### 🚀 Use Cases

- **📚 Knowledge Management**: Quickly access and analyze information stored in Confluence without manually searching through pages.
- **📊 Team Collaboration**: Facilitate discussions by summarizing or explaining page content during team chats.
- **🔍 Research and Documentation**: Extract critical insights from large documentation repositories for efficient decision-making.
- **♿ Accessibility**: Provide an alternative way to interact with Confluence content for users who prefer conversational interfaces.

### 🛠️ Resources for Getting Started

- **Confluence API Setup**: Generate an API token for authentication via Atlassian's account management portal. Refer to Confluence's REST API documentation for endpoint details and usage instructions.
- **n8n Installation**: Install n8n locally or on a server using the official installation guide.
- **AI Agent Configuration**: Set up OpenAI or other supported language models for natural language processing.
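For context, the data-retrieval step corresponds roughly to the Confluence Cloud REST call sketched below. This is a hedged example: the site URL, page ID, and credentials are placeholders, and your instance or body format may differ.

```javascript
// Rough sketch of the Confluence Cloud REST call the workflow performs (Node 18+).
// BASE_URL, PAGE_ID, EMAIL and API_TOKEN are placeholders — substitute your own.
const BASE_URL = 'https://your-domain.atlassian.net/wiki';
const PAGE_ID = '123456';
const EMAIL = 'you@example.com';
const API_TOKEN = 'your-atlassian-api-token';

const auth = Buffer.from(`${EMAIL}:${API_TOKEN}`).toString('base64');

// Fetch the page by ID and expand its body in "storage" (HTML-like) format.
const res = await fetch(
  `${BASE_URL}/rest/api/content/${PAGE_ID}?expand=body.storage`,
  { headers: { Authorization: `Basic ${auth}`, Accept: 'application/json' } }
);
const page = await res.json();

// page.body.storage.value holds the HTML that the workflow converts to Markdown.
console.log(page.title, page.body.storage.value.slice(0, 200));
```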
by Anderson Adelino
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Build intelligent AI chatbot with RAG and Cohere Reranker

### Who is it for?

This template is perfect for developers, businesses, and automation enthusiasts who want to create intelligent chatbots that can answer questions based on their own documents. Whether you're building customer support systems, internal knowledge bases, or educational assistants, this workflow provides a solid foundation for document-based AI conversations.

### How it works

This workflow creates an intelligent AI assistant that combines RAG (Retrieval-Augmented Generation) with Cohere's reranking technology for more accurate responses:

- **Chat Interface**: Users interact with the AI through a chat interface
- **Document Processing**: PDFs from Google Drive are automatically extracted and converted into searchable vectors
- **Smart Search**: When users ask questions, the system searches through vectorized documents using semantic search
- **Reranking**: Cohere's reranker ensures the most relevant information is prioritized
- **AI Response**: OpenAI generates contextual answers based on the retrieved information
- **Memory**: Conversation history is maintained for context-aware interactions

### Setup steps

**Prerequisites**
- n8n instance (self-hosted or cloud)
- OpenAI API key
- Supabase account with the vector extension enabled
- Google Drive access
- Cohere API key

**1. Configure Supabase Vector Store**

First, create a table in Supabase with vector support:

```sql
CREATE TABLE cafeina (
  id SERIAL PRIMARY KEY,
  content TEXT,
  metadata JSONB,
  embedding VECTOR(1536)
);

-- Create a function for similarity search
CREATE OR REPLACE FUNCTION match_cafeina(
  query_embedding VECTOR(1536),
  match_count INT DEFAULT 10
)
RETURNS TABLE(
  id INT,
  content TEXT,
  metadata JSONB,
  similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    cafeina.id,
    cafeina.content,
    cafeina.metadata,
    1 - (cafeina.embedding <=> query_embedding) AS similarity
  FROM cafeina
  ORDER BY cafeina.embedding <=> query_embedding
  LIMIT match_count;
END;
$$;
```

**2. Set up credentials**

Add the following credentials in n8n:
- **OpenAI**: Add your OpenAI API key
- **Supabase**: Add your Supabase URL and service role key
- **Google Drive**: Connect your Google account
- **Cohere**: Add your Cohere API key

**3. Configure the workflow**
- In the "Download file" node, replace URL DO ARQUIVO with your Google Drive file URL
- Adjust the table name in both Supabase Vector Store nodes if needed
- Customize the agent's tool description in the "searchCafeina" node

**4. Load your documents**
- Execute the bottom workflow (starting with "When clicking 'Execute workflow'")
- This will download your PDF, extract text, and store it in Supabase
- You can repeat this process for multiple documents

**5. Start chatting**

Once documents are loaded, activate the main workflow and start chatting with your AI assistant through the chat interface.
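Outside of n8n, the retrieval the Supabase Vector Store node performs is roughly equivalent to the sketch below. This is a hedged example using supabase-js and the OpenAI embeddings API; the table and function names follow the SQL above, while the keys and question text are placeholders.

```javascript
// Rough sketch of the retrieval step: embed the question, then call match_cafeina.
// SUPABASE_URL, SUPABASE_SERVICE_KEY and OPENAI_API_KEY are placeholders (env vars).
import { createClient } from '@supabase/supabase-js';
import OpenAI from 'openai';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_KEY);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const question = 'What does the document say about caffeine metabolism?';

// 1. Embed the user question (1536 dimensions, matching VECTOR(1536) above).
const embeddingRes = await openai.embeddings.create({
  model: 'text-embedding-ada-002',
  input: question,
});
const queryEmbedding = embeddingRes.data[0].embedding;

// 2. Run the similarity search defined by the match_cafeina function.
const { data: matches, error } = await supabase.rpc('match_cafeina', {
  query_embedding: queryEmbedding,
  match_count: 10,
});
if (error) throw error;

// 3. These chunks are what the Cohere reranker and the agent would receive next.
console.log(matches.map((m) => ({ similarity: m.similarity, preview: m.content.slice(0, 80) })));
```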
### How to customize

- **Different document types**: Replace the Google Drive node with other sources (Dropbox, S3, local files)
- **Multiple knowledge bases**: Create separate vector stores for different topics
- **Custom prompts**: Modify the agent's system message for specific use cases
- **Language models**: Switch between different OpenAI models or use other LLM providers
- **Reranking settings**: Adjust the top-k parameter for more or fewer search results
- **Memory window**: Configure the conversation memory buffer size

### Tips for best results

- Use high-quality, well-structured documents for better search accuracy
- Keep document chunks reasonably sized for optimal retrieval
- Regularly update your vector store with new information
- Monitor token usage to optimize costs
- Test different reranking thresholds for your use case

### Common use cases

- **Customer Support**: Create bots that answer questions from product documentation
- **HR Assistant**: Build assistants that help employees find information in company policies
- **Educational Tutor**: Develop tutors that answer questions from course materials
- **Research Assistant**: Create tools that help researchers find relevant information in papers
- **Legal Helper**: Build assistants that search through legal documents and contracts
by Samir Saci
Tags: Sustainability, Web Scraping, OpenAI, Google Sheets, Newsletter, Marketing

## Context

Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable business practices for small, medium and large companies. I use this workflow to bring awareness about sustainability and promote my business by delivering automated daily news digests.

> Promote your business with a fully automated newsletter powered by AI!

This n8n workflow scrapes articles from the official EU news website and sends a daily curated digest, highlighting only the most relevant sustainability news.

📬 For business inquiries, feel free to connect with me on LinkedIn.

## Who is this template for?

This workflow is useful for:
- **Business owners** who want to promote their service or products with a fully automated newsletter
- **Sustainability professionals** staying informed on EU climate news
- **Consultants and analysts** working on CSRD, Green Deal, or ESG initiatives
- **Corporate communications teams** tracking relevant EU activity
- **Media curators** building newsletters

## What does it do?

This n8n workflow:
- ⏰ Triggers automatically every morning
- 🌍 Scrapes articles from the EU Commission News Portal
- 🧠 Uses OpenAI GPT-4o to classify each article for sustainability relevance
- 📄 Stores the results in a Google Sheet for tracking
- 🧾 Generates a beautiful HTML digest email, including titles, summaries, and images
- 📬 Sends the digest via Gmail to your mailing list

## How it works

1. Trigger at 08:30 every morning
2. Scrape and extract article blocks from the EU news site
3. Use OpenAI to decide if articles are sustainability-related
4. Store relevant entries in Google Sheets
5. Generate an HTML email with a professional layout and logo
6. Send the digest via Gmail to a configured recipient list

## What do I need to get started?

You'll need:
- A Google Sheet connected to your n8n instance
- An OpenAI account with GPT-4 or GPT-4o access
- A Gmail OAuth credential setup

## Follow the Guide!

Follow the sticky notes inside the workflow or check out my step-by-step tutorial on how to configure and deploy it.
🎥 Watch My Tutorial

## Notes

- You can customize the system prompt to adjust how the AI classifies "sustainability"
- Works well for tracking updates relevant to climate action, green transition, and circular economy
- This workflow was built using n8n version 1.85.4

Submitted: April 24, 2025
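As a rough illustration of the classification step, a Code or HTTP Request node could call the OpenAI Chat Completions API along these lines. This is a minimal sketch: the system prompt and JSON shape are assumptions you would tune to your own definition of "sustainability".

```javascript
// Minimal sketch of the article-classification call (assumed prompt and output shape).
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function classifyArticle(title, summary) {
  const res = await openai.chat.completions.create({
    model: 'gpt-4o',
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content:
          'You classify EU Commission news. Reply with JSON {"relevant": boolean, "reason": string}. ' +
          'An article is relevant if it concerns sustainability, climate action, the Green Deal, ' +
          'CSRD, ESG or the circular economy.',
      },
      { role: 'user', content: `Title: ${title}\nSummary: ${summary}` },
    ],
  });
  return JSON.parse(res.choices[0].message.content);
}

console.log(await classifyArticle(
  'Commission proposes new rules on packaging waste',
  'The proposal aims to cut packaging waste and boost reuse and recycling across the EU.'
));
```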
by PollupAI
This n8n workflow automates the import of your Google Keep notes into a structured Google Sheet, using Google Drive, OpenAI for AI-powered processing, and JSON file extraction. It's perfect for users who want to turn exported Keep notes into a searchable, filterable spreadsheet – optionally enhanced by AI summarization or transformation.

## Who is this for?

- Researchers, knowledge workers, and digital minimalists who rely on Google Keep and want to better organize or analyze their notes.
- Anyone who regularly exports Google Keep notes and wants a clean, automated workflow to store them in Google Sheets.
- Users looking to apply AI to process, summarize, or extract insights from raw notes.

## What problem is this workflow solving?

Exporting Google Keep notes via Google Takeout gives you unstructured .json files that are hard to read and manage. This workflow solves that by:
- Filtering relevant .json files
- Extracting note content
- (Optionally) applying AI to analyze or summarize each note
- Writing the result into a structured Google Sheet

## What this workflow does

1. **Google Drive Search**: Looks for .json files inside a specified "Keep" folder.
2. **Loop**: Processes files in batches of 10.
3. **File Filtering**: Filters by .json extension.
4. **Download + Extract**: Downloads each file and extracts note content from JSON.
5. **Optional Filtering**: Only keeps non-archived notes or those meeting content criteria.
6. **AI Processing (optional)**: Uses OpenAI to summarize or transform the note content.
7. **Prepare for Export**: Maps note fields to be written.
8. **Google Sheets**: Appends or updates the target sheet with the note data.

## Setup

1. Export your Google Keep notes using Google Takeout: deselect all, then choose only Google Keep, and choose "Send download link via email".
2. Unzip the downloaded archive and upload the .json files to your Google Drive.
3. Connect Google Drive, OpenAI, and Google Sheets in n8n.
4. Set the correct folder path for your notes in the "Search in 'Keep' folder" node.
5. Point the Google Sheets node to your spreadsheet.

## How to customize this workflow to your needs

- **Skip AI processing**: If you don't need summaries or transformations, remove or disable the OpenAI Chat Model node.
- **Filter criteria**: Customize the Filter node to extract only recent notes, or those containing specific keywords.
- **AI prompts**: Edit the Tools Agent or Chat Model node to instruct the AI to summarize, extract tasks, categorize notes, etc.
- **Field mapping**: Adjust the "Set fields for export" node to control what gets written to the spreadsheet.

Use this template to build a powerful knowledge extraction tool from your Google Keep archive – ideal for backups, audits, or data-driven insights.
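To give a feel for the extraction step, a Code node could map each parsed Takeout note to spreadsheet-ready fields roughly as sketched below. This is a hedged example: field names such as `title`, `textContent`, and `isArchived` reflect typical Google Keep Takeout exports and may differ in yours.

```javascript
// Minimal sketch for an n8n Code node: map a parsed Keep Takeout JSON note
// to the fields written to Google Sheets. Field names (title, textContent,
// isArchived, userEditedTimestampUsec) are assumptions based on typical
// Takeout exports — verify against your own files.
return items
  .map((item) => item.json)
  .filter((note) => !note.isArchived && !note.isTrashed) // keep active notes only
  .map((note) => ({
    json: {
      title: note.title || '(untitled)',
      content: note.textContent || '',
      labels: (note.labels || []).map((l) => l.name).join(', '),
      editedAt: note.userEditedTimestampUsec
        ? new Date(Number(note.userEditedTimestampUsec) / 1000).toISOString()
        : null,
    },
  }));
```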
by Gofive
# Template: Create an AI Knowledge Base Chatbot with Google Drive and OpenAI GPT (Venio/Salesbear)

## 📋 Template Overview

This comprehensive n8n workflow template creates an intelligent AI chatbot that automatically transforms your Google Drive documents into a searchable knowledge base. The chatbot uses OpenAI's GPT models to provide accurate, context-aware responses based exclusively on your uploaded documents, making it perfect for customer support, internal documentation, and knowledge management systems.

## 🎯 What This Template Does

### Automated Knowledge Processing

- **Real-time Document Monitoring**: Automatically detects when files are added or updated in your designated Google Drive folder
- **Intelligent Document Processing**: Converts PDFs, text files, and other documents into searchable vector embeddings
- **Smart Text Chunking**: Breaks down large documents into optimally-sized chunks for better AI comprehension
- **Vector Storage**: Creates a searchable knowledge base that the AI can query for relevant information

### AI-Powered Chat Interface

- **Webhook Integration**: Receives questions via HTTP requests from any external platform (Venio/Salesbear)
- **Contextual Responses**: Maintains conversation history for natural, flowing interactions
- **Source-Grounded Answers**: Provides responses based strictly on your document content, preventing hallucinations
- **Multi-platform Support**: Works with any chat platform that can send HTTP requests

## 🔧 Pre-conditions and Requirements

### Required API Accounts and Permissions

**1. Google Drive API Access**
- Google Cloud Platform account
- Google Drive API enabled
- OAuth2 credentials configured
- Read access to your target Google Drive folder

**2. OpenAI API Account**
- Active OpenAI account with API access
- Sufficient API credits for embeddings and chat completions
- API key with appropriate permissions

**3. n8n Instance**
- n8n cloud account or self-hosted instance
- Webhook functionality enabled
- Ability to install community nodes (LangChain nodes)

**4. Target Chat Platform (Optional)**
- API credentials for your chosen chat platform
- Webhook capability or API endpoints for message sending

### Required Permissions

- **Google Drive**: Read access to folder contents and file downloads
- **OpenAI**: API access for the text-embedding-ada-002 and gpt-4o-mini models
- **External Platform**: API access for sending/receiving messages (if integrating with existing chat systems)

## 🚀 Detailed Workflow Operation

### Phase 1: Knowledge Base Creation

1. **File Monitoring**: Two trigger nodes continuously monitor your Google Drive folder for new files or updates
2. **Document Discovery**: When changes are detected, the workflow searches for and identifies the modified files
3. **Content Extraction**: Downloads the actual file content from Google Drive
4. **Text Processing**: Uses LangChain's document loader to extract text from various file formats
5. **Intelligent Chunking**: Splits documents into overlapping chunks (configurable size) for optimal AI processing
6. **Vector Generation**: Creates embeddings using OpenAI's text-embedding-ada-002 model
7. **Storage**: Stores vectors in an in-memory vector store for instant retrieval

### Phase 2: Chat Interaction

1. **Question Reception**: Webhook receives user questions in JSON format
2. **Data Extraction**: Parses incoming data to extract chat content and session information
3. **AI Processing**: The AI Agent analyzes the question and determines relevant context
4. **Knowledge Retrieval**: Searches the vector store for the most relevant document sections
5. **Response Generation**: OpenAI generates responses based on the found content and conversation history
6. **Authentication**: Validates the request using token-based authentication
7. **Response Delivery**: Sends the answer back to the originating platform

## 📚 Usage Instructions After Setup

### Adding Documents to Your Knowledge Base

- **Upload Files**: Simply drag and drop documents into your configured Google Drive folder
- **Supported Formats**: PDFs, TXT, DOC, DOCX, and other text-based formats
- **Automatic Processing**: The workflow will automatically detect and process new files within minutes
- **Updates**: Modify existing files, and the knowledge base will automatically update

### Integrating with Your Chat Platform

Use the generated webhook URL to send questions:

```
POST https://your-n8n-domain/webhook/your-custom-path
Content-Type: application/json

{
  "body": {
    "Data": {
      "ChatMessage": {
        "Content": "What are your business hours?",
        "RoomId": "user-123-session",
        "Platform": "web",
        "User": {
          "CompanyId": "company-456"
        }
      }
    }
  }
}
```

**Response Format**: The chatbot returns structured responses that your platform can display.

### Testing Your Chatbot

- **Initial Test**: Send a simple question about content you know exists in your documents
- **Context Testing**: Ask follow-up questions to test conversation memory
- **Edge Cases**: Try questions about topics not in your documents to verify appropriate responses
- **Performance**: Monitor response times and accuracy

## 🎨 Customization Options

### System Message Customization

Modify the AI Agent's system message to match your brand and use case:

```
You are a [YOUR_BRAND] customer support specialist. You provide helpful, accurate
information based on our documentation. Always maintain a [TONE] tone and
[SPECIFIC_GUIDELINES].
```
### Response Behavior Customization

- **Tone and Voice**: Adjust from professional to casual, formal to friendly
- **Response Length**: Configure for brief answers or detailed explanations
- **Fallback Messages**: Customize what the bot says when it can't find relevant information
- **Language Support**: Adapt for different languages or technical terminologies

### Technical Configuration Options

**Document Processing**
- **Chunk Size**: Adjust from 1000 to 4000 characters based on your document complexity
- **Overlap**: Modify the overlap percentage for better context preservation
- **File Types**: Add support for additional document formats

**AI Model Configuration**
- **Model Selection**: Switch between gpt-4o-mini (cost-effective) and gpt-4 (higher quality)
- **Temperature**: Adjust creativity vs. factual accuracy (0.0 to 1.0)
- **Max Tokens**: Control response length limits

**Memory and Context**
- **Conversation Window**: Adjust how many previous messages to remember
- **Session Management**: Configure session timeout and user identification
- **Context Retrieval**: Tune how many document chunks to consider per query

### Integration Customization

**Authentication Methods**
- **Token-based**: Default implementation with bearer tokens
- **API Key**: Simple API key validation
- **OAuth**: Full OAuth2 implementation for secure access
- **Custom Headers**: Validate specific headers or signatures

**Response Formatting**
- **JSON Structure**: Customize the response format for your platform
- **Markdown Support**: Enable rich text formatting in responses
- **Error Handling**: Define custom error messages and codes

## 🎯 Specific Use Case Examples

### Customer Support Chatbot

- **Scenario**: E-commerce company with product documentation, return policies, and FAQ documents
- **Setup**: Upload product manuals, policy documents, and common questions to Google Drive
- **Customization**: Professional tone, concise answers, escalation triggers for complex issues
- **Integration**: Website chat widget, mobile app, or customer portal

### Internal HR Knowledge Base

- **Scenario**: Company HR department with employee handbook, policies, and procedures
- **Setup**: Upload HR policies, benefits information, and procedural documents
- **Customization**: Friendly but professional tone, detailed policy explanations
- **Integration**: Internal Slack bot, employee portal, or HR ticketing system

### Technical Documentation Assistant

- **Scenario**: Software company with API documentation, user guides, and troubleshooting docs
- **Setup**: Upload API docs, user manuals, and technical specifications
- **Customization**: Technical tone, code examples, step-by-step instructions
- **Integration**: Developer portal, support ticket system, or documentation website

### Educational Content Helper

- **Scenario**: Educational institution with course materials, policies, and student resources
- **Setup**: Upload syllabi, course content, academic policies, and student guides
- **Customization**: Helpful and encouraging tone, detailed explanations
- **Integration**: Learning management system, student portal, or mobile app

### Healthcare Information Assistant

- **Scenario**: Medical practice with patient information, procedures, and policy documents
- **Setup**: Upload patient guidelines, procedure explanations, and practice policies
- **Customization**: Compassionate tone, clear medical explanations, disclaimer messaging
- **Integration**: Patient portal, appointment system, or mobile health app

## 🔧 Advanced Customization Examples

### Multi-Language Support

```javascript
// In the Edit Fields node, detect the language and route accordingly
const language = $json.body.Data.ChatMessage.Language || 'en';
const systemMessage = {
  'en': 'You are a helpful customer support assistant...',
  'es': 'Eres un asistente de soporte al cliente útil...',
  'fr': 'Vous êtes un assistant de support client utile...'
}[language];
```
### Department-Specific Routing

```javascript
// Route questions to different knowledge bases based on department
const department = $json.body.Data.ChatMessage.Department;
const vectorStoreKey = `vector_store_${department}`;
```

### Advanced Analytics Integration

```javascript
// Track conversation metrics
const analytics = {
  userId: $json.body.Data.ChatMessage.User.Id,
  timestamp: new Date().toISOString(),
  question: $json.body.Data.ChatMessage.Content,
  response: $json.response,
  responseTime: $json.processingTime
};
```

## 📊 Performance Optimization Tips

### Document Management

- **Optimal File Size**: Keep documents under 10MB for faster processing
- **Clear Structure**: Use headers and sections for better chunking
- **Regular Updates**: Remove outdated documents to maintain accuracy
- **Logical Organization**: Group related documents in subfolders

### Response Quality

- **System Message Refinement**: Regularly update based on user feedback
- **Context Tuning**: Adjust chunk size and overlap for your specific content
- **Testing Framework**: Implement systematic testing for response accuracy
- **User Feedback Loop**: Collect and analyze user satisfaction data

### Cost Management

- **Model Selection**: Use gpt-4o-mini for cost-effective responses
- **Caching Strategy**: Implement response caching for frequently asked questions
- **Usage Monitoring**: Track API usage and set up alerts
- **Batch Processing**: Process multiple documents efficiently

## 🛡️ Security and Compliance

### Data Protection

- **Document Security**: Ensure sensitive documents are properly secured
- **Access Control**: Implement proper authentication and authorization
- **Data Retention**: Configure appropriate data retention policies
- **Audit Logging**: Track all interactions for compliance

### Privacy Considerations

- **User Data**: Minimize collection and storage of personal information
- **Session Management**: Implement secure session handling
- **Compliance**: Ensure adherence to relevant privacy regulations
- **Encryption**: Use HTTPS for all communications

## 🚀 Deployment and Scaling

### Production Readiness

- **Environment Variables**: Use environment variables for sensitive configurations
- **Error Handling**: Implement comprehensive error handling and logging
- **Monitoring**: Set up monitoring for workflow health and performance
- **Backup Strategy**: Ensure document and configuration backups

### Scaling Considerations

- **Load Testing**: Test with expected user volumes
- **Rate Limiting**: Implement appropriate rate limiting
- **Database Scaling**: Consider an external vector database for large-scale deployments
- **Multi-Instance**: Configure for multiple n8n instances if needed

## 📈 Success Metrics and KPIs

### Quantitative Metrics

- **Response Accuracy**: Percentage of correct answers
- **Response Time**: Average time from question to answer
- **User Satisfaction**: Rating scores and feedback
- **Usage Volume**: Questions per day/week/month
- **Cost Efficiency**: Cost per interaction

### Qualitative Metrics

- **User Feedback**: Qualitative feedback on response quality
- **Use Case Coverage**: Percentage of user needs addressed
- **Knowledge Gaps**: Identification of missing information
- **Conversation Quality**: Natural flow and context understanding
by Ranjan Dailata
## Notice

Community nodes can only be installed on self-hosted instances of n8n.

## Who this is for

The Automated Resume Job Matching Engine is an intelligent workflow designed for career platforms, HR tech startups, recruiting firms, and AI developers who want to streamline job-resume matching using real-time data from LinkedIn and job boards.

This workflow is tailored for:
- **HR Tech Founders** - Building next-gen recruiting products
- **Recruiters & Talent Sourcers** - Seeking automated candidate-job fit evaluation
- **Job Boards & Portals** - Enriching user experience with AI-driven job recommendations
- **Career Coaches & Resume Writers** - Offering personalized job fit analysis
- **AI Developers** - Automating large-scale matching tasks using LinkedIn and job data

## What problem is this workflow solving?

Manually matching a resume to a job description is time-consuming, biased, and inefficient. Additionally, accessing live job postings and candidate profiles requires overcoming web scraping limitations. This workflow solves:
- Automated LinkedIn profile and job post data extraction using the Bright Data MCP infrastructure
- Semantic matching between job requirements and the candidate resume using OpenAI GPT-4o mini
- Pagination handling for high-volume job data
- End-to-end automation, from scraping to delivery via webhook and persistence of the matched-job response to disk

## What this workflow does

**Bright Data MCP for job data extraction**
- Uses Bright Data MCP clients to extract multiple job listings (supports pagination)
- Pulls job data from LinkedIn with the pre-defined filtering criteria

**OpenAI GPT-4o mini LLM matching engine**
- Extracts paginated job data from the Bright Data MCP results via the MCP scrape_as_html tool
- Extracts textual job description information from the scraped job pages, again via the scrape_as_html tool
- The AI Job Matching node compares the job description with the candidate resume to generate match scores with insights

**Data delivery**
- Sends the final match report to a webhook notification endpoint
- Persists the AI-matched job response to disk

## Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and must complete the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

## Setup

1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel.
6. In n8n, configure the OpenAI account credentials.
7. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.
8. Update the Set input fields for the candidate resume, keywords, and other filtering criteria.
9. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
10. Update the file name and path to persist on disk.
## How to customize this workflow to your needs

**Target different job boards**
- Set the input fields to sites like Indeed, ZipRecruiter, or Monster

**Customize matching criteria** (see the sketch after this list)
- Adjust the prompt inside the AI Job Match node
- Include scoring metrics like skills match %, experience relevance, or cultural fit

**Automate scheduling**
- Use a Cron node to periodically check for new jobs matching a profile
- Set triggers based on webhook or input form submissions

**Output customization**
- Add Markdown/PDF formatting for report summaries
- Extend with Google Sheets export for internal analytics

**Enhance data security**
- Mask personal info before sending to external endpoints
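As a starting point for the matching prompt, the snippet below is a minimal, hedged sketch of how the AI Job Match step could ask GPT-4o mini for a structured score. The JSON fields (overall_score, skills_match_pct, and so on) are assumptions to adapt to your own criteria, not the template's exact schema.

```javascript
// Hedged sketch of a match-scoring call — field names and ranges are assumptions.
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function scoreMatch(resumeText, jobDescription) {
  const res = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content:
          'Compare the resume with the job description. Reply with JSON ' +
          '{"overall_score": 0-100, "skills_match_pct": 0-100, ' +
          '"experience_relevance": 0-100, "insights": [string]}.',
      },
      { role: 'user', content: `RESUME:\n${resumeText}\n\nJOB DESCRIPTION:\n${jobDescription}` },
    ],
  });
  return JSON.parse(res.choices[0].message.content);
}
```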
by Keith Rumjahn
## Who's this for?

- Anyone who wants to improve the SEO of their website
- Umami users who want insights on how to improve their site
- SEO managers who need to generate reports weekly

## Case study

Watch the YouTube tutorial here. Get my SEO A.I. agent system here. You can read more about how this works here.

## How it works

1. This workflow calls the Umami API to get data
2. Then it sends the data to A.I. for analysis
3. It saves the data and analysis to Baserow

## How to use this

1. Input your Umami credentials
2. Input your website property ID
3. Input your Openrouter.ai credentials
4. Input your Baserow credentials

You will need to create a Baserow database with the columns: Date, Summary, Top Pages, Blog (name of your blog).

## Future development

Use this as a template. There's a lot more Umami stats you can pull from the API. Change the A.I. prompt to give even more detailed analysis.

Created by Rumjahn
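For context, the first step boils down to authenticated Umami API requests along these lines. This is a hedged sketch against Umami v2-style endpoints; the host, website ID, and token are placeholders, and self-hosted instances may differ from Umami Cloud.

```javascript
// Hedged sketch of pulling weekly stats from the Umami API (v2-style endpoints).
// UMAMI_HOST, WEBSITE_ID and the token are placeholders — use your own values.
const UMAMI_HOST = 'https://analytics.example.com';
const WEBSITE_ID = 'your-website-id';
const API_TOKEN = process.env.UMAMI_API_TOKEN;

const endAt = Date.now();
const startAt = endAt - 7 * 24 * 60 * 60 * 1000; // last 7 days

const headers = { Authorization: `Bearer ${API_TOKEN}` };

// Overall stats for the window.
const stats = await fetch(
  `${UMAMI_HOST}/api/websites/${WEBSITE_ID}/stats?startAt=${startAt}&endAt=${endAt}`,
  { headers }
).then((r) => r.json());

// Top pages for the same window, e.g. to feed into the AI summary.
const topPages = await fetch(
  `${UMAMI_HOST}/api/websites/${WEBSITE_ID}/metrics?type=url&startAt=${startAt}&endAt=${endAt}`,
  { headers }
).then((r) => r.json());

console.log({ stats, topPages });
```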
by Peter Zendzian
This n8n template demonstrates how to automate comprehensive web research using multiple AI models to find, analyze, and extract insights from authoritative sources.

Use cases are many: try automating competitive analysis research, finding the latest regulatory guidance from official sources, gathering authoritative content for reports, or conducting market research on industry developments!

## Good to know

- Each research query typically costs $0.08-$0.34 depending on the number of sources found and processed. The workflow includes smart filtering to minimize unnecessary API calls.
- The workflow requires multiple AI services and may need additional setup time compared to simpler templates.
- Qdrant storage is optional and can be removed without affecting performance.

## How it works

1. Your research question gets transformed into optimized Google search queries that target authoritative sources while filtering out low-quality sites.
2. Apify's RAG Web Browser scrapes the content and converts pages to clean Markdown format.
3. Claude Sonnet 4 evaluates each article for relevance and quality before full processing.
4. Articles that pass the filter get analyzed in parallel - one pipeline creates focused summaries while another extracts specific claims and evidence.
5. GPT-4.1 Mini ranks all findings and presents the top 3 most valuable insights and summaries.
6. All processed content gets stored in your Qdrant vector database to prevent duplicate processing and enable future reference.

## How to use

The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook, form submissions, or scheduled research. You can modify the configuration variables in the Set node to customize Qdrant URLs, collection names, and quality thresholds for your specific needs.

## Requirements

- OpenAI API account for GPT-4.1 Mini (query optimization, summarization, ranking)
- Anthropic API account for Claude Sonnet 4 (content filtering)
- Apify account for web scraping capabilities
- Qdrant vector database instance (local or cloud)
- Ollama with the nomic-embed-text model for embeddings

## Customizing this workflow

Web research automation can be adapted for many specialized use cases. Try focusing on specific domains like legal research (targeting .gov and .edu sites), medical research (PubMed and health authorities), or financial analysis (SEC filings and analyst reports).
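To illustrate the query-targeting idea, a Code node could assemble domain-restricted Google queries roughly as below. This is a minimal sketch under assumed site lists; the actual template builds its queries with GPT-4.1 Mini rather than fixed rules.

```javascript
// Minimal sketch: turn a research question into Google queries that favour
// authoritative domains and exclude low-quality sites. The site lists here
// are illustrative assumptions, not the template's built-in configuration.
function buildQueries(question) {
  const preferredSites = ['site:.gov', 'site:.edu', 'site:sec.gov'];
  const excludedSites = ['-site:pinterest.com', '-site:quora.com'];

  return preferredSites.map((site) =>
    [`"${question}"`, site, ...excludedSites].join(' ')
  );
}

console.log(buildQueries('EU CSRD reporting thresholds 2025'));
// → [ '"EU CSRD reporting thresholds 2025" site:.gov -site:pinterest.com -site:quora.com', ... ]
```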
by Niklas Hatje
## Use Case

This workflow retrieves all members of a Discord server or guild who have a specific role. Due to limitations in the Discord API, it only returns a limited number of users per call. To overcome this, the workflow uses Google Sheets to track which user we received last, so it can return all members (of a certain role) from a Discord server in batches of 100 members.

## Setup

1. Add your Google Sheets and Discord credentials.
2. Create a Google Sheets document that contains ID as a column. We're using this to remember which member we received last.
3. Edit the fields in the setup node "Setup: Edit this to get started". You can read up on how to get the Discord IDs via this link.
4. Link to your Discord server in the Discord nodes.
5. Activate the workflow.
6. Call the production webhook URL in your browser.

## Requirements

- Admin rights in the Discord server and access to the Discord developer portal
- Google Sheets
- Minimum n8n version 1.28.0

## Potential use cases

- Writing a direct message to all members of a certain role
- Analysing user growth on Discord regularly
- Analysing role distributions on Discord regularly
- Saving new members in a Discord ...

## Keywords

Discord API, Getting all members from Discord via API, Google Sheets and Discord automation, How to get all Discord members via API
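Under the hood this is the Discord List Guild Members endpoint, which pages with an `after` cursor. A hedged sketch of the raw calls (bot token, guild ID, and role ID are placeholders) looks like this:

```javascript
// Hedged sketch of paging through guild members and filtering by role.
// BOT_TOKEN, GUILD_ID and ROLE_ID are placeholders — use your own values.
// Note: listing guild members requires the Server Members privileged intent for your bot.
const BOT_TOKEN = process.env.DISCORD_BOT_TOKEN;
const GUILD_ID = '123456789012345678';
const ROLE_ID = '234567890123456789';

async function membersWithRole() {
  const result = [];
  let after = '0'; // the workflow stores this cursor in Google Sheets between runs

  while (true) {
    const res = await fetch(
      `https://discord.com/api/v10/guilds/${GUILD_ID}/members?limit=100&after=${after}`,
      { headers: { Authorization: `Bot ${BOT_TOKEN}` } }
    );
    const batch = await res.json();
    if (!Array.isArray(batch) || batch.length === 0) break;

    result.push(...batch.filter((m) => m.roles.includes(ROLE_ID)));
    after = batch[batch.length - 1].user.id; // remember the last user we received
  }
  return result;
}

console.log((await membersWithRole()).length, 'members with the role');
```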
by M Shehroz Sajjad
## What problem does it solve?

Manual candidate screening is time-consuming and inconsistent. This workflow automates initial interviews, providing 24/7 availability, consistent questioning, and objective assessments for every candidate.

## Who is it for?

- HR teams handling high-volume recruiting
- Small businesses without dedicated recruiters
- Companies scaling their hiring process
- Remote-first organizations needing asynchronous screening

## What this workflow does

Creates AI interviewers from job descriptions that conduct natural conversations with candidates via BeyondPresence Agents. Automatically analyzes interviews and saves structured assessments to Google Sheets.

## Setup

1. Copy the template sheet: BeyondPresence HR Interview System Template
2. Add credentials:
   - BeyondPresence API Key
   - OpenAI API
   - Google Sheets
3. Configure the webhook in the BeyondPresence dashboard: https://[your-n8n-instance]/webhook/beyondpresence-hr-interviews
4. Paste the job description and run the setup
5. Share the generated link with candidates

## How it works

1. **Agent Creation**: Converts the job description into a conversational AI interviewer
2. **Interview Conduct**: Candidates chat naturally with the AI via a shared link
3. **Webhook Trigger**: Completed interviews are sent to n8n
4. **AI Analysis**: OpenAI evaluates responses against the job requirements
5. **Results Storage**: Assessments are saved to Google Sheets with scores and recommendations

## Resources

- Google Sheets Template
- BeyondPresence Documentation
- Webhook Setup Guide

## Example Use Case

A tech startup screens 200 applicants for an engineering role. It creates an AI interviewer in 2 minutes and sends the link to all candidates. It receives structured assessments within 24 hours, identifying the top 20 candidates for human interviews. This reduces initial screening time from 2 weeks to 2 days.