by Tony Paul
How it works ++Download the Google Sheet here++, upload it to your own Google Sheets account, and point the Google Sheets nodes in the workflow at your copy. Scheduled trigger: runs once a day at 8 AM (server time). Fetch product list: reads your “master” sheet (product_url + last known price) from Google Sheets. Loop with delay: iterates over each row (product) one at a time, inserting a short pause (20 s) between HTTP requests to avoid blocking. Scrape current price: loads each product_url and extracts the current price via a simple CSS selector. Compare & normalize: compares the newly scraped price against the last_price from your sheet, calculates the percentage change, and tags items where price_changed == true. On price change, send alert: formats a Telegram message (“Price Drop” or “Price Hike”) and pushes it to your configured chat. Log history: appends a new row to a separate “price_tracking” tab with timestamp, old price, new price, and % change. Update master sheet: after a 1 min pause, writes the updated current_price back to your “master” sheet so future runs use it as the new baseline. Setup steps Google Sheets credentials (~5 min): create a Google Sheets OAuth credential in n8n. Copy your sheet’s ID and ensure you have two tabs: product_data (columns: product_url, price) and price_tracking (columns: timestamp, product_url, last_price, current_price, price_diff_pct, price_changed). Paste the sheet ID into both Google Sheets nodes (“Read” and “Append/Update”). Telegram credentials (~5 min): create a Telegram bot token via BotFather, copy your chat_id (for your target group or personal chat), add those credentials to n8n, and drop them into the “Telegram” node. Workflow parameters (~5 min): verify the schedule in the Schedule Trigger node is set to 08:00 (or adjust it to your preferred run time). In the Loop Over Items node, confirm “Batch Size” is 1 (to process one URL at a time). Increase the “Delay to avoid Request Blocking” node if your site requires a longer pause (the default is 20 s). In the “Parse Data From The HTML Page” node, double-check that the CSS selector matches how prices appear on your target site. Once credentials are in place and your sheet tabs match the expected column names, the flow is ready to activate. Total setup time is under 15 minutes; detailed notes are embedded as sticky comments throughout the workflow to help you tweak selectors, change timeouts, or adjust sheet names without digging into code.
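For readers who want to see what the compare-and-normalize step boils down to, here is a minimal sketch of an n8n Code node (run once for all items); the incoming field names last_price and scraped_price are assumptions, so map them to your own sheet columns and HTML-extract output.

```typescript
// Hypothetical "Compare & normalize" Code node. $input is provided by the n8n Code node runtime.
return $input.all().map((item) => {
  // Strip currency symbols and thousands separators before parsing
  const lastPrice = parseFloat(String(item.json.last_price).replace(/[^0-9.]/g, ''));
  const currentPrice = parseFloat(String(item.json.scraped_price).replace(/[^0-9.]/g, ''));

  // Percentage change relative to the last known price
  const priceDiffPct = lastPrice > 0 ? ((currentPrice - lastPrice) / lastPrice) * 100 : 0;

  return {
    json: {
      product_url: item.json.product_url,
      last_price: lastPrice,
      current_price: currentPrice,
      price_diff_pct: Number(priceDiffPct.toFixed(2)),
      price_changed: currentPrice !== lastPrice,
    },
  };
});
```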
by David Ashby
Complete MCP server exposing 2 BIN Lookup API operations to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community Import this workflow into your n8n instance Credentials Add BIN Lookup API credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the BIN Lookup API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://api.bintable.com/v1 • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (2 total) 🔧 Balance (1 endpoint) • GET /balance: Check Balance 🔧 {Bin} (1 endpoint) • GET /{bin}: Look up a BIN 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native BIN Lookup API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
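As a rough illustration of what the underlying HTTP Request node does for GET /{bin}, here is a minimal TypeScript sketch; the api_key query parameter name is an assumption, so check the BIN Lookup API documentation and your n8n credential configuration before relying on it.

```typescript
// Direct call to the BIN Lookup endpoint (Node 18+, built-in fetch).
// In the workflow itself the bin path parameter is filled by the AI agent,
// e.g. via an n8n expression like: {{ $fromAI('bin', 'Card BIN to look up') }}
async function lookupBin(bin: string, apiKey: string): Promise<unknown> {
  const url = `https://api.bintable.com/v1/${encodeURIComponent(bin)}?api_key=${apiKey}`; // api_key param name assumed
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`BIN lookup failed with status ${response.status}`);
  }
  return response.json();
}

lookupBin('457173', 'YOUR_API_KEY').then(console.log).catch(console.error);
```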
by Autonomous Work
This workflow exports every table in a base as its own CSV, saves the files in a time-stamped folder in Amazon S3, pings you on Slack, and optionally prunes older copies. You get an automated weekly backup that is easy to inspect or re-import as needed. You can easily swap the S3 node for the storage provider of your choice. ++How it works++ Weekly Backup: the Schedule trigger fires weekly, sets and formats the week label (e.g. [2025-W12]), creates a folder in the S3 bucket named for that week, loops through all tables in the Airtable base creating CSVs and uploading them to the new path, and sends a Slack message on completion. Monthly Prune: the Schedule trigger fires weekly, sets a cut-off date 4 weeks in the past, lists folders in S3, and deletes all folders more than 4 weeks old. ++Setup Steps++ Clone the workflow, swap in credentials for Airtable, AWS, and Slack, ensure the AWS credential has an appropriate IAM policy to manage the bucket & objects, and set the workflow to "Active".
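The two date calculations the workflow depends on are small enough to sketch in TypeScript; the week label format mirrors the [2025-W12] example above and the 4-week cut-off matches the prune branch, but treat this as an illustration rather than the exact expressions in the Set nodes.

```typescript
// ISO week label such as "2025-W12" for a given date
function isoWeekLabel(date: Date): string {
  const d = new Date(Date.UTC(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate()));
  // Shift to the Thursday of the current ISO week so the year boundary is handled correctly
  d.setUTCDate(d.getUTCDate() + 4 - (d.getUTCDay() || 7));
  const yearStart = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((d.getTime() - yearStart.getTime()) / 86400000 + 1) / 7);
  return `${d.getUTCFullYear()}-W${String(week).padStart(2, '0')}`;
}

// Cut-off used by the prune branch: any backup folder older than this gets deleted
const cutoff = new Date(Date.now() - 28 * 24 * 60 * 60 * 1000);

console.log(isoWeekLabel(new Date()), 'prune anything before', cutoff.toISOString());
```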
by Davide
This workflow simulates an AI-powered phone agent with two main functions: 📅 Appointment Booking – It can schedule appointments directly into Google Calendar. 🧠 RAG-based Information Retrieval – It provides answers using a Retrieval-Augmented Generation (RAG) system. For example, it can respond to questions about store opening hours, return policies, or product details. The guide also explains how to purchase a dedicated phone number (with a +1 prefix) and link it to the AI agent. This setup is cost-effective, as it uses a FREE $10 credit to operate without additional charges in the beginning. ✨ Advantages 🕐 **24/7 Availability** – The AI agent can answer calls and assist customers at any time. 🤖 **Automation** – It reduces the workload on human staff by handling repetitive tasks like appointment scheduling and FAQ responses. 🔌 **Easy Integration** – Built with n8n, it’s flexible and customizable for various platforms and tools. 💸 **Low-cost Setup** – Using the free credit, businesses can get started without an upfront investment. 📦 Use Cases 🛍 **E-commerce** – Answer common product questions or order inquiries. 🏬 **Retail Stores** – Provide store hours, address info, and return policies. 🍽 **Restaurants** – Take reservations or share menu information. 💼 **Service Providers** – Book appointments or consultations. 📞 **Any Local Business** – Offer phone support without needing a live operator. How It Works This workflow simulates an AI-powered phone agent with two primary functions: Appointment Booking The workflow captures call events (e.g., call_ended or call_analyzed) and extracts key details (transcript, caller info, duration, etc.). Using OpenAI, it summarizes the conversation and parses structured data (e.g., names, contact info, dates). For scheduling, it converts user-provided dates into Google Calendar-compatible formats and creates events automatically. RAG-Based Information Retrieval When a query is received (e.g., store hours, product details), the workflow retrieves relevant information from a Qdrant vector store. An AI agent processes the query using the retrieved data and responds via a webhook, ensuring accurate, context-aware answers. Set Up Steps Prepare Qdrant Vector Store Create/refresh a Qdrant collection (via HTTP requests). Upload and vectorize documents (e.g., from Google Drive) using OpenAI embeddings. Configure RetellAI Agent Sign up for RetellAI, create an agent, and set the webhook URLs (n8n_call for call events, n8n_rag_function for RAG queries). Purchase a Twilio phone number and link it to the agent. n8n Workflow Setup Connect OpenAI, Qdrant, Google Calendar, and Telegram nodes with credentials. Customize prompts for summarization, date parsing, and RAG responses. Test the workflow to ensure data flows from call events → processing → actions (e.g., calendar bookings, Telegram alerts). Deploy Trigger the workflow via RetellAI webhooks during calls. Monitor outputs (e.g., call summaries in Telegram, calendar events). Note: Replace placeholders (e.g., QDRANTURL, COLLECTION, CHAT_ID) with actual values. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
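For orientation, here is a hedged TypeScript sketch of how the incoming call-event payload might be unpacked before summarization; the field names (event, call.transcript, call.from_number, call.duration_ms) are assumptions, so verify them against the payload your RetellAI agent actually posts to the n8n_call webhook.

```typescript
// Assumed shape of a RetellAI call event - confirm against the real webhook payload.
interface CallEvent {
  event: string;
  call?: {
    call_id?: string;
    transcript?: string;
    from_number?: string;
    duration_ms?: number;
  };
}

// Keep only the events the workflow cares about and pull out the fields
// that are later summarized and parsed by OpenAI.
function extractCallDetails(body: CallEvent) {
  if (body.event !== 'call_ended' && body.event !== 'call_analyzed') {
    return null; // ignore other event types
  }
  return {
    callId: body.call?.call_id ?? 'unknown',
    caller: body.call?.from_number ?? 'unknown',
    durationSeconds: Math.round((body.call?.duration_ms ?? 0) / 1000),
    transcript: body.call?.transcript ?? '',
  };
}
```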
by Aditya Gaur
Who is this template for? This template is designed for teams who need to automate data retrieval from SharePoint lists using n8n. It is ideal for users who want to authenticate via OAuth and then use the token to access SharePoint API endpoints, pulling in list data directly into n8n. How it works The template first generates an OAuth token using the Microsoft OAuth API. This token is then used to authenticate requests to the SharePoint List API, allowing the workflow to fetch data from a specified SharePoint list. By following the n8n workflow, the user can configure the necessary credentials and endpoints to automate SharePoint data access securely. Setup steps Step 1: Replace {tenant_id}, {client_id}, and {client_secret} with your Azure AD details for OAuth authentication. Step 2: Specify the SharePoint list API endpoint in the template (under "SharePoint List Fetch" node). Step 3: Configure the SharePoint list URL and make adjustments for specific data fields if necessary.
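To make the two-step flow concrete, here is a minimal TypeScript sketch of the token request followed by the list call; the client-credentials grant and all placeholder values are assumptions, so adjust them to match your Azure AD app registration, SharePoint tenant, site, and list name.

```typescript
// Step 1: request an OAuth token from Azure AD. Step 2: call the SharePoint list API with it.
async function fetchSharePointListItems(): Promise<unknown> {
  const tokenResponse = await fetch(
    'https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams({
        grant_type: 'client_credentials', // grant type assumed; use what your app registration allows
        client_id: '{client_id}',
        client_secret: '{client_secret}',
        scope: 'https://{your-tenant}.sharepoint.com/.default',
      }),
    },
  );
  const { access_token } = (await tokenResponse.json()) as { access_token: string };

  const listResponse = await fetch(
    "https://{your-tenant}.sharepoint.com/sites/{site}/_api/web/lists/getbytitle('MyList')/items",
    {
      headers: {
        Authorization: `Bearer ${access_token}`,
        Accept: 'application/json;odata=nometadata',
      },
    },
  );
  return listResponse.json();
}
```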
by Keith Rumjahn
Who's this for? Anyone who wants to improve the SEO of their website; Umami users who want insights on how to improve their site; SEO managers who need to generate reports weekly. Case study Watch the YouTube tutorial here Get my SEO A.I. agent system here You can read more about how this works here. How it works This workflow calls the Umami API to get data, then sends the data to A.I. for analysis and saves the data and analysis to Baserow. How to use this Input your Umami credentials, your website property ID, your Openrouter.ai credentials, and your Baserow credentials. You will need to create a Baserow database with columns: Date, Summary, Top Pages, Blog (name of your blog). Future development Use this as a template. There are a lot more Umami stats you can pull from the API. Change the A.I. prompt to give even more detailed analysis. Created by Rumjahn
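As a point of reference, here is a minimal TypeScript sketch of the stats call; it assumes the Umami Cloud API (api.umami.is) with an API-key header, so if you self-host Umami the base URL and authentication will differ.

```typescript
// Pull last week's headline stats for one website from the Umami Cloud API.
async function getWeeklyStats(websiteId: string, apiKey: string): Promise<unknown> {
  const endAt = Date.now();
  const startAt = endAt - 7 * 24 * 60 * 60 * 1000; // last 7 days, in milliseconds

  const url = `https://api.umami.is/v1/websites/${websiteId}/stats?startAt=${startAt}&endAt=${endAt}`;
  const response = await fetch(url, { headers: { 'x-umami-api-key': apiKey } });
  if (!response.ok) {
    throw new Error(`Umami request failed with status ${response.status}`);
  }
  return response.json(); // pageviews, visitors, bounces, total time, ...
}
```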
by David Ashby
🛠️ Demio Tool MCP Server Complete MCP server exposing all Demio Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community Import this workflow into your n8n instance Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works • MCP Trigger: Serves as your server endpoint for AI agent requests • Tool Nodes: Pre-configured for every Demio Tool operation • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Uses the official n8n Demio Tool node with full error handling 📋 Available Operations (4 total) Every possible Demio Tool operation is included: 📅 Event (3 operations) • Get an event • Get many events • Register an event 🔧 Report (1 operation) • Get a report 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Resource IDs and identifiers • Search queries and filters • Content and data payloads • Configuration options Response Format: Native Demio Tool API responses with full data structure Error Handling: Built-in n8n error management and retry logic 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • Other n8n Workflows: Call MCP tools from any workflow • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Complete Coverage: Every Demio Tool operation available • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n error handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Joseph LePage
🌐 Confluence Page AI Chatbot Workflow This n8n workflow template enables users to interact with an AI-powered chatbot designed to retrieve, process, and analyze content from Confluence pages. By leveraging Confluence's REST API and an AI agent, the workflow facilitates seamless communication and contextual insights based on Confluence page data. 🌟 How the Workflow Works 🔗 Input Chat Message The workflow begins when a user sends a chat message containing a query or request for information about a specific Confluence page. 📄 Data Retrieval The workflow uses the Confluence REST API to fetch page details by ID, including its body in the desired format (e.g., storage, view). The retrieved HTML content is converted into Markdown for easier processing. 🤖 AI Agent Interaction An AI-powered agent processes the Markdown content and provides dynamic responses to user queries. The agent is context-aware, ensuring accurate and relevant answers based on the Confluence page's content. 💬 Dynamic Responses Users can interact with the chatbot to: Summarize the page's content. Extract specific details or sections. Clarify complex information. Analyze key points or insights. 🚀 Use Cases 📚 Knowledge Management**: Quickly access and analyze information stored in Confluence without manually searching through pages. 📊 Team Collaboration**: Facilitate discussions by summarizing or explaining page content during team chats. 🔍 Research and Documentation**: Extract critical insights from large documentation repositories for efficient decision-making. ♿ Accessibility**: Provide an alternative way to interact with Confluence content for users who prefer conversational interfaces. 🛠️ Resources for Getting Started Confluence API Setup: Generate an API token for authentication via Atlassian's account management portal. Refer to Confluence's REST API documentation for endpoint details and usage instructions. n8n Installation: Install n8n locally or on a server using the official installation guide. AI Agent Configuration: Set up OpenAI or other supported language models for natural language processing.
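A minimal TypeScript sketch of the page-retrieval step, assuming Confluence Cloud with basic auth via an Atlassian API token; replace the site, email, token, and page ID placeholders with your own values.

```typescript
// Fetch a Confluence page body by ID in "storage" (HTML) format,
// which the workflow then converts to Markdown for the AI agent.
async function getConfluencePageHtml(pageId: string): Promise<string> {
  const auth = Buffer.from('you@example.com:YOUR_API_TOKEN').toString('base64');
  const response = await fetch(
    `https://your-site.atlassian.net/wiki/rest/api/content/${pageId}?expand=body.storage`,
    { headers: { Authorization: `Basic ${auth}`, Accept: 'application/json' } },
  );
  if (!response.ok) {
    throw new Error(`Confluence request failed with status ${response.status}`);
  }
  const page = (await response.json()) as { body: { storage: { value: string } } };
  return page.body.storage.value;
}
```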
by Samir Saci
Tags: Sustainability, Web Scraping, OpenAI, Google Sheets, Newsletter, Marketing Context Hey! I’m Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable business practices for small, medium, and large companies. I use this workflow to raise awareness of sustainability and promote my business by delivering automated daily news digests. > Promote your business with a fully automated newsletter powered by AI! This n8n workflow scrapes articles from the official EU news website and sends a daily curated digest, highlighting only the most relevant sustainability news. 📬 For business inquiries, feel free to connect with me on LinkedIn Who is this template for? This workflow is useful for: **Business owners** who want to promote their service or products with a fully automated newsletter **Sustainability professionals** staying informed on EU climate news **Consultants and analysts** working on CSRD, Green Deal, or ESG initiatives **Corporate communications teams** tracking relevant EU activity **Media curators** building newsletters What does it do? This n8n workflow: ⏰ Triggers automatically every morning 🌍 Scrapes articles from the EU Commission News Portal 🧠 Uses OpenAI GPT-4o to classify each article for sustainability relevance 📄 Stores the results in a Google Sheet for tracking 🧾 Generates a beautiful HTML digest email, including titles, summaries, and images 📬 Sends the digest via Gmail to your mailing list How it works Trigger at 08:30 every morning Scrape and extract article blocks from the EU news site Use OpenAI to decide if articles are sustainability-related Store relevant entries in Google Sheets Generate HTML email with a professional layout and logo Send the digest via Gmail to a configured recipient list What do I need to get started? You’ll need: A Google Sheet connected to your n8n instance An OpenAI account with GPT-4 or GPT-4o access A Gmail OAuth credential setup Follow the Guide! Follow the sticky notes inside the workflow or check out my step-by-step tutorial on how to configure and deploy it. 🎥 Watch My Tutorial Notes You can customize the system prompt to adjust how AI classifies “sustainability” Works well for tracking updates relevant to climate action, green transition, and circular economy This workflow was built using n8n version 1.85.4 Submitted: April 24, 2025
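To illustrate the classification step, here is a hedged TypeScript sketch using the openai npm package; the prompt wording and the yes/no output convention are illustrative, not the exact prompt shipped in the workflow.

```typescript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Ask the model whether a scraped EU Commission article is sustainability-related.
async function isSustainabilityRelated(title: string, summary: string): Promise<boolean> {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      {
        role: 'system',
        content:
          'You classify EU Commission news articles. Answer only "yes" or "no": ' +
          'is the article related to sustainability, climate, or the green transition?',
      },
      { role: 'user', content: `Title: ${title}\nSummary: ${summary}` },
    ],
  });
  return completion.choices[0].message.content?.trim().toLowerCase().startsWith('yes') ?? false;
}
```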
by PollupAI
This n8n workflow automates the import of your Google Keep notes into a structured Google Sheet, using Google Drive, OpenAI for AI-powered processing, and JSON file extraction. It's perfect for users who want to turn exported Keep notes into a searchable, filterable spreadsheet – optionally enhanced by AI summarization or transformation. Who is this for? Researchers, knowledge workers, and digital minimalists who rely on Google Keep and want to better organize or analyze their notes. Anyone who regularly exports Google Keep notes and wants a clean, automated workflow to store them in Google Sheets. Users looking to apply AI to process, summarize, or extract insights from raw notes. What problem is this workflow solving? Exporting Google Keep notes via Google Takeout gives you unstructured .json files that are hard to read and manage. This workflow solves that by: Filtering relevant .json files Extracting note content (Optionally) applying AI to analyze or summarize each note Writing the result into a structured Google Sheet What this workflow does Google Drive Search: Looks for .json files inside a specified "Keep" folder. Loop: Processes files in batches of 10. File Filtering: Filters by .json extension. Download + Extract: Downloads each file and extracts note content from JSON. Optional Filtering: Only keeps non-archived notes or those meeting content criteria. AI Processing (optional): Uses OpenAI to summarize or transform the note content. Prepare for Export: Maps note fields to be written. Google Sheets: Appends or updates the target sheet with the note data. Setup Export your Google Keep notes using Google Takeout: Deselect all, then choose only Google Keep. Choose “Send download link via email”. Unzip the downloaded archive and upload the .json files to your Google Drive. Connect Google Drive, OpenAI, and Google Sheets in n8n. Set the correct folder path for your notes in the “Search in ‘Keep’ folder” node. Point the Google Sheets node to your spreadsheet. How to customize this workflow to your needs Skip AI processing: If you don't need summaries or transformations, remove or disable the OpenAI Chat Model node. Filter criteria: Customize the Filter node to extract only recent notes, or those containing specific keywords. AI prompts: Edit the Tools Agent or Chat Model node to instruct the AI to summarize, extract tasks, categorize notes, etc. Field mapping: Adjust the “Set fields for export” node to control what gets written to the spreadsheet. Use this template to build a powerful knowledge extraction tool from your Google Keep archive – ideal for backups, audits, or data-driven insights.
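Here is a small TypeScript sketch of the extract-and-filter step; the field names (title, textContent, isArchived, isTrashed, userEditedTimestampUsec) reflect a typical Google Keep Takeout export, but inspect one of your own .json files and adjust if your export differs.

```typescript
// Assumed shape of a Google Keep Takeout note - verify against your own export.
interface KeepNote {
  title?: string;
  textContent?: string;
  isArchived?: boolean;
  isTrashed?: boolean;
  userEditedTimestampUsec?: number;
}

// Mirror the optional filter step: keep only active notes and map them to sheet columns.
function toSheetRow(note: KeepNote) {
  if (note.isArchived || note.isTrashed) {
    return null;
  }
  return {
    title: note.title ?? '',
    content: note.textContent ?? '',
    lastEdited: note.userEditedTimestampUsec
      ? new Date(note.userEditedTimestampUsec / 1000).toISOString()
      : '',
  };
}
```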
by Ranjan Dailata
Notice Community nodes can only be installed on self-hosted instances of n8n. Who this is for The Automated Resume Job Matching Engine is an intelligent workflow designed for career platforms, HR tech startups, recruiting firms, and AI developers who want to streamline job-resume matching using real-time data from LinkedIn and job boards. This workflow is tailored for: **HR Tech Founders** - Building next-gen recruiting products **Recruiters & Talent Sourcers** - Seeking automated candidate-job fit evaluation **Job Boards & Portals** - Enriching user experience with AI-driven job recommendations **Career Coaches & Resume Writers** - Offering personalized job fit analysis **AI Developers** - Automating large-scale matching tasks using LinkedIn and job data What problem is this workflow solving? Manually matching a resume to a job description is time-consuming, biased, and inefficient. Additionally, accessing live job postings and candidate profiles requires overcoming web scraping limitations. This workflow solves: Automated LinkedIn profile and job post data extraction using Bright Data MCP infrastructure Semantic matching between job requirements and the candidate resume using OpenAI 4o mini Pagination handling for high-volume job data End-to-end automation from scraping to delivery via webhook and persisting the job-matched response to disk What this workflow does Bright Data MCP for Job Data Extraction Uses Bright Data MCP Clients to extract multiple job listings (supports pagination) Pulls job data from LinkedIn with the pre-defined filtering criteria OpenAI 4o mini LLM Matching Engine Extracts paginated job listing data, and the textual job description for each posting, from pages scraped with the Bright Data MCP scrape_as_html tool. The AI Job Matching node compares each job description against the candidate resume to generate match scores with insights Data Delivery Sends the final match report to a Webhook Notification endpoint Persists the AI-matched job response to disk Pre-conditions Knowledge of Model Context Protocol (MCP) is essential. Please read this blog post - model-context-protocol You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below. You need to have the Google Gemini API Key. Visit Google AI Studio You need to install the Bright Data MCP Server @brightdata/mcp You need to install the n8n-nodes-mcp Setup Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine. Sign up at Bright Data. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel. In n8n, configure the OpenAI account credentials. In n8n, configure the credentials to connect with the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN within the Environments textbox above as API_TOKEN=<your-token>. Update the Set input fields for the candidate resume, keywords, and other filtering criteria. Update the Webhook HTTP Request node with the Webhook endpoint of your choice. Update the file name and path to persist on disk.
How to customize this workflow to your needs Target Different Job Boards Set the input fields to target sites like Indeed, ZipRecruiter, or Monster Customize Matching Criteria Adjust the prompt inside the AI Job Match node Include scoring metrics like skills match %, experience relevance, or cultural fit (a sketch of such a scoring prompt follows below) Automate Scheduling Use a Cron Node to periodically check for new jobs matching a profile Set triggers based on webhook or input form submissions Output Customization Add Markdown/PDF formatting for report summaries Extend with Google Sheets export for internal analytics Enhance Data Security Mask personal info before sending to external endpoints
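To make the matching step more tangible, here is a hedged TypeScript sketch using the openai npm package; the score schema (match_score, matching_skills, gaps, summary) is one illustration of the kind of scoring metrics mentioned above, not the workflow's actual prompt.

```typescript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Compare one job description against the candidate resume and return a structured score.
async function matchResumeToJob(resume: string, jobDescription: string) {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content:
          'Compare the candidate resume to the job description. Respond as JSON with fields: ' +
          'match_score (0-100), matching_skills (string[]), gaps (string[]), summary (string).',
      },
      { role: 'user', content: `Resume:\n${resume}\n\nJob description:\n${jobDescription}` },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? '{}');
}
```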
by Peter Zendzian
This n8n template demonstrates how to automate comprehensive web research using multiple AI models to find, analyze, and extract insights from authoritative sources. Use cases are many: Try automating competitive analysis research, finding latest regulatory guidance from official sources, gathering authoritative content for reports, or conducting market research on industry developments! Good to know Each research query typically costs $0.08-$0.34 depending on the number of sources found and processed. The workflow includes smart filtering to minimize unnecessary API calls. The workflow requires multiple AI services and may need additional setup time compared to simpler templates. Qdrant storage is optional and can be removed without affecting performance. How it works Your research question gets transformed into optimized Google search queries that target authoritative sources while filtering out low-quality sites. Apify's RAG Web Browser scrapes the content and converts pages to clean markdown format. Claude Sonnet 4 evaluates each article for relevance and quality before full processing. Articles that pass the filter get analyzed in parallel - one pipeline creates focused summaries while another extracts specific claims and evidence. GPT-4.1 Mini ranks all findings and presents the top 3 most valuable insights and summaries. All processed content gets stored in your Qdrant vector database to prevent duplicate processing and enable future reference. How to use The manual trigger node is used as an example but feel free to replace this with other triggers such as webhook, form submissions, or scheduled research. You can modify the configuration variables in the Set Node to customize Qdrant URLs, collection names, and quality thresholds for your specific needs. Requirements OpenAI API account for GPT-4.1 Mini (query optimization, summarization, ranking) Anthropic API account for Claude Sonnet 4 (content filtering) Apify account for web scraping capabilities Qdrant vector database instance (local or cloud) Ollama with nomic-embed-text model for embeddings Customizing this workflow Web research automation can be adapted for many specialized use cases. Try focusing on specific domains like legal research (targeting .gov and .edu sites), medical research (PubMed and health authorities), or financial analysis (SEC filings and analyst reports).
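As an illustration of the query shaping described above, here is a small TypeScript sketch; the excluded domains and authority filters are examples only, not the workflow's actual lists.

```typescript
// Turn one research question into a few Google queries that favor authoritative
// sources and drop known low-quality domains.
function buildSearchQueries(question: string): string[] {
  const excluded = ['pinterest.com', 'quora.com'].map((d) => `-site:${d}`).join(' ');
  return [
    `${question} ${excluded}`,                                          // broad query
    `${question} site:.gov OR site:.edu ${excluded}`,                   // bias toward official sources
    `${question} (report OR whitepaper OR official guidance) ${excluded}`,
  ];
}

console.log(buildSearchQueries('EU AI Act compliance requirements 2025'));
```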