by Usama Rehman
Advanced Gmail AI Auto-Responder with Context Intelligence

The next-generation email automation that knows your communication style, remembers conversations, and responds with human-like intelligence.

**🚀 What Makes This Advanced?**

Unlike basic AI email responders, this workflow creates contextually intelligent responses by:

- 📄 Reading your communication profile from Google Drive
- 🧠 Remembering full conversation history with vector embeddings
- 🎯 Understanding context from previous emails in the thread
- 🤖 Using AI agents instead of simple prompt-response patterns
- 💾 Building memory of your communication style and preferences

The result: responses that sound authentically like you, with perfect context awareness.

**⏱️ Time & Impact**

- Setup time: 45 minutes
- Time saved: 2-3 hours daily
- Skill level: Intermediate-Advanced
- Monthly cost: $20-30 (OpenAI API + storage)
- Intelligence level: Human-like contextual awareness

**🛠️ Prerequisites & Setup**

Required accounts:

- n8n Cloud or self-hosted instance (AI features required)
- Gmail account with API access
- Google Drive with a profile document
- OpenAI account (GPT-4o recommended)

Required credentials in n8n:

- Gmail OAuth2 API
- Google Drive OAuth2 API
- OpenAI API (with sufficient credits)
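To make the context-assembly idea concrete, here is a minimal sketch of an n8n Code node that combines the Drive profile, the vector-memory lookup, and the incoming email into a single prompt. The node names (`Read Profile From Drive`, `Vector Memory Lookup`) and field names (`profileText`, `textPlain`) are illustrative assumptions, not the template's actual configuration:

```javascript
// Hypothetical Code node: build the reply prompt from the communication
// profile, retrieved thread history, and the new email.
const profileText = $('Read Profile From Drive').first().json.profileText;
const threadHistory = $('Vector Memory Lookup').all()
  .map((item) => item.json.text)
  .join('\n---\n');
const latestEmail = $json.textPlain; // assumed plain-text body from the Gmail trigger

return [{
  json: {
    prompt: [
      'Reply to the email below in the style described by this profile.',
      `Communication profile:\n${profileText}`,
      `Relevant thread history:\n${threadHistory}`,
      `New email:\n${latestEmail}`,
    ].join('\n\n'),
  },
}];
```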
by Samir Saci
Tags: Sustainability, Web Scraping, OpenAI, Google Sheets, Newsletter, Marketing

**Context**

Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable business practices for small, medium, and large companies.

I use this workflow to bring awareness to sustainability and promote my business by delivering automated daily news digests.

> Promote your business with a fully automated newsletter powered by AI!

This n8n workflow scrapes articles from the official EU news website and sends a daily curated digest, highlighting only the most relevant sustainability news.

📬 For business inquiries, feel free to connect with me on LinkedIn.

**Who is this template for?**

This workflow is useful for:

- **Business owners** who want to promote their services or products with a fully automated newsletter
- **Sustainability professionals** staying informed on EU climate news
- **Consultants and analysts** working on CSRD, Green Deal, or ESG initiatives
- **Corporate communications teams** tracking relevant EU activity
- **Media curators** building newsletters

**What does it do?**

This n8n workflow:

- ⏰ Triggers automatically every morning
- 🌍 Scrapes articles from the EU Commission News Portal
- 🧠 Uses OpenAI GPT-4o to classify each article for sustainability relevance
- 📄 Stores the results in a Google Sheet for tracking
- 🧾 Generates a beautiful HTML digest email, including titles, summaries, and images
- 📬 Sends the digest via Gmail to your mailing list

**How it works**

1. Trigger at 08:30 every morning
2. Scrape and extract article blocks from the EU news site
3. Use OpenAI to decide whether each article is sustainability-related (see the sketch after this section)
4. Store relevant entries in Google Sheets
5. Generate an HTML email with a professional layout and logo
6. Send the digest via Gmail to a configured recipient list

**What do I need to get started?**

You'll need:

- A Google Sheet connected to your n8n instance
- An OpenAI account with GPT-4 or GPT-4o access
- A Gmail OAuth credential set up

**Follow the Guide!**

Follow the sticky notes inside the workflow or check out my step-by-step tutorial on how to configure and deploy it.

🎥 Watch My Tutorial

**Notes**

- You can customize the system prompt to adjust how the AI classifies "sustainability"
- Works well for tracking updates relevant to climate action, the green transition, and the circular economy
- This workflow was built using n8n version 1.85.4

Submitted: April 24, 2025
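The relevance check in step 3 can stay very simple. Here is a minimal sketch of a Code node placed after the OpenAI classification step, assuming the model writes its verdict into a `classification` field (the actual field name depends on how the OpenAI node is configured in your copy):

```javascript
// Keep only the articles the model flagged as sustainability-related.
// `classification` is an assumed output field of the upstream OpenAI node.
return $input.all().filter((item) =>
  String(item.json.classification || '')
    .toLowerCase()
    .includes('sustainab') // matches "sustainable" and "sustainability"
);
```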
by Mohammad Ghaffarifar
This template creates a Telegram AI assistant that answers questions based on your documents, powered by Google Gemini and Supabase. Key features include intelligent HTML post-processing for rich formatting in Telegram and adaptive message chunking to handle long text responses.

📹 Watch the Bot in Action: a live demo on YouTube walks through the bot's core features, capabilities, and user flow.

**How it works:**

1. A user uploads a PDF document to the Telegram bot.
2. The workflow processes the PDF, creates embeddings using Google Gemini, and stores them in a Supabase vector table.
3. Users then ask the bot questions.
4. The workflow performs a vector search in Supabase to find the document chunks most relevant to the user's query.
5. Google Gemini uses the retrieved chunks to generate an intelligent answer.
6. The bot sends the formatted answer back to the user on Telegram, using HTML markup for enhanced presentation.

**Set up steps:**

Setup should take approximately 15-20 minutes.

1. Import the workflow into your n8n instance.
2. Configure credentials for Telegram, Google Gemini, and Supabase.
3. Set up your Supabase vector table using the provided SQL script.
4. Activate the workflow.

Detailed setup instructions, including how to get API keys and configure nodes, are available in the sticky notes within the workflow itself.
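Adaptive message chunking matters because Telegram caps a single text message at 4,096 characters. A minimal sketch of the idea, assuming the reply sits in an `answer` field and that splitting on blank lines is acceptable for keeping HTML tags intact (oversized single paragraphs would still need a hard split):

```javascript
// Split a long formatted answer into Telegram-sized chunks.
const LIMIT = 4000; // safety margin below Telegram's 4096-character cap
const answer = $json.answer; // assumed field holding the formatted reply

const chunks = [];
let current = '';
for (const block of answer.split('\n\n')) {
  if (current && (current.length + block.length + 2) > LIMIT) {
    chunks.push(current); // flush the current chunk and start a new one
    current = block;
  } else {
    current = current ? `${current}\n\n${block}` : block;
  }
}
if (current) chunks.push(current);

// One output item per chunk, sent as consecutive Telegram messages.
return chunks.map((text) => ({ json: { text } }));
```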
by Ranjan Dailata
**Who this is for**

The Google Trends Data Extract & Summarization workflow is ideal for trend researchers, digital marketers, content strategists, and AI developers who want to automate the extraction, summarization, and distribution of Google Trends data. This end-to-end solution turns trend signals into human-readable insights and delivers them across multiple channels.

It is built for:

- **Market Researchers** - Tracking trends by topic or region
- **Content Strategists** - Identifying content opportunities from trending data
- **SEO Analysts** - Monitoring search volume and shifts in keyword popularity
- **Growth Hackers** - Reacting quickly to real-time search behavior
- **AI & Automation Engineers** - Creating automated trend monitoring systems

**What problem is this workflow solving?**

Google Trends data can provide rich insight into user interests, but the raw data is not always structured or easily interpretable at scale. Manually extracting, cleaning, and summarizing trends from multiple regions or categories is time-consuming.

This workflow solves the following problems:

- Automates the conversion of markdown or scraped HTML into clean textual input
- Transforms unstructured data into a structured format ready for processing
- Uses AI summarization to generate easy-to-read insights from Google Trends
- Distributes summaries via email and webhook notifications
- Persists responses to disk for archiving, auditing, or future analytics

**What this workflow does**

1. **Receives input**: Sets a URL for the data extraction and analysis.
2. **Extracts content**: Uses Bright Data's Web Unlocker to extract content from the relevant site (see the sketch after this section).
3. **Markdown to Textual Data Extractor**: Converts markdown content into plain text using n8n's Function or Markdown nodes.
4. **Structured Data Extract**: Parses the plain text into structured JSON suitable for AI processing.
5. **Summarize Google Trends**: Sends the structured data to Google Gemini with a summarization prompt to extract key takeaways.
6. **Send Summary via Gmail**: Composes an email with the AI-generated summary and sends it to a designated recipient.
7. **Persist to Disk**: Writes the AI-structured data to disk.
8. **Webhook Notification**: Sends the summarized response to an external system (e.g., Slack, Notion, Zapier) using a webhook.

**Setup**

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node with the content URL and your Web Unlocker zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**

- **Update the source**: Change the workflow input to read from Google Sheets, Airtable, etc.
- **Gemini prompt tuning**: Customize prompts to extract summaries such as "Summarize the most significant trend shifts" or "Generate content ideas from the trending search topics".
- **Email personalization**: Configure the Gmail node to use dynamic subject lines such as `Weekly Google Trends Summary – {{date}}`, and send to multiple stakeholders or mailing lists.
- **File storage customization**: Save with timestamps (e.g., `trends_summary_2025-04-29.json`), or extend to S3 or cloud drive integrations.
- **Webhook use cases**: Send the summary to internal dashboards, Slack channels, or automation tools like Make and Zapier.
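For orientation, here is a plain JavaScript sketch of the call the HTTP Request node is configured to make against Bright Data's Web Unlocker API. The endpoint and body shape follow Bright Data's documented request API, but the zone name, token, and target URL are placeholders you must replace, and you should verify the parameters against your own zone settings:

```javascript
// Sketch of a Web Unlocker request: fetch a page through Bright Data.
const response = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer XXXXXXXXXXXXXX', // your Web Unlocker token
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1', // your Web Unlocker zone name
    url: 'https://trends.google.com/trends/trendingsearches/daily?geo=US',
    format: 'raw', // return the raw page content
  }),
});

const content = await response.text(); // HTML/markdown for the next step
```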
by Immanuel
Automated Raw Materials Inventory Management with Google Sheets, Supabase, and Gmail using n8n Webhooks

**What Problem Does This Solve? 🛠️**

This workflow automates raw materials inventory management for businesses, eliminating manual stock updates, delayed material issue approvals, and missed low-stock alerts. It ensures real-time stock tracking, streamlined approvals, and timely notifications.

Target audience: small to medium-sized businesses, inventory managers, and n8n users familiar with Google Sheets, Supabase, and Gmail integrations.

**What Does It Do? 🌟**

- Receives raw material data and issue requests via form submissions.
- Updates stock levels in Google Sheets and Supabase.
- Manages approvals for material issue requests with email notifications.
- Detects low stock levels and sends alerts via Gmail.
- Maintains data consistency across Google Sheets and Supabase.

**Key Features**

- Real-time stock updates from form submissions.
- Automated approval process for material issuance.
- Low stock detection with Gmail notifications.
- Dual storage in Google Sheets and Supabase for redundancy.
- Error handling for robust data validation.

**Setup Instructions**

Prerequisites:

- **n8n instance**: Self-hosted or cloud n8n instance.
- **API credentials**:
  - Google Sheets API: credentials from the Google Cloud Console with the Sheets scope, stored in n8n credentials.
  - Supabase API: API key and URL from your Supabase project, stored in n8n credentials (do not hardcode in nodes).
  - Gmail API: credentials from the Google Cloud Console with the Gmail scope.
- **Forms**: A form (e.g., Google Form) to submit raw material receipts and issue requests, configured to send data to the n8n webhooks.

Installation steps:

1. **Import the workflow**: Copy the workflow JSON from the "Template Code" section (to be provided) and import it into n8n via "Import from File" or "Import from URL".
2. **Configure credentials**: Add API credentials in n8n's Credentials section for Google Sheets, Supabase, and Gmail, and assign them to the respective nodes. For example: in the Append Raw Materials node, use Google Sheets credentials ({{ $credentials.GoogleSheets }}); in the Current Stock Update node, use Supabase credentials ({{ $credentials.Supabase }}); in the Send Low Stock Email Alert node, use Gmail credentials.
3. **Set up nodes**:
   - Webhook nodes (Receive Raw Materials Webhook, Receive Material Issue Webhook): configure the webhook URLs and link them to your form submissions.
   - Approval email (Send Approval Request): customize the HTML email template if needed.
   - Low stock alerts (Send Low Stock Email Alert, Send Low Stock Email After Issue): configure recipient email addresses.
4. **Test the workflow**: Submit a test form for a raw material receipt and verify stock updates in Google Sheets/Supabase. Submit a material issue request, approve/reject it, and confirm stock updates and notifications.

**How It Works**

High-level steps:

1. **Receive raw materials**: Processes form submissions for raw material receipts.
2. **Update stock**: Updates stock levels in Google Sheets and Supabase.
3. **Handle issue requests**: Processes material issue requests via forms.
4. **Manage approvals**: Sends approval requests and processes decisions.
5. **Monitor stock levels**: Detects low stock and sends Gmail alerts.

Detailed node descriptions are available in the sticky notes within the workflow screenshot (to be provided). Below is a summary of key actions.

**Node Names and Actions**

Raw materials receiving and stock update:

- **Receive Raw Materials Webhook**: Receives raw material data from a form submission.
- **Standardize Raw Material Data**: Maps form data into a consistent format.
- **Calculate Total Price**: Computes Total Price (Quantity Received × Unit Price).
- **Append Raw Materials**: Records the receipt in Google Sheets.
- **Check Quantity Received Validity**: Ensures Quantity Received is valid.
- **Lookup Existing Stock**: Retrieves current stock for the Product ID.
- **Check If Product Exists**: Branches based on Product ID existence.
- **Calculate Updated Current Stock**: Adds Quantity Received to stock (True branch).
- **Update Current Stock**: Updates stock in Google Sheets (True branch).
- **Retrieve Updated Stock for Check**: Retrieves updated stock for the low stock check.
- **Detect Low Stock Level**: Flags if stock is below the minimum.
- **Trigger Low Stock Alert**: Triggers an email if stock is low.
- **Send Low Stock Email Alert**: Sends a low stock alert via Gmail.
- **Add New Product to Stock**: Adds a new product to stock (False branch).
- **Current Stock Update**: Updates the Supabase Current Stock table.
- **New Row Current Stock**: Inserts the new product into Supabase.
- **Search Current Stock**: Retrieves Supabase stock records.
- **New Record Raw**: Inserts the raw material record into Supabase.
- **Format Response**: Removes duplicates from the Supabase response.
- **Combine Stock Update Branches**: Merges the branches for existing/new products.

Material issue request and approval:

- **Receive Material Issue Webhook**: Receives the issue request from a form submission.
- **Standardize Data**: Normalizes request data and adds the Approval Link.
- **Validate Issue Request Data**: Ensures Quantity Requested is valid.
- **Verify Requested Quantity**: Validates the Product ID and Submission ID.
- **Append Material Request**: Records the request in Google Sheets.
- **Check Available Stock for Issue**: Retrieves current stock for the request.
- **Prepare Approval**: Checks stock sufficiency for the request.
- **Send Approval Request**: Emails the approver with Approve/Reject options.
- **Receive Approval Response**: Captures the approver's decision via webhook.
- **Format Approval Response**: Processes the approval data with the Approval Date.
- **Verify Approval Data**: Validates the approval response.
- **Retrieve Issue Request Details**: Retrieves the original request from Google Sheets.
- **Process Approval Decision**: Branches based on the approval action.
- **Get Stock for Issue Update**: Retrieves stock before the update (Approved).
- **Deduct Issued Stock**: Reduces stock by the Approved Quantity (Approved).
- **Update Stock After Issue**: Updates stock in Google Sheets (Approved).
- **Retrieve Stock After Issue**: Retrieves updated stock for the low stock check.
- **Detect Low Stock After Issue**: Flags low stock after issuance.
- **Trigger Low Stock Alert After Issue**: Triggers an email if stock is low.
- **Send Low Stock Email After Issue**: Sends a low stock alert via Gmail.
- **Update Issue Request Status**: Updates the request status (Approved/Rejected).
- **Combine Stock Lookup Results**: Merges the stock lookup branches.
- **Create Record Issue**: Inserts the issue request into Supabase.
- **Search Stock by Product ID**: Retrieves Supabase stock records.
- **Issues Table Update**: Updates the Supabase Materials Issued table.
- **Update Current Stock**: Updates Supabase stock after issuance.
- **Combine Issue Lookup Branches**: Merges the issue lookup branches.
- **Search Issue by Submission ID**: Retrieves Supabase issue records.

**Customization Tips**

- **Expand storage options**: Add nodes to store data in other databases (e.g., Airtable) alongside Google Sheets and Supabase.
- **Modify the approval email**: Update the Send Approval Request node to customize the HTML email template (e.g., adjust styling or add branding).
- **Alternative notifications**: Add nodes to send low stock alerts via other platforms (e.g., Slack or Telegram).
- **Adjust the low stock threshold**: Modify the Detect Low Stock Level node to change the Minimum Stock Level (default: 50).
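To illustrate the arithmetic behind the Calculate Total Price and Detect Low Stock Level nodes, here is a minimal Code-node sketch. The field names mirror the node descriptions above, but the exact keys in your sheet may differ, and the 50-unit threshold matches the stated default:

```javascript
// Validate the receipt, compute the total price, and flag low stock.
const MIN_STOCK = 50; // default Minimum Stock Level

const quantityReceived = Number($json['Quantity Received']);
if (!Number.isFinite(quantityReceived) || quantityReceived <= 0) {
  throw new Error('Invalid Quantity Received'); // mirrors the validity check node
}

const totalPrice = quantityReceived * Number($json['Unit Price']);
const currentStock = Number($json['Current Stock']) + quantityReceived;

return [{
  json: {
    ...$json,
    'Total Price': totalPrice,
    'Current Stock': currentStock,
    lowStock: currentStock < MIN_STOCK, // drives the Gmail alert branch
  },
}];
```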
by Gofive
Template: Create an AI Knowledge Base Chatbot with Google Drive and OpenAI GPT (Venio/Salesbear)

**📋 Template Overview**

This comprehensive n8n workflow template creates an intelligent AI chatbot that automatically transforms your Google Drive documents into a searchable knowledge base. The chatbot uses OpenAI's GPT models to provide accurate, context-aware responses based exclusively on your uploaded documents, making it perfect for customer support, internal documentation, and knowledge management systems.

**🎯 What This Template Does**

Automated knowledge processing:

- **Real-time Document Monitoring**: Automatically detects when files are added or updated in your designated Google Drive folder
- **Intelligent Document Processing**: Converts PDFs, text files, and other documents into searchable vector embeddings
- **Smart Text Chunking**: Breaks down large documents into optimally sized chunks for better AI comprehension
- **Vector Storage**: Creates a searchable knowledge base that the AI can query for relevant information

AI-powered chat interface:

- **Webhook Integration**: Receives questions via HTTP requests from any external platform (Venio/Salesbear)
- **Contextual Responses**: Maintains conversation history for natural, flowing interactions
- **Source-Grounded Answers**: Provides responses based strictly on your document content, preventing hallucinations
- **Multi-platform Support**: Works with any chat platform that can send HTTP requests

**🔧 Pre-conditions and Requirements**

Required API accounts and permissions:

1. Google Drive API access
   - Google Cloud Platform account
   - Google Drive API enabled
   - OAuth2 credentials configured
   - Read access to your target Google Drive folder
2. OpenAI API account
   - Active OpenAI account with API access
   - Sufficient API credits for embeddings and chat completions
   - API key with appropriate permissions
3. n8n instance
   - n8n cloud account or self-hosted instance
   - Webhook functionality enabled
   - Ability to install community nodes (LangChain nodes)
4. Target chat platform (optional)
   - API credentials for your chosen chat platform
   - Webhook capability or API endpoints for message sending

Required permissions:

- **Google Drive**: Read access to folder contents and file downloads
- **OpenAI**: API access for the text-embedding-ada-002 and gpt-4o-mini models
- **External Platform**: API access for sending/receiving messages (if integrating with existing chat systems)

**🚀 Detailed Workflow Operation**

Phase 1: Knowledge base creation

1. **File Monitoring**: Two trigger nodes continuously monitor your Google Drive folder for new files or updates
2. **Document Discovery**: When changes are detected, the workflow searches for and identifies the modified files
3. **Content Extraction**: Downloads the actual file content from Google Drive
4. **Text Processing**: Uses LangChain's document loader to extract text from various file formats
5. **Intelligent Chunking**: Splits documents into overlapping chunks (configurable size) for optimal AI processing
6. **Vector Generation**: Creates embeddings using OpenAI's text-embedding-ada-002 model
7. **Storage**: Stores vectors in an in-memory vector store for instant retrieval

Phase 2: Chat interaction

1. **Question Reception**: The webhook receives user questions in JSON format
2. **Data Extraction**: Parses the incoming data to extract chat content and session information
3. **AI Processing**: The AI Agent analyzes the question and determines the relevant context
4. **Knowledge Retrieval**: Searches the vector store for the most relevant document sections
5. **Response Generation**: OpenAI generates responses based on the retrieved content and conversation history
6. **Authentication**: Validates the request using token-based authentication
7. **Response Delivery**: Sends the answer back to the originating platform

**📚 Usage Instructions After Setup**

Adding documents to your knowledge base:

- **Upload Files**: Simply drag and drop documents into your configured Google Drive folder
- **Supported Formats**: PDF, TXT, DOC, DOCX, and other text-based formats
- **Automatic Processing**: The workflow will automatically detect and process new files within minutes
- **Updates**: Modify existing files, and the knowledge base will automatically update

Integrating with your chat platform. Use the generated webhook URL to send questions:

```
POST https://your-n8n-domain/webhook/your-custom-path
Content-Type: application/json

{
  "body": {
    "Data": {
      "ChatMessage": {
        "Content": "What are your business hours?",
        "RoomId": "user-123-session",
        "Platform": "web",
        "User": { "CompanyId": "company-456" }
      }
    }
  }
}
```

Response format: the chatbot returns structured responses that your platform can display.

Testing your chatbot:

- **Initial Test**: Send a simple question about content you know exists in your documents
- **Context Testing**: Ask follow-up questions to test conversation memory
- **Edge Cases**: Try questions about topics not in your documents to verify appropriate responses
- **Performance**: Monitor response times and accuracy

**🎨 Customization Options**

System message customization. Modify the AI Agent's system message to match your brand and use case:

```
You are a [YOUR_BRAND] customer support specialist. You provide helpful,
accurate information based on our documentation. Always maintain a [TONE]
tone and [SPECIFIC_GUIDELINES].
```
**Response behavior customization**

- **Tone and Voice**: Adjust from professional to casual, formal to friendly
- **Response Length**: Configure for brief answers or detailed explanations
- **Fallback Messages**: Customize what the bot says when it can't find relevant information
- **Language Support**: Adapt for different languages or technical terminologies

**Technical configuration options**

Document processing:

- **Chunk Size**: Adjust from 1000 to 4000 characters based on your document complexity
- **Overlap**: Modify the overlap percentage for better context preservation
- **File Types**: Add support for additional document formats

AI model configuration:

- **Model Selection**: Switch between gpt-4o-mini (cost-effective) and gpt-4 (higher quality)
- **Temperature**: Adjust creativity vs. factual accuracy (0.0 to 1.0)
- **Max Tokens**: Control response length limits

Memory and context:

- **Conversation Window**: Adjust how many previous messages to remember
- **Session Management**: Configure session timeout and user identification
- **Context Retrieval**: Tune how many document chunks to consider per query

**Integration customization**

Authentication methods:

- **Token-based**: Default implementation with bearer tokens
- **API Key**: Simple API key validation
- **OAuth**: Full OAuth2 implementation for secure access
- **Custom Headers**: Validate specific headers or signatures

Response formatting:

- **JSON Structure**: Customize the response format for your platform
- **Markdown Support**: Enable rich text formatting in responses
- **Error Handling**: Define custom error messages and codes

**🎯 Specific Use Case Examples**

Customer support chatbot:

- Scenario: E-commerce company with product documentation, return policies, and FAQ documents
- Setup: Upload product manuals, policy documents, and common questions to Google Drive
- Customization: Professional tone, concise answers, escalation triggers for complex issues
- Integration: Website chat widget, mobile app, or customer portal

Internal HR knowledge base:

- Scenario: Company HR department with employee handbook, policies, and procedures
- Setup: Upload HR policies, benefits information, and procedural documents
- Customization: Friendly but professional tone, detailed policy explanations
- Integration: Internal Slack bot, employee portal, or HR ticketing system

Technical documentation assistant:

- Scenario: Software company with API documentation, user guides, and troubleshooting docs
- Setup: Upload API docs, user manuals, and technical specifications
- Customization: Technical tone, code examples, step-by-step instructions
- Integration: Developer portal, support ticket system, or documentation website

Educational content helper:

- Scenario: Educational institution with course materials, policies, and student resources
- Setup: Upload syllabi, course content, academic policies, and student guides
- Customization: Helpful and encouraging tone, detailed explanations
- Integration: Learning management system, student portal, or mobile app

Healthcare information assistant:

- Scenario: Medical practice with patient information, procedures, and policy documents
- Setup: Upload patient guidelines, procedure explanations, and practice policies
- Customization: Compassionate tone, clear medical explanations, disclaimer messaging
- Integration: Patient portal, appointment system, or mobile health app

**🔧 Advanced Customization Examples**

Multi-language support:

```javascript
// In the Edit Fields node, detect the language and route accordingly.
const language = $json.body.Data.ChatMessage.Language || 'en';
const systemMessage = {
  en: 'You are a helpful customer support assistant...',
  es: 'Eres un asistente de soporte al cliente útil...',
  fr: 'Vous êtes un assistant de support client utile...'
}[language]; // pick the prompt for the detected language
```

Department-specific routing:

```javascript
// Route questions to different knowledge bases based on department.
const department = $json.body.Data.ChatMessage.Department;
const vectorStoreKey = `vector_store_${department}`;
```

Advanced analytics integration:

```javascript
// Track conversation metrics.
const analytics = {
  userId: $json.body.Data.ChatMessage.User.Id,
  timestamp: new Date().toISOString(),
  question: $json.body.Data.ChatMessage.Content,
  response: $json.response,
  responseTime: $json.processingTime
};
```

**📊 Performance Optimization Tips**

Document management:

- **Optimal File Size**: Keep documents under 10MB for faster processing
- **Clear Structure**: Use headers and sections for better chunking
- **Regular Updates**: Remove outdated documents to maintain accuracy
- **Logical Organization**: Group related documents in subfolders

Response quality:

- **System Message Refinement**: Regularly update based on user feedback
- **Context Tuning**: Adjust chunk size and overlap for your specific content
- **Testing Framework**: Implement systematic testing for response accuracy
- **User Feedback Loop**: Collect and analyze user satisfaction data

Cost management:

- **Model Selection**: Use gpt-4o-mini for cost-effective responses
- **Caching Strategy**: Implement response caching for frequently asked questions
- **Usage Monitoring**: Track API usage and set up alerts
- **Batch Processing**: Process multiple documents efficiently

**🛡️ Security and Compliance**

Data protection:

- **Document Security**: Ensure sensitive documents are properly secured
- **Access Control**: Implement proper authentication and authorization
- **Data Retention**: Configure appropriate data retention policies
- **Audit Logging**: Track all interactions for compliance

Privacy considerations:

- **User Data**: Minimize collection and storage of personal information
- **Session Management**: Implement secure session handling
- **Compliance**: Ensure adherence to relevant privacy regulations
- **Encryption**: Use HTTPS for all communications

**🚀 Deployment and Scaling**

Production readiness:

- **Environment Variables**: Use environment variables for sensitive configurations
- **Error Handling**: Implement comprehensive error handling and logging
- **Monitoring**: Set up monitoring for workflow health and performance
- **Backup Strategy**: Ensure document and configuration backups

Scaling considerations:

- **Load Testing**: Test with expected user volumes
- **Rate Limiting**: Implement appropriate rate limiting
- **Database Scaling**: Consider an external vector database for large-scale deployments
- **Multi-Instance**: Configure for multiple n8n instances if needed

**📈 Success Metrics and KPIs**

Quantitative metrics:

- **Response Accuracy**: Percentage of correct answers
- **Response Time**: Average time from question to answer
- **User Satisfaction**: Rating scores and feedback
- **Usage Volume**: Questions per day/week/month
- **Cost Efficiency**: Cost per interaction

Qualitative metrics:

- **User Feedback**: Qualitative feedback on response quality
- **Use Case Coverage**: Percentage of user needs addressed
- **Knowledge Gaps**: Identification of missing information
- **Conversation Quality**: Natural flow and context understanding
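To complement the token-based authentication and access-control points above, here is a hypothetical sketch of a webhook token check in a Code node. The header handling and the `CHATBOT_WEBHOOK_TOKEN` environment variable are illustrative assumptions, not the template's actual configuration:

```javascript
// Reject webhook calls that do not carry the expected bearer token.
const authHeader = $json.headers?.authorization || '';
const token = authHeader.replace(/^Bearer\s+/i, '');

if (token !== $env.CHATBOT_WEBHOOK_TOKEN) {
  // Short-circuit with a 401-style payload for the Respond to Webhook node.
  return [{ json: { status: 401, error: 'Unauthorized' } }];
}
return [{ json: { status: 200, authorized: true } }];
```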
by Tony Paul
**How it works**

Download the sample Google Sheet, upload it to your own Google Sheets account, and point the Google Sheets nodes at your copy.

1. **Scheduled trigger**: Runs once a day at 8 AM (server time).
2. **Fetch product list**: Reads your "master" sheet (product_url + last known price) from Google Sheets.
3. **Loop with delay**: Iterates over each row (product) one at a time, inserting a short pause (20 s) between HTTP requests to avoid blocking.
4. **Scrape current price**: Loads each product_url and extracts the current price via a simple CSS selector.
5. **Compare & normalize**: Compares the newly scraped price against the last_price from your sheet, calculates the percentage change, and tags items where price_changed == true (see the sketch after this section).
6. **On price change**:
   - Send alert: Formats a Telegram message ("Price Drop" or "Price Hike") and pushes it to your configured chat.
   - Log history: Appends a new row to a separate "price_tracking" tab with the timestamp, old price, new price, and % change.
7. **Update master sheet**: After a 1 min pause, writes the updated current_price back to your "master" sheet so future runs use it as the new baseline.

**Setup steps**

Google Sheets credentials (~5 min):

1. Create a Google Sheets OAuth credential in n8n.
2. Copy your sheet's ID and ensure you have two tabs:
   - product_data (columns: product_url, price)
   - price_tracking (columns: timestamp, product_url, last_price, current_price, price_diff_pct, price_changed)
3. Paste the sheet ID into both Google Sheets nodes ("Read" and "Append/Update").

Telegram credentials (~5 min):

1. Create a Telegram bot token via BotFather.
2. Copy your chat_id (for your target group or personal chat).
3. Add those credentials to n8n and drop them into the "Telegram" node.

Workflow parameters (~5 min):

1. Verify the schedule in the Schedule Trigger node is set to 08:00 (or adjust to your preferred run time).
2. In the Loop Over Items node, confirm "Batch Size" is 1 (to process one URL at a time).
3. Adjust the "Delay to avoid Request Blocking" node if your site requires a longer pause (the default is 20 s).
4. In the "Parse Data From The HTML Page" node, double-check that the CSS selector matches how prices appear on your target site.

Once credentials are in place and your sheet tabs match the expected column names, the flow is ready to activate. Total setup time is under 15 minutes; detailed notes are embedded as sticky comments throughout the workflow to help you tweak selectors, change timeouts, or adjust sheet names without digging into code.
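Here is a minimal sketch of the "Compare & normalize" step as a Code node. The output keys match the price_tracking columns listed above; the scraped-price format ("$1,299.00") and the `scrapedPrice` field name are assumptions you should adapt to your site and selector:

```javascript
// Normalize the scraped price string and compute the change vs. baseline.
const scraped = $json.scrapedPrice; // e.g. "$1,299.00" from the HTML parser
const currentPrice = Number(String(scraped).replace(/[^0-9.]/g, ''));
const lastPrice = Number($json.price); // baseline from the master sheet

const priceDiffPct =
  lastPrice > 0 ? ((currentPrice - lastPrice) / lastPrice) * 100 : 0;

return [{
  json: {
    timestamp: new Date().toISOString(),
    product_url: $json.product_url,
    last_price: lastPrice,
    current_price: currentPrice,
    price_diff_pct: Number(priceDiffPct.toFixed(2)),
    price_changed: currentPrice !== lastPrice, // drives the Telegram branch
  },
}];
```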
by lin@davoy.tech
The YogiAI workflow automates sending daily yoga pose reminders and related information via Line Push Messages. This automation leverages data from a Google Sheets database containing yoga pose details such as names, image URLs, and links to ensure users receive personalized and engaging content every day.

**Purpose**

- Provide users with daily yoga pose suggestions tailored to their practice.
- Deliver visually appealing and informative content through Line's Flex Messages, including images and clickable links.
- Log user interactions and preferences back into Google Sheets to refine future recommendations.

**Key Features**

- **Automated Daily Reminders**: Sends a curated list of yoga poses at a scheduled time (21:30 Bangkok time).
- **Dynamic Content Generation**: Uses AI to rewrite and format messages in a user-friendly manner, complete with emojis and clear instructions.
- **Integration with Google Sheets**: Pulls data from a predefined Google Sheet and logs interactions for continuous improvement.
- **Customizable Messaging**: Ensures JSON outputs are properly formatted for Line's Flex Message API, allowing for interactive and visually rich content.

**Data Source**

The workflow relies on a Google Sheet structured as follows:

- PoseName: The name of the yoga pose.
- uri: The image URL representing the pose.
- url: A clickable link directing users to more information about the pose.

Sample data layout:

| PoseName | uri | url |
| --- | --- | --- |
| Supine Angle | https://example.com/SupineAngle-tn146.png | https://example.com/pose/SupineAngle |
| Warrior II | https://example.com/WarriorII-tn146.png | https://example.com/pose/WarriorII |

Note: Ensure that you update the Google Sheet with your own data. Refer to the sample sheet for reference.

**Scheduled Trigger**

The workflow is triggered daily at 21:30 (9:30 PM) Bangkok time (Asia/Bangkok). This ensures timely delivery of reminders to users, keeping them engaged with their yoga practice.

**Workflow Process**

1. **Data retrieval** - Get PoseName: Fetches yoga pose details from the specified range in the Google Sheet.
2. **Content generation** - WritePosesToday: Uses Azure OpenAI to craft user-friendly text, complete with emojis and clear instructions. RewritePosesToday: Formats the AI-generated text specifically for Line messaging, ensuring compatibility and visual appeal.
3. **JSON formatting** - WriteJSONflex: Generates the JSON structures required for Line's Flex Messages, enabling carousel displays of yoga pose images and links (see the sketch after this section). Fix JSON: Ensures all JSON outputs are correctly formatted before being sent via Line.
4. **Message delivery** - Line Push with Flex Bubble: Sends the final message, including both text and a Flex Message carousel, directly to users via Line Push Messages.
5. **Logging interactions** - YogaLog & YogaLog2: Logs each interaction back into Google Sheets to track which poses were sent and how often they appear, refining future recommendations.

**Setup Prerequisites**

- **Google Sheets account**: Set up a Google Sheet with the required structure and populate it with your yoga pose data.
- **Line developer account**: Create a Line channel to obtain the credentials needed for sending push messages.
- **Azure OpenAI account**: Configure access to Azure OpenAI services for generating and formatting content.

**Intended Audience**

This workflow is ideal for:

- **Yoga Instructors**: Seeking to engage students with daily pose suggestions.
- **Fitness Enthusiasts**: Looking to maintain consistency in their yoga practice.
- **Content Creators**: Interested in automating personalized and visually appealing content distribution.
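As a rough illustration of the WriteJSONflex step, here is a Code-node sketch that turns the sheet rows (PoseName, uri, url) into a Line Flex carousel. The bubble layout is a simplified example of Line's Flex Message schema, not the template's exact design; Flex carousels allow at most 12 bubbles:

```javascript
// Build one Flex bubble per pose row, then wrap them in a carousel.
const bubbles = $input.all().map(({ json }) => ({
  type: 'bubble',
  hero: {
    type: 'image',
    url: json.uri, // pose thumbnail
    size: 'full',
    aspectRatio: '1:1',
    action: { type: 'uri', uri: json.url }, // tapping opens the pose page
  },
  body: {
    type: 'box',
    layout: 'vertical',
    contents: [{ type: 'text', text: json.PoseName, weight: 'bold' }],
  },
}));

return [{
  json: {
    type: 'flex',
    altText: 'Your yoga poses for today',
    contents: { type: 'carousel', contents: bubbles.slice(0, 12) },
  },
}];
```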
by Ranjan Dailata
**Notice**

Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**

The Automated Resume Job Matching Engine is an intelligent workflow designed for career platforms, HR tech startups, recruiting firms, and AI developers who want to streamline job-resume matching using real-time data from LinkedIn and job boards.

This workflow is tailored for:

- **HR Tech Founders** - Building next-gen recruiting products
- **Recruiters & Talent Sourcers** - Seeking automated candidate-job fit evaluation
- **Job Boards & Portals** - Enriching the user experience with AI-driven job recommendations
- **Career Coaches & Resume Writers** - Offering personalized job fit analysis
- **AI Developers** - Automating large-scale matching tasks using LinkedIn and job data

**What problem is this workflow solving?**

Manually matching a resume to a job description is time-consuming, biased, and inefficient. In addition, accessing live job postings and candidate profiles requires overcoming web scraping limitations.

This workflow solves:

- Automated LinkedIn profile and job post data extraction using Bright Data MCP infrastructure
- Semantic matching between job requirements and the candidate's resume using OpenAI GPT-4o mini
- Pagination handling for high-volume job data
- End-to-end automation, from scraping to delivery via webhook, with the matched response persisted to disk

**What this workflow does**

Bright Data MCP for job data extraction:

- Uses Bright Data MCP clients to extract multiple job listings (supports pagination)
- Pulls job data from LinkedIn using the pre-defined filtering criteria

OpenAI GPT-4o mini matching engine:

- Extracts paginated job data and the textual job descriptions from the scraped listings via the Bright Data MCP scrape_as_html tool
- The AI Job Matching node compares the job description with the candidate's resume to generate match scores with insights

Data delivery:

- Sends the final match report to a webhook notification endpoint
- Persists the AI-matched job response to disk

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**

1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Name the zone mcp_unlocker in the Bright Data control panel.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the MCP Client (STDIO) credentials to connect to the Bright Data MCP Server. Make sure to copy the Bright Data API token into the Environments textbox as API_TOKEN=<your-token>.
7. Update the Set input fields with the candidate resume, keywords, and other filtering criteria.
8. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
9. Update the file name and path used to persist results to disk.

**How to customize this workflow to your needs**

- **Target different job boards**: Set the input fields to sites like Indeed, ZipRecruiter, or Monster.
- **Customize matching criteria**: Adjust the prompt inside the AI Job Matching node; include scoring metrics like skills match %, experience relevance, or cultural fit.
- **Automate scheduling**: Use a Cron node to periodically check for new jobs matching a profile, or set triggers based on webhooks or input form submissions.
- **Output customization**: Add Markdown/PDF formatting for report summaries, or extend with Google Sheets export for internal analytics.
- **Enhance data security**: Mask personal info before sending it to external endpoints.
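For reference, this is one plausible shape for the match report delivered to the webhook. The exact fields produced by the AI Job Matching node depend entirely on your prompt, so treat every key and value here as a hypothetical example:

```javascript
// Illustrative match-report payload assembled in a Code node.
const matchReport = {
  candidate: 'resume.pdf', // example input file name
  job: {
    title: 'Data Engineer', // example job listing
    url: 'https://www.linkedin.com/jobs/view/example',
  },
  matchScore: 0.82,        // assumed 0-1 overall fit score
  skillsMatchPct: 75,      // assumed skills-overlap metric
  experienceRelevance: 'high',
  insights: [
    'Strong overlap on Python and SQL',
    'Missing required Kubernetes experience',
  ],
};
return [{ json: matchReport }];
```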
by Tharwat Mohamed
🚀 AI Resume Screener (n8n Workflow Template)

An AI-powered resume screening system that automatically evaluates applicants from a simple web form and gives you clear, job-specific scoring, with no manual filtering needed.

**⚡ What the workflow does**

- 📄 Accepts CV uploads via a web form (PDF)
- 🧠 Extracts key info using AI (education, skills, job history, city, birthdate, phone)
- 🎯 Dynamically matches the candidate to job role criteria stored in Google Sheets
- 📝 Generates an HR-style evaluation and a numeric score (1–10)
- 📥 Saves the result in a Google Sheet and uploads the original CV to Google Drive

**💡 Why you'll love it**

| Feature | Benefit |
| --- | --- |
| AI scoring | Instantly ranks candidate fit without reading every CV |
| Google Sheet-driven | Easily update job profiles with no code changes |
| Fast setup | Connect your accounts and you're live in ~15 mins |
| Scalable | Works for any department, team, or organization |
| Developer-friendly | Extend with Slack alerts, translations, or automations |

**🧰 Requirements**

- 🔑 OpenAI or Google Gemini API key
- 📄 Google Sheet with 2 columns: Role, Profile Wanted
- ☁️ Google Drive account
- 🌐 n8n account (self-hosted or cloud)

**🛠 Setup in 5 Steps**

1. Import the workflow into n8n
2. Connect Google Sheets, Drive, and OpenAI or Gemini
3. Add your job roles and descriptions in Google Sheets
4. Publish the form and test with a sample CV
5. Watch candidate profiles and scores populate automatically

**🤝 Want help setting it up?**

Includes free setup guidance by the creator, available by email or WhatsApp after purchase. I'm happy to assist you in customizing or deploying this workflow for your team.

- 📧 Email: tharwat.elsayed2000@gmail.com
- 💬 WhatsApp: +20106 180 3236
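The role matching hinges on looking up the submitted role in the two-column sheet (Role, Profile Wanted) and folding that profile into the scoring prompt. A hypothetical Code-node sketch, where the node name `Read Job Profiles` and the fields `role` and `cvText` are illustrative:

```javascript
// Find the profile for the applied role and build the 1-10 scoring prompt.
const appliedRole = $json.role; // role selected on the web form
const roleRows = $('Read Job Profiles').all(); // Google Sheets lookup node
const profile = roleRows.find((r) => r.json.Role === appliedRole)
  ?.json['Profile Wanted'];

if (!profile) throw new Error(`No profile found for role: ${appliedRole}`);

return [{
  json: {
    prompt:
      'You are an HR screener. Score this candidate from 1 to 10 against ' +
      'the profile below and briefly justify the score.\n\n' +
      `Profile wanted:\n${profile}\n\nCandidate CV:\n${$json.cvText}`,
  },
}];
```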
by Keith Rumjahn
**Who's this for?**

- Anyone who wants to improve the SEO of their website
- Umami users who want insights on how to improve their site
- SEO managers who need to generate weekly reports

**Case study**

Watch the YouTube tutorial here. Get my SEO A.I. agent system here. You can read more about how this works here.

**How it works**

1. This workflow calls the Umami API to get data (see the sketch after this section).
2. It then sends the data to A.I. for analysis.
3. It saves the data and the analysis to Baserow.

**How to use this**

1. Input your Umami credentials
2. Input your website property ID
3. Input your Openrouter.ai credentials
4. Input your Baserow credentials

You will need to create a Baserow database with the columns: Date, Summary, Top Pages, Blog (the name of your blog).

**Future development**

Use this as a template. There's a lot more Umami data you can pull from the API. Change the A.I. prompt to produce even more detailed analysis.

Created by Rumjahn
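A plain JavaScript sketch of the kind of stats call the workflow's HTTP node makes. The endpoint shape follows Umami's self-hosted API; Umami Cloud authenticates with an `x-umami-api-key` header instead of a bearer token, and the host, website ID, and token below are placeholders:

```javascript
// Fetch a week of website stats from a self-hosted Umami instance.
const endAt = Date.now();
const startAt = endAt - 7 * 24 * 60 * 60 * 1000; // last 7 days, in ms

const res = await fetch(
  `https://analytics.example.com/api/websites/YOUR_WEBSITE_ID/stats` +
    `?startAt=${startAt}&endAt=${endAt}`,
  { headers: { Authorization: 'Bearer YOUR_UMAMI_TOKEN' } }
);

const stats = await res.json(); // pageviews, visitors, bounces, totaltime
console.log(stats);
```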
by Jorge Martínez
Automate tweet engagement on X (formerly Twitter)

**Description**

Automate professional engagement on X (formerly Twitter) by searching for, filtering, liking, and replying to tweets that match your key topics. This workflow enables you to engage consistently and efficiently with relevant conversations, using your defined professional role and the power of GPT for filtering and replies. Save time and maintain high-quality interactions while staying focused on your business or personal brand interests.

**How it Works**

1. **Rotating topic selection**: The workflow selects one search term from your list on each run, using a rotating index based on the date (see the sketch after this section).
2. **Search tweets & extract essentials**: Searches X for tweets matching the chosen topic, then extracts only the tweet id and text for further processing.
3. **GPT-based filtering with role context**: Filters tweets based on your role and strict criteria, removing non-English tweets, memes, spam, Grok-generated content, political posts, internships, and more.
4. **Engagement loop**: For every filtered tweet, the workflow likes the post, generates a professional, concise reply with GPT (matching language and context), and posts the reply. Wait nodes ensure compliance with X's API rate limits (adjustable for paid API tiers).

**Requirements**

- X (Twitter) API credentials (for searching, liking, and replying to tweets)
- OpenAI API key (for the GPT-based steps)

**Setup Steps**

1. Obtain your X (Twitter) API credentials.
2. Obtain your OpenAI API key.
3. Configure the schedule in the trigger node to your desired frequency (e.g., every 3 days or daily).
4. Set your list of topics and professional role in the variables node.

**How to Customize the Workflow (Optional)**

- **Adjust prompts** in the GPT nodes to fine-tune the filtering and reply style.
- **Upgrade your X API plan** to increase request limits and search more tweets per run.
- **Change the tweet processing logic**: For high-volume engagement (e.g., analyzing 100+ tweets per run), consider switching to a per-tweet loop for advanced filtering and response handling.

This workflow enables scalable, professional, and targeted engagement on X (formerly Twitter), fully customizable to your audience and objectives.
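The rotating index in step 1 can be derived from the date alone, so each run deterministically picks the next term without storing state. A minimal sketch, with an example topic list standing in for your own:

```javascript
// Pick today's search term: days since the Unix epoch, modulo list length.
const topics = ['n8n automation', 'workflow orchestration', 'ai agents'];
const dayNumber = Math.floor(Date.now() / 86_400_000); // ms per day
const topic = topics[dayNumber % topics.length];
return [{ json: { topic } }];
```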