by Dvir Sharon
🛒 Monitor Google Shopping Prices with Bright Data & Email Alerts

This template requires a self-hosted n8n instance to run.

A comprehensive n8n automation that monitors product prices daily using Bright Data's Google Shopping dataset and sends smart email alerts when price conditions are met.

📋 Overview

This workflow provides an automated price monitoring solution that tracks product prices from Google Shopping daily and sends intelligent email notifications. Perfect for e-commerce monitoring, competitor analysis, deal hunting, and inventory management.

✨ Key Features

- 🕘 Scheduled Monitoring: Daily automated price checks at 9 AM
- 🛍️ Google Shopping Integration: Uses Bright Data's dataset for accurate pricing
- 📊 Smart Price Comparison: Compares current prices with historical data
- 📧 Intelligent Alerts: Sends emails only when prices meet criteria
- 📈 Data Storage: Updates Google Sheets with the latest pricing data
- 🔄 Batch Processing: Handles multiple products with rate limiting
- ⚡ Fast & Reliable: Built-in error handling
- 🎯 Customizable Filters: Advanced price comparison logic

🎯 What This Workflow Does

1. Schedule Trigger: Runs daily at 9 AM
2. Data Retrieval: Fetches the product list from Google Sheets
3. Price Extraction: Scrapes current prices using Bright Data
4. Data Update: Updates Google Sheets with new prices
5. Price Comparison: Compares new vs. old prices
6. Smart Filtering: Filters products that meet the alert criteria
7. Email Notifications: Sends alerts for qualifying changes
8. Rate Limiting: Adds a delay between emails

Output Data Points

| Field | Description | Example |
| :--- | :--- | :--- |
| Product URL | Original Google Shopping URL | https://shopping.google.com/product/... |
| Product Name | Product title | iPhone 15 Pro Max 256GB |
| Ratings | Product rating score | 4.5 |
| Reviews | Number of reviews | 1,247 |
| Old Price | Previous price | $1,199.00 |
| New Price | Current scraped price | $1,199.00 |
| Timestamp | When the check occurred | 2025-05-30T09:00:00Z |

🚀 Setup Instructions

Prerequisites

- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with Google Shopping dataset access
- Gmail account for notifications

Steps

1. Import the workflow JSON into n8n
2. Configure Bright Data credentials and dataset access
3. Set up Google Sheets with the required columns
4. Configure Gmail OAuth2 credentials
5. Update sheet IDs and schedule settings
6. Test with sample products and activate

📖 Usage Guide

Google Sheet Structure

Your Google Sheet should have the following columns to ensure the workflow functions correctly:

- **Product URL** (Text): The direct URL to the Google Shopping product page. This is the primary identifier for the product.
- **Product Name** (Text): The name of the product. This will be automatically populated or updated by the workflow.
- **Old Price** (Number/Currency): The price of the product from the previous check. This column is crucial for price comparison.
- **New Price** (Number/Currency): The most recently scraped price of the product.
- **Ratings** (Number): The star rating of the product.
- **Reviews** (Number): The total number of reviews for the product.
- **Timestamp** (Datetime): The date and time when the price check was performed.

Adding Products

Add Google Shopping URLs to your Google Sheet. The workflow will fetch product details and track prices. Historical price data builds over time.

Understanding Price Alerts

The default setting for this workflow is to send an email alert when the new price equals the old price.
This might seem counterintuitive, but it's useful for specific scenarios, such as:

- **Monitoring stable pricing:** If you are tracking a product and want to be notified when its price has remained consistent over time, indicating a potential stable buying opportunity or a benchmark.
- **Verifying data consistency:** To confirm that the scraping process is working correctly and consistently retrieving the same price when no changes are expected.

You can easily customize the alert logic to trigger on different conditions, as described below.

Customizing Alert Logic

- **Price drops:** `new_price < old_price`
- **Significant drops:** `new_price < (old_price * 0.9)` (e.g., price dropped by more than 10%)
- **Price increases:** `new_price > old_price`
- **Any change:** `new_price != old_price`

A sketch of this logic as an n8n Code node appears at the end of this section.

Reading the Results

- Real-time pricing data
- Historical tracking
- Product metadata
- Timestamps for each check

🔧 Customization Options

- **Add More Data:** Descriptions, availability, seller info, shipping, images
- **Modify Email Templates:** Customize the subject and body
- **Multiple Recipients:** Duplicate the email node and change recipients
- **Webhook Integration:** Add real-time triggers or Slack alerts

🚨 Troubleshooting

- **Bright Data connection failed:** Check API credentials and dataset access
- **No price data extracted:** Verify URLs and test with different products
- **Google Sheets permission denied:** Re-authenticate and check sharing
- **Emails not sending:** Re-authenticate Gmail OAuth and verify recipients
- **Filter not working:** Check price formats and logic
- **Workflow failed:** Check logs, retry logic, and network status

📊 Use Cases & Examples

- **E-commerce Monitoring:** Track competitor pricing and trends
- **Deal Hunting:** Get alerts for price drops on wishlist items
- **Inventory Management:** Monitor supplier pricing for procurement
- **Market Research:** Analyze pricing trends and generate reports

⚙️ Advanced Configuration

- **Batch Processing:** Increase the batch size, add delays, use parallel processing
- **Price History:** Store historical data, calculate averages, forecast trends
- **Tool Integration:** CRM, Slack, databases, BI tools (Tableau, Power BI)

📈 Performance & Limits

- **Single URL:** 2–5 seconds
- **Concurrent Requests:** 3–5 (depends on Bright Data plan)
- **Data Accuracy:** 95%+
- **Success Rate:** 90%+
- **Daily Capacity:** 100–500 products
- **Memory:** ~100 MB per execution
- **API Calls:** 1 Bright Data + 2 Google Sheets per product

🤝 Support & Community

- **n8n Forum:** <https://community.n8n.io>
- **Documentation:** <https://docs.n8n.io>
- **Bright Data Support:** Via your Bright Data dashboard
- **GitHub Issues:** Report bugs and request features

🎯 Ready to Use!

Your workflow provides a solid foundation for automated price monitoring. Customize it to fit your specific needs and use cases for maximum effectiveness in tracking Google Shopping prices with intelligent email notifications.

Please note that this template uses Community Nodes. Ensure you understand the risks before using community nodes.
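As referenced above under Customizing Alert Logic, here is a minimal sketch of the comparison written as an n8n Code node (run once for all items). The field names `old_price` and `new_price` are assumptions matching the sheet columns, and prices are normalized from strings like "$1,199.00" before comparing:

```javascript
// Minimal sketch of the price filter as an n8n Code node (field names assumed).
// Normalize "$1,199.00"-style strings to plain numbers before comparing.
const toNumber = (price) => parseFloat(String(price).replace(/[^0-9.]/g, ''));

return $input.all().filter((item) => {
  const oldPrice = toNumber(item.json.old_price);
  const newPrice = toNumber(item.json.new_price);
  // The template's default is equality; swap in any condition from the list above,
  // e.g. newPrice < oldPrice * 0.9 for a "significant drop" alert.
  return newPrice < oldPrice * 0.9;
});
```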
by Mark Shcherbakov
Video Guide

I prepared a comprehensive guide detailing how to create a Smart Agent that automates meeting task management by analyzing transcripts, generating tasks in Airtable, and scheduling follow-ups when necessary.

Youtube Link

Who is this for?

This workflow is ideal for project managers, team leaders, and business owners looking to enhance productivity during meetings. It is particularly helpful for those who need to convert discussions into actionable items swiftly and effectively.

What problem does this workflow solve?

Managing action items from meetings can often lead to missed tasks and poor follow-up. This automation alleviates that issue by automatically generating tasks from meeting transcripts, keeping everyone informed about their responsibilities and streamlining communication.

What this workflow does

The workflow leverages n8n to create a Smart Agent that listens for completed meeting transcripts, processes them using AI, and generates tasks in Airtable. Key functionalities include:

- Capturing completed meeting events through webhooks.
- Extracting relevant meeting details such as transcripts and participants using API calls.
- Generating structured tasks from meeting discussions and sending notifications to clients.

1. Webhook Integration: Listens for meeting completion events to trigger subsequent actions.
2. API Requests for Data: Pulls necessary details like transcripts and participant information from Fireflies.
3. Task and Notification Generation: Automatically creates tasks in Airtable and notifies clients of their responsibilities.

Setup

N8N Workflow

1. Configure the Webhook: Set up a webhook to capture meeting completion events and integrate it with Fireflies.
2. Retrieve Meeting Content: Use GraphQL API requests to extract meeting details and transcripts, ensuring appropriate authentication through Bearer tokens (a sketch of this request appears at the end of this section).
3. AI Processing Setup: Define system messages for AI tasks and configure connections to the AI chat model (e.g., OpenAI's GPT) to process transcripts.
4. Task Creation Logic: Create structured tasks based on AI output, ensuring necessary details are captured and records are created in Airtable.
5. Client Notifications: Use an email node to notify clients about their tasks, ensuring communications are client-specific.
6. Scheduling Follow-Up Calls: Set up Google Calendar events if follow-up meetings are required, populating details from the original meeting context.
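As an illustration of the "Retrieve Meeting Content" step, the HTTP Request node sends a GraphQL query to Fireflies with a Bearer token. A rough sketch in plain Node.js; the query fields shown are assumptions, so check them against Fireflies' GraphQL schema:

```javascript
// Hypothetical Fireflies GraphQL request; verify endpoint and field names in their docs.
const FIREFLIES_TOKEN = 'YOUR_FIREFLIES_API_KEY'; // placeholder credential

const res = await fetch('https://api.fireflies.ai/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${FIREFLIES_TOKEN}`,
  },
  body: JSON.stringify({
    query: `query Transcript($id: String!) {
      transcript(id: $id) {
        title
        participants
        sentences { speaker_name text }
      }
    }`,
    variables: { id: 'MEETING_ID_FROM_WEBHOOK' }, // supplied by the completion event
  }),
});
const { data } = await res.json();
console.log(data.transcript.title);
```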
by Davide
This workflow simulates an AI-powered phone agent with two main functions:

- 📅 Appointment Booking – It can schedule appointments directly into Google Calendar.
- 🧠 RAG-based Information Retrieval – It provides answers using a Retrieval-Augmented Generation (RAG) system. For example, it can respond to questions such as store opening hours, return policies, or product details.

The guide also explains how to purchase a dedicated phone number (with a +1 prefix) and link it to the AI agent. This setup is cost-effective, as it uses a FREE $10 credit to operate without additional charges in the beginning.

✨ Advantages

- 🕐 **24/7 Availability** – The AI agent can answer calls and assist customers at any time.
- 🤖 **Automation** – It reduces the workload on human staff by handling repetitive tasks like appointment scheduling and FAQ responses.
- 🔌 **Easy Integration** – Built with n8n, it's flexible and customizable for various platforms and tools.
- 💸 **Low-cost Setup** – Using the free credit, businesses can get started without an upfront investment.

📦 Use Cases

- 🛍 **E-commerce** – Answer common product questions or order inquiries.
- 🏬 **Retail Stores** – Provide store hours, address info, and return policies.
- 🍽 **Restaurants** – Take reservations or share menu information.
- 💼 **Service Providers** – Book appointments or consultations.
- 📞 **Any Local Business** – Offer phone support without needing a live operator.

How It Works

This workflow simulates an AI-powered phone agent with two primary functions:

Appointment Booking

- The workflow captures call events (e.g., call_ended or call_analyzed) and extracts key details (transcript, caller info, duration, etc.).
- Using OpenAI, it summarizes the conversation and parses structured data (e.g., names, contact info, dates).
- For scheduling, it converts user-provided dates into Google Calendar-compatible formats and creates events automatically.

RAG-Based Information Retrieval

- When a query is received (e.g., store hours, product details), the workflow retrieves relevant information from a Qdrant vector store.
- An AI agent processes the query using the retrieved data and responds via a webhook, ensuring accurate, context-aware answers.

Set Up Steps

1. Prepare the Qdrant Vector Store
   - Create/refresh a Qdrant collection via HTTP requests (as sketched at the end of this section).
   - Upload and vectorize documents (e.g., from Google Drive) using OpenAI embeddings.
2. Configure the RetellAI Agent
   - Sign up for RetellAI, create an agent, and set the webhook URLs (n8n_call for call events, n8n_rag_function for RAG queries).
   - Purchase a Twilio phone number and link it to the agent.
3. n8n Workflow Setup
   - Connect OpenAI, Qdrant, Google Calendar, and Telegram nodes with credentials.
   - Customize prompts for summarization, date parsing, and RAG responses.
   - Test the workflow to ensure data flows from call events → processing → actions (e.g., calendar bookings, Telegram alerts).
4. Deploy
   - Trigger the workflow via RetellAI webhooks during calls.
   - Monitor outputs (e.g., call summaries in Telegram, calendar events).

Note: Replace placeholders (e.g., QDRANTURL, COLLECTION, CHAT_ID) with actual values.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
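As a concrete example of the Qdrant preparation step, creating (or recreating) a collection is a single REST call. A minimal sketch in plain Node.js, assuming OpenAI's 1536-dimension embeddings; the URL and collection name are placeholders matching the workflow's QDRANTURL and COLLECTION:

```javascript
// Create a Qdrant collection sized for OpenAI embeddings (1536 dims, cosine distance).
// QDRANT_URL and COLLECTION are placeholders; replace with your actual values.
const QDRANT_URL = 'http://localhost:6333';
const COLLECTION = 'store_knowledge';

const res = await fetch(`${QDRANT_URL}/collections/${COLLECTION}`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    vectors: { size: 1536, distance: 'Cosine' },
  }),
});
console.log(await res.json()); // { result: true, status: "ok", ... } on success
```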
by Ranjan Dailata
Who is this for?

This workflow is aimed at:

- Data Analysts and Researchers who need to gather and summarize information from Bing search results efficiently.
- Developers and Engineers looking to integrate Bing search data into applications or services.
- Digital Marketers and SEO Specialists interested in monitoring search engine results for specific keywords or topics.

What problem is this workflow solving?

Manually extracting and summarizing information from search engine results can be time-consuming and error-prone.

What this workflow does

This workflow automates the process of querying Bing's Copilot Search, extracting structured data from the results, summarizing the information, and sending a notification via webhook. It leverages Microsoft Copilot to retrieve search results and integrates AI-powered tools for data extraction and summarization:

- Performing Bing searches using Bright Data's Bing Search API (see the sketch at the end of this section).
- Extracting structured data from the search results.
- Summarizing the extracted information using AI tools.
- Sending the summarized data to a specified endpoint via webhook.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Perform a Bing Copilot Request node with the prompt you wish to search for.
6. Update the Structured Data Webhook Notifier node with the webhook endpoint of your choice.
7. Update the Summary Webhook Notifier node with the webhook endpoint of your choice.

How to customize this workflow to your needs

- Modify Search Queries: Adjust the search terms to target different topics or keywords.
- Change Data Extraction Logic: Customize the extraction process to capture specific data points from the search results.
- Alter Summarization Techniques: Integrate different AI models or adjust parameters to change how summaries are generated.
- Update Webhook Endpoints: Direct the summarized data to different endpoints as required.
- Schedule Workflow Runs: Set up automated triggers to run the workflow at desired intervals.
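For reference, here is a sketch of the Web Unlocker request the workflow performs under the hood, in plain Node.js. The endpoint and body shape follow Bright Data's request API as I understand it; treat them as assumptions and confirm in your Bright Data dashboard:

```javascript
// Hypothetical Bright Data Web Unlocker call; verify endpoint and fields in your dashboard.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer YOUR_WEB_UNLOCKER_TOKEN', // same token as the Header Auth credential
  },
  body: JSON.stringify({
    zone: 'your_web_unlocker_zone',  // the zone created in setup step 2
    url: 'https://www.bing.com/search?q=example+query',
    format: 'raw',                   // return the page body as-is
  }),
});
const html = await res.text(); // raw result, ready for the extraction step
```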
by Franz
🕸️ Dynamic Website Change Monitor with Smart Email Alerts

Never miss important website updates again! This workflow automatically tracks changes on dynamic websites (think React apps, JavaScript-heavy sites) and sends you instant email notifications when something changes. Perfect for keeping tabs on competitors, monitoring product updates, or staying on top of important announcements.

✨ What makes this special?

- 🚀 Handles Dynamic Websites: Uses the Firecrawl API to scrape JavaScript-rendered content that basic scrapers can't touch
- 📧 Smart Email Alerts: Only sends notifications when content actually changes (no spam!)
- 📊 Historical Tracking: Keeps a complete log of all changes in Google Sheets
- 🛡️ Bulletproof: Continues working even if one part fails
- ⚡ Ready to Deploy: Webhook-triggered, perfect for cron jobs or external schedulers

🎯 Perfect for monitoring:

- Competitor pricing pages
- Job board postings
- Product availability updates
- News sites for breaking stories
- API documentation changes
- Terms of service updates

🛠️ What you'll need to get started:

API Accounts & Keys:

1. Firecrawl Account 🔥
   - Sign up at firecrawl.dev
   - Grab your API key from the dashboard
   - Create a "Bearer Auth" credential in n8n
2. Google Cloud Setup ☁️
   - Enable the Google Sheets API
   - Enable the Gmail API
   - Set up OAuth2 credentials
   - Add both as credentials in n8n
3. Google Sheets Document 📋
   - Create a new spreadsheet
   - Add two tabs: "Log" and "comparison"
   - Follow the structure outlined in the workflow notes

🚀 How it works:

1. Webhook receives trigger → Starts the monitoring process
2. Firecrawl scrapes website → Gets fresh content, even JavaScript-rendered (see the API sketch at the end of this section)
3. Smart comparison → Checks against previously stored content
4. Change detected? → If yes, send email + log everything
5. Update storage → Prepares for the next monitoring cycle

⚙️ Setup Steps:

1. Import this workflow into your n8n instance
2. Configure credentials for Firecrawl, Google Sheets, and Gmail
3. Update the target URL in the Firecrawl node
4. Set your email address in the Gmail node
5. Create your Google Sheets with the required structure
6. Test it manually first, then activate!

🎨 Customize it your way:

- **Target any website** by updating the URL
- **Change email templates** to match your style
- **Adjust monitoring frequency** with external cron jobs
- **Switch between markdown/HTML** extraction formats
- **Fine-tune change detection** sensitivity

🔧 Troubleshooting:

- **Firecrawl errors?** Check your API key and rate limits
- **Google Sheets issues?** Verify OAuth permissions and sheet structure
- **Email not sending?** Check Gmail API quotas and spam folders
- **Webhook problems?** Make sure the workflow is activated

Ready to never miss another website change? Let's get this automation running! 🎉
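Under the hood, the scraping step amounts to one Firecrawl API call. A minimal sketch in plain Node.js (endpoint and body per Firecrawl's v1 API at the time of writing; confirm against their current reference):

```javascript
// Hypothetical Firecrawl scrape request: returns JavaScript-rendered content as markdown.
const res = await fetch('https://api.firecrawl.dev/v1/scrape', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer YOUR_FIRECRAWL_API_KEY', // the Bearer Auth credential from setup
  },
  body: JSON.stringify({
    url: 'https://example.com/pricing', // the page you want to monitor
    formats: ['markdown'],              // or ['html'], per "Customize it your way"
  }),
});
const { data } = await res.json();
console.log(data.markdown); // fresh content to compare against the stored version
```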
by David Levesque
Dropbox Folder Monitoring Workflow

Since we don't have (yet?) a Dropbox node for "Watching new files" or "Watching folder", I created this central workflow to do it.

How it works

- Triggered by a Dropbox webhook.
- I respond immediately to Dropbox to avoid the webhook being disabled.
- Then I add/duplicate one branch per monitored folder, according to my needs.

In my case, I need to monitor several folders, like "vocal notes to process", "transcriptions to LinkedIn posts" or "quotes to add". This workflow shows 2 types of folder monitoring:

- Way #1: Each file in the monitored folder calls a sub-workflow.
- Way #2: We get all files from the monitored folder and compare them to a database. If a file is not listed in the DB, I assume it's a new one.

Way #1 - We get all files from the monitored folder

1. I set a variable folder_to_watch to indicate which folder to monitor. This step is here just to be consistent and allow setting the folder path only once in this branch.
2. I list the folder's files.
3. We keep only files (excluding folders).
4. Then I call the specialized sub-workflow.

Way #2 - We want only new files from the monitored folder

1. I set a variable folder_to_watch to indicate which folder to monitor.
2. I list the folder's files and keep only files.
3. Meanwhile, I query my DB to get the known files for this folder (I send the query `(folder_to_watch,eq,{{ $json.folder_to_watch }})` to NocoDB).
4. Now I can exclude old files and keep only new ones by merging (I compare on the Dropbox file id, as the file could be renamed by the user). A Code-node sketch of this comparison follows this section.
5. I add the new file to the DB to be sure to recognize it next time, saving the JSON Dropbox data:

```json
{
  "id": "{{ $json.id }}",
  "name": "{{ $json.name }}",
  "lastModifiedClient": "{{ $json.lastModifiedClient }}",
  "lastModifiedServer": "{{ $json.lastModifiedServer }}",
  "rev": "{{ $json.rev }}",
  "contentSize": {{ $json.contentSize }},
  "type": "{{ $json.type }}",
  "contentHash": "{{ $json.contentHash }}",
  "pathLower": "{{ $json.pathLower }}",
  "pathDisplay": "{{ $json.pathDisplay }}",
  "isDownloadable": {{ $json.isDownloadable }}
}
```

And now I can call my sub-workflow :)

My DB columns:

- folder_to_watch
- data (json/text)
- timestamp
- file_id (Dropbox file ID, to ease future searches)

My vision:

- I have only one workflow in my n8n that monitors Dropbox folders/files.
- This workflow calls the required sub-workflow specialized for the tasks required.
- I will have as many branches as I have folders to monitor (if I have 5 different folders to watch, I will get 5 branches and 5 sub-workflows).
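Here is the Way #2 comparison from step 4, written out as a single n8n Code node. The node names inside the $() references are placeholders for your own Dropbox-list and NocoDB-query nodes:

```javascript
// Sketch of the new-file filter; node names below are placeholders for your own nodes.
const dropboxFiles = $('List Folder Files').all();
const knownRows = $('Get Known Files').all();

// Compare on the Dropbox file id rather than the name, so renamed files are not re-processed.
const knownIds = new Set(knownRows.map((row) => row.json.file_id));
return dropboxFiles.filter((file) => !knownIds.has(file.json.id));
```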
by WeWeb
This n8n template helps you build a full AI-powered LinkedIn content generator with just a few clicks. Paired with the free WeWeb UI template, it becomes a ready-to-use web app where users can:

- Add their own OpenAI API key
- Customize the prompt and define 6 content topics
- Edit the AI-generated topics
- Choose when to generate LinkedIn posts, complete with hashtags and an optional image

Who This Is For

Perfect for marketers, indie hackers, and solopreneurs who want to build their personal brand on LinkedIn while staying in control of what gets posted.

🧠 What Makes This Different

Unlike most AI agents, you stay fully in control:

- You define the tone and focus via the prompt.
- You choose which topics to keep or modify.
- You decide when to generate a post.
- You can build on top of this and create your own SaaS product.

It's also modular and extendable: hook it up to your backend, add user login, or feed AI improvements based on user input.

⚙️ How It Works

1. Triggering Events: The app includes 3 pre-configured triggers, ready to be hooked into your WeWeb frontend. Just update the webhook URLs after duplicating the n8n workflow (see the sketch after this section).
2. Topic Generation: A call is made to OpenAI (GPT-4) to generate topic ideas based on your prompt.
3. Post Creation: Once topics are approved or edited, GPT-4 writes full posts with suggested hashtags.
4. Image Generation (Optional): If enabled, a DALL·E call generates a relevant image.
5. Everything Stays Local: All data and images are handled locally; no cloud storage setup is needed.

🧪 Requirements & Setup

No fancy infrastructure required. Here's what helps you get started:

- **Free WeWeb account** (recommended) to use the frontend UI template
- **OpenAI account** with API access (for GPT-4 and DALL·E)
- **n8n account** (self-hosted or cloud) to run the backend workflow

The template is completely free to use. Since each user adds their own OpenAI API key, you don't need to worry about usage costs or rate limits on your end.

🔧 Want to Go Further?

This setup is beginner-friendly, but developers can:

- Add user accounts
- Save post history
- Feed user feedback back into the prompt logic
- Launch their own branded version as a SaaS
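From the WeWeb side, each trigger is just a POST to its n8n webhook. A sketch of what the topic-generation call might look like; the webhook path and payload fields are assumptions, so align them with your duplicated workflow:

```javascript
// Hypothetical frontend call to the topic-generation webhook (path and fields are examples).
const userProvidedKey = 'sk-...'; // each user supplies their own OpenAI key

const res = await fetch('https://your-n8n-instance.com/webhook/generate-topics', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    openAiApiKey: userProvidedKey,
    prompt: 'Post ideas for a B2B SaaS founder building in public',
  }),
});
const { topics } = await res.json(); // e.g. an array of 6 editable topic strings
```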
by Mark Shcherbakov
Video Guide

I prepared a comprehensive guide detailing how to automate the parsing of invoices using n8n and LlamaParse, seamlessly capturing and storing vital billing information.

Youtube Link

Who is this for?

This workflow is ideal for finance teams, accountants, and business operations managers who need to streamline invoice processing. It is particularly helpful for organizations seeking to reduce manual entry errors and improve efficiency in managing billing information.

What problem does this workflow solve?

Manually processing invoices can be time-consuming and error-prone. This automation eliminates the need for manual data entry by capturing invoice details directly from uploaded documents and storing structured data efficiently. This enhances productivity and accuracy across financial operations.

What this workflow does

The workflow leverages n8n and LlamaParse to automatically detect new invoices in a designated Google Drive folder, parse essential billing details, and store the extracted data in a structured format. The key functionalities include:

- Real-time detection of new invoices via Google Drive triggers.
- Automated HTTP requests to initiate parsing through LlamaCloud.
- Structured storage of invoice details and line items in a database for future reference.

1. Google Drive Integration: Monitors a specific folder in Google Drive for new invoice uploads.
2. Parsing with LlamaParse: Automatically sends invoices for parsing and processes results through webhooks.
3. Data Storage in Airtable: Creates records for invoices and their associated line items, allowing for detailed tracking.

Setup

N8N Workflow

1. Google Drive Trigger: Set up a trigger to detect new files in a specified folder dedicated to invoices.
2. File Upload to LlamaParse: Create an HTTP request that sends the invoice file to LlamaParse for parsing, including relevant header settings and the webhook URL (a sketch of this request follows at the end of this section).
3. Webhook Processing: Establish a webhook node to handle parsed results from LlamaParse, extracting needed invoice details effectively.
4. Invoice Record Creation: Create initial records for invoices in your database using the parsed details received from the webhook.
5. Line Item Processing: Transform string data into structured line item arrays and create individual records for each item linked to the main invoice.
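The "File Upload to LlamaParse" step boils down to a multipart POST with a Bearer key. A rough sketch in plain Node.js; the endpoint and the webhook_url field are assumptions based on LlamaCloud's parsing API, so verify them in the current docs:

```javascript
// Hypothetical LlamaParse upload; endpoint and field names are assumptions, check LlamaCloud docs.
import { readFile } from 'node:fs/promises';

const form = new FormData();
form.append('file', new Blob([await readFile('invoice.pdf')]), 'invoice.pdf');
form.append('webhook_url', 'https://your-n8n-instance.com/webhook/llamaparse-result');

const res = await fetch('https://api.cloud.llamaindex.ai/api/parsing/upload', {
  method: 'POST',
  headers: { Authorization: 'Bearer YOUR_LLAMACLOUD_API_KEY' },
  body: form,
});
console.log(await res.json()); // expected to include a job id for the parse
```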
by David Ashby
🛠️ Demio Tool MCP Server

Complete MCP server exposing all Demio Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Demio Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (illustrated at the end of this listing)
• Native Integration: Uses the official n8n Demio tool with full error handling

📋 Available Operations (4 total)

Every possible Demio Tool operation is included:

📅 Event (3 operations)
• Get an event
• Get many events
• Register an event

🔧 Report (1 operation)
• Get a report

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Demio API responses with full data structure

Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every Demio Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
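For reference, this is roughly what the pre-filled parameters look like inside the tool nodes. n8n's $fromAI() expression lets the connected agent supply each value at call time; the key names and descriptions here are illustrative, not taken from this workflow:

```javascript
// Illustrative $fromAI() placeholders as used in tool node parameter fields.
// Signature: $fromAI(key, description, type) - the agent fills in the value when it calls the tool.
{{ $fromAI('event_id', 'The ID of the Demio event to fetch', 'string') }}
{{ $fromAI('limit', 'Maximum number of events to return', 'number') }}
```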
by Ranjan Dailata
Who this is for

The LinkedIn Profile Extract and JSON Resume Builder is a powerful workflow that scrapes professional profile data from LinkedIn using Bright Data's infrastructure, then transforms that data into a clean, structured JSON resume using Google Gemini. The workflow is ideal for automating resume parsing, candidate profiling, or integrating into recruiting platforms.

This workflow is tailored for:

- HR professionals & recruiters automating resume screening
- Talent acquisition platforms enriching candidate profiles
- Developers & AI builders creating resume-parsing AI pipelines
- Data scientists working on labor market analytics
- Growth hackers profiling prospects via public data

What problem is this workflow solving?

Parsing resumes or LinkedIn profiles into machine-readable formats is often a manual, error-prone process. Most scraping tools either fail due to anti-bot protections or return unstructured HTML that's hard to work with. This workflow solves that by:

- Using Bright Data's Web Unlocker for reliable, CAPTCHA-free LinkedIn scraping
- Extracting clean text and structured profile data via the Google Gemini LLM
- Automatically generating a standards-compliant JSON resume and skills list
- Sending the resume to webhooks or storing it for downstream usage

What this workflow does

1. Accepts a LinkedIn profile URL and required metadata (Bright Data zone, webhook)
2. Scrapes the LinkedIn profile using Bright Data Web Unlocker
3. Extracts clean content and skills using the Google Gemini LLM
4. Builds a JSON-formatted resume following the JSON Resume schema (a trimmed example appears at the end of this section)
5. Sends the JSON resume via Webhook Notification
6. Persists the output by saving the file to disk

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node with the LinkedIn profile, Bright Data zone, and the webhook notification URL. For testing purposes, you can obtain a webhook URL using https://webhook.site/

How to customize this workflow to your needs

- Add Language Translation: Insert a translation LLM node to support multilingual profiles.
- Generate PDF Resumes: Convert JSON to formatted PDF resumes using an HTML-to-PDF module.
- Push to ATS or CRM: Add integration nodes to pipe data into applicant tracking systems (ATS), CRMs, or databases.
- Use Alternative LLMs: Swap Gemini with OpenAI or Anthropic Claude if preferred.
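The JSON Resume format targeted in step 4 is the open standard from jsonresume.org. A trimmed example of the shape the Gemini step is asked to produce (all values invented for illustration):

```javascript
// A minimal JSON Resume document (https://jsonresume.org/schema) with sample values only.
const resume = {
  basics: {
    name: 'Jane Doe',
    label: 'Senior Software Engineer',
    summary: 'Backend engineer with 8 years of experience in distributed systems.',
  },
  work: [
    { name: 'Acme Corp', position: 'Senior Engineer', startDate: '2020-03' },
  ],
  education: [
    { institution: 'State University', studyType: 'BSc', area: 'Computer Science' },
  ],
  skills: [
    { name: 'Distributed Systems', keywords: ['Kafka', 'gRPC'] },
  ],
};
```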
by Adam Bertram
LintGuardian: Automated PR Linting with n8n & AI

What It Does

LintGuardian is an n8n workflow template that automates code quality enforcement for GitHub repositories. When a pull request is created, the workflow automatically analyzes the changed files, identifies linting issues, fixes them, and submits a new PR with corrections. This eliminates manual code style reviews, reduces back-and-forth comments, and lets your team focus on functionality rather than formatting.

How It Works

The workflow is triggered by a GitHub webhook when a PR is created. It fetches all changed files from the PR using the GitHub API, processes them through an AI-powered linting service (Google Gemini), and automatically generates fixes. The AI agent then creates a new branch with the corrected files and submits a "linting fixes" PR against the original branch. Developers can review and merge these fixes with a single click, keeping code consistently formatted with minimal effort.

Prerequisites

To use this template, you'll need:

- n8n instance: Either self-hosted or using n8n.cloud
- GitHub repository: Where you want to enforce linting standards
- GitHub Personal Access Token: With permissions for repo access (repo, workflow, admin:repo_hook)
- Google AI API Key: For the Gemini language model that powers the linting analysis
- GitHub webhook: Configured to send PR creation events to your n8n instance

Setup Instructions

1. Import the template into your n8n instance.
2. Configure credentials:
   - Add your GitHub Personal Access Token under Credentials → GitHub API
   - Add your Google AI API key under Credentials → Google Gemini API
3. Update repository information:
   - Locate the "Set Common Fields" code node at the beginning of the workflow
   - Change the gitHubRepoName and gitHubOrgName values to match your repository

```javascript
const commonFields = {
  'gitHubRepoName': 'your-repo-name',
  'gitHubOrgName': 'your-org-name'
}
```

4. Configure the webhook: Create a file named .github/workflows/lint-guardian.yml in your repository, replacing the webhook URL in the "Trigger n8n Workflow" step with your own:

```yaml
name: Lint Guardian
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  trigger-linting:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger n8n Workflow
        uses: fjogeleit/http-request-action@v1
        with:
          url: 'https://your-n8n-instance.com/webhook/1da5a6e1-9453-4a65-bbac-a1fed633f6ad'
          method: 'POST'
          contentType: 'application/json'
          data: |
            {
              "pull_request_number": ${{ github.event.pull_request.number }},
              "repository": "${{ github.repository }}",
              "branch": "${{ github.event.pull_request.head.ref }}",
              "base_branch": "${{ github.event.pull_request.base.ref }}"
            }
          preventFailureOnNoResponse: true
```

5. Customize linting rules (optional):
   - Modify the AI Agent's system message to specify your team's linting preferences
   - Adjust file handling if you have specific file types to focus on or ignore

Security Considerations

When creating your GitHub Personal Access Token, remember to:

- Choose the minimal permissions needed (repo, workflow, admin:repo_hook)
- Set an appropriate expiration date
- Treat your token like a password and store it securely
- Consider using GitHub's fine-grained personal access tokens for more limited scope

As GitHub documentation notes: "Personal access tokens are like passwords, and they share the same inherent security risks."
Extending the Template

You can enhance this workflow by:

- Adding Slack notifications when linting fixes are submitted
- Creating custom linting rules specific to your team's needs
- Expanding it to handle different types of code quality checks
- Adding approval steps for more controlled environments

This template provides an excellent starting point that you can customize to fit your team's exact workflow and code style requirements.
by Karam Ghazzi
Description 📄

Turn your Slack workspace into a smart AI-powered HelpDesk using this workflow. This automation listens to Slack messages and uses an AI assistant (powered by OpenAI or any other LLM) to respond to employee questions about HR, IT, or internal policies by referencing your internal documentation (such as the Policy Handbook). If the answer isn't available, it can optionally email the relevant department (HR or IT) and ask them to update the handbook.

It remembers recent messages per user, cleans up intermediate responses to keep Slack threads tidy, and ensures your team gets consistent and helpful answers, without manually searching docs or escalating simple questions. Perfect for growing teams who want to streamline internal support using n8n, Slack, and AI.

How it works 🛠️

This workflow turns n8n into a Slack-based HelpDesk assistant powered by AI. It listens to Slack messages using the Events API, detects whether a real user is asking a question, and responds using OpenAI (or another LLM of your choice). Here's how it works step by step:

1. Webhook Trigger: The workflow starts when a message is posted in Slack via the Events API. It filters out any messages from bots to avoid loops (see the sketch after this section).
2. Identify the User: It fetches the full Slack profile of the user who posted the message and stores their name.
3. Send Receipt Message: An initial message is sent to the user saying, "I'm on it!", confirming their request is being processed.
4. AI Response Handling: The message is processed using the OpenAI Chat model (GPT-4o by default). Before responding, it checks if the query matches any HR or IT policy from the Policy Handbook. If the question can't be answered based on internal data, it can optionally alert the HR or IT department via Gmail (after user confirmation).
5. Memory Retention: It keeps track of the last 5 interactions per user using Simple Memory, so it remembers previous context in a Slack conversation.
6. Cleanup and Final Reply: It deletes the initial receipt message and sends a final, clean response to the user.

How to use 🚀

1. Clone the Workflow: Download or import the JSON workflow into your n8n instance.
2. Connect Your Credentials:
   - Slack API (for messaging)
   - Google Sheets API (for department contact info)
   - Google Docs API (for the Policy Handbook)
   - Gmail API (optional, for notifying departments)
   - OpenAI or another AI model
3. Slack Setup: Set up a Slack App and enable the Events API. Subscribe to message events and point them to the Webhook URL generated by the workflow.
4. Customize Responses: Edit the initial and final Slack message nodes if you want to personalize the wording. Swap out the LLM (ChatGPT) with your preferred model in the AI Agent node.
5. Adjust AI Behavior: Tune the prompt logic in the "AI Agent" node if you want the AI to behave differently or access different data sources.
6. Expand Memory or Integrations: Use external databases to store longer histories. Integrate with tools like Asana, Notion, or CRM platforms for further automation.

Requirements 📋

- n8n (self-hosted or cloud)
- Slack Developer Account & App
- OpenAI (or any LLM provider)
- Google Sheets with department contact details
- Google Docs containing the Policy Handbook
- Gmail account (optional, for email alerts)
- Knowledge of Slack Events API setup
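Two details worth noting when wiring up the Events API: Slack first sends a one-time url_verification challenge that must be echoed back, and every later message event should be checked for bot authorship before it reaches the AI. A sketch of both checks as an n8n Code node placed right after the webhook (field names follow Slack's event payload):

```javascript
// Guard node for the Slack Events API webhook (the payload arrives under json.body in n8n).
const body = $input.first().json.body;

// 1. Slack's one-time URL verification handshake: echo the challenge back.
if (body.type === 'url_verification') {
  return [{ json: { challenge: body.challenge } }];
}

// 2. Drop bot-authored messages so the assistant never answers itself (avoids loops).
const event = body.event ?? {};
if (event.bot_id || event.subtype === 'bot_message') {
  return []; // nothing to process
}

return [{ json: event }]; // a genuine user message; pass it on
```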