by Seven Liu
**Who's it for** 👥 This template is perfect for content creators, marketers, and researchers managing WeChat public account articles! 🚀 It's ideal for n8n newcomers or anyone wanting to save time on manual content analysis, especially if you use Google Sheets for tracking. 📊 Whether you're into AI, 欧阳良宜, or automation, this is for you! 😄

**How it works / What it does** 🔧 This workflow automates the retrieval, filtering, classification, and summarization of WeChat articles. 🌐 It reads RSS feed links from a Google Sheet, filters articles from the last 10 days ⏳, cleans HTML content 🧹, classifies them as relevant or not 🎯, generates insightful Chinese summaries with AI 🤖, and saves the results to Google Sheets and Notion. 📝 Outputs are Slack-formatted for team collaboration! 💬

**How to set up** 🛠️
1. Prepare Google Sheets: Use your own documentId (replace the example) and set up the sheets "Save Initial Links" (gid=198451233) and "Save Processed Data" (gid=1936091950). 📋
2. Configure Credentials: Add Google Sheets and OpenAI API credentials—avoid hardcoding keys! 🔐
3. Set RSS Feed: Update the rss_feed_url in the "RSS Read" node with your WeChat RSS feed. 🌐
4. Customize AI: Tweak the "Relevance Classification" and "Basic LLM Chain" prompts for your topics (e.g., 欧阳良宜, AI). 🎨
5. Notion (Optional): Swap the databaseId (e.g., 22e79d55-2675-8055-a143-d55302c3c1b1) with your own. 📚
6. Run Workflow: Trigger manually via the "When clicking 'Execute workflow'" node. 🚀

**Requirements** ✅
- n8n account with Google Sheets and OpenAI integrations.
- Access to a WeChat public account RSS feed.
- Basic JSON and node configuration knowledge.

**How to customize the workflow** 🎛️
- Topic Adjustment: Update the categories in "Relevance Classification" for new topics (e.g., "technology", "education"). 🌱
- Summary Length: Modify the LLM prompt in "Basic LLM Chain" to adjust length or style. ✂️
- Output Destination: Add Slack or Email nodes for more outputs. 📩
- Date Filter: Change the "IF (Filter by Date)" condition, e.g., 7 days instead of 10 (see the sketch after this list). ⏰
- Scalability: Use a "Schedule Trigger" node for scheduled automation. ⏳
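For reference, here is a minimal sketch of what the 10-day date filter could look like if implemented in an n8n Code node instead of the IF node; the `pubDate` field name is an assumption and should be matched to your RSS feed's schema.

```javascript
// Hedged sketch: keep only RSS items published within the last N days.
// Assumes each item carries a `pubDate` field (adjust to your feed's schema).
const DAYS = 10;
const cutoff = Date.now() - DAYS * 24 * 60 * 60 * 1000;

return $input.all().filter((item) => {
  const published = new Date(item.json.pubDate).getTime();
  return !Number.isNaN(published) && published >= cutoff;
});
```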
by Sebastian/OptiLever
Tired of spending HOURS writing product descriptions that don't rank or convert? This could be your solution. This free Product Description Writer workflow for n8n uses a multi-agent AI system to turn your product list into conversion-focused, SEO-ready copy. It analyzes your product images, identifies key features, and writes optimized titles and descriptions for platforms like Shopify and Google Shopping. It can process your entire catalog in minutes, saving you countless hours of manual work.

This workflow is perfect for:
- 🛒 Shopify stores
- 🛒 Etsy sellers
- 🛒 Product managers
- 🛒 Digital marketers
- 🛒 Anyone who hates writing product copy manually!

**How it works** This workflow automates the entire product description process in a few high-level steps:
1. Reads Your Products: The workflow starts by reading product data from your specified Google Sheet, including the product name, an image URL, and optional fields like brand voice or target market.
2. Analyzes Product Images: It downloads each product image and uses an AI vision model (GPT-4o-mini) to perform a detailed visual analysis, extracting objective information like materials, colors, features, and structure (a hedged sketch follows below).
3. Writes Optimized Copy: The visual analysis and your original data are passed to two specialized AI agents. The first drafts a Shopify-optimized title and description, while the second refines it and generates additional SEO-focused copy for Google Merchant Center.
4. Updates Your Spreadsheet: The final, optimized product titles and descriptions for both Shopify and Google are automatically written back to the original Google Sheet.

**Set up steps** Setting up this workflow takes only a few minutes. You will need to configure credentials for the following services:
- **Google Sheets**: To allow the workflow to read your product list and write back the results.
- **OpenAI**: To power the AI agents that analyze images and generate the copy.
Detailed instructions and customization tips are included in the sticky notes inside the workflow itself.

**Benefits**
- **Automated Vision-Based Copywriting**: Reduces manual description-writing time.
- **Multi-Channel Ready**: Outputs are optimized for both Shopify and Google Merchant Center standards.
- **Brand Alignment**: Uses optional user-provided draft descriptions and brand voice to maintain brand tone.
- **SEO and Conversion Focus**: Titles and descriptions are optimized for both search engines and consumer engagement.
- **Image-Centric Accuracy**: Uses actual product images for accurate attribute extraction, minimizing errors from missing or vague text data.

**Tips & Customization**
- To adjust brand voice or tone, modify the system prompts in the Shopify and GMC AI agents.
- To extend the workflow for scheduled runs, add a cron trigger or a Google Sheets "status column" filter.
- For QA/debugging, consider adding logging nodes to Slack or Discord, or export AI outputs to a review sheet before updating the main sheet.
- To improve Shopify or GMC field mappings, edit the final Google Sheets update node's column settings.
- For speed optimization, the batch size in the Loop Over Items node can be adjusted, but be mindful of API rate limits.
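To make the image-analysis step (step 2 above) concrete, here is a hedged sketch of a vision call using the standard OpenAI Chat Completions API; the model name matches the description, but the prompt text and `productImageUrl` are illustrative placeholders, not the template's exact node configuration.

```javascript
// Hedged sketch of the vision-analysis step (not the template's exact node config).
const productImageUrl = "https://example.com/product.jpg"; // placeholder image URL

const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Objectively describe this product's materials, colors, features, and structure." },
          { type: "image_url", image_url: { url: productImageUrl } },
        ],
      },
    ],
  }),
});

const analysis = (await response.json()).choices[0].message.content;
console.log(analysis);
```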
by Ranjan Dailata
**Notice** Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for** The Automated Resume Job Matching Engine is an intelligent workflow designed for career platforms, HR tech startups, recruiting firms, and AI developers who want to streamline job-resume matching using real-time data from LinkedIn and job boards. This workflow is tailored for:
- **HR Tech Founders** - Building next-gen recruiting products
- **Recruiters & Talent Sourcers** - Seeking automated candidate-job fit evaluation
- **Job Boards & Portals** - Enriching user experience with AI-driven job recommendations
- **Career Coaches & Resume Writers** - Offering personalized job-fit analysis
- **AI Developers** - Automating large-scale matching tasks using LinkedIn and job data

**What problem is this workflow solving?** Manually matching a resume to a job description is time-consuming, biased, and inefficient. Additionally, accessing live job postings and candidate profiles requires overcoming web-scraping limitations. This workflow solves:
- Automated LinkedIn profile and job-post data extraction using Bright Data MCP infrastructure
- Semantic matching between job requirements and a candidate's resume using OpenAI 4o mini
- Pagination handling for high-volume job data
- End-to-end automation, from scraping to delivery via webhook, with persistence of the matched-job response to disk

**What this workflow does**
- Bright Data MCP for Job Data Extraction: Uses Bright Data MCP clients to extract multiple job listings (supports pagination) and pulls job data from LinkedIn using the pre-defined filtering criteria.
- OpenAI 4o mini LLM Matching Engine: Extracts paginated job data and textual job-description information from the scraped results via the Bright Data MCP scrape_as_html tool. The AI Job Matching node then compares each job description against the candidate's resume to generate match scores with insights.
- Data Delivery: Sends the final match report to a webhook notification endpoint and persists the AI-matched job response to disk.

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**
1. Please make sure to set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server (a hedged example follows below). Make sure to set the Bright Data API token in the Environments textbox as API_TOKEN=<your-token>.
7. Update the Set input fields for the candidate resume, keywords, and other filtering criteria.
8. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
9. Update the file name and path to persist on disk.
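As a hedged illustration of step 6, the MCP Client (STDIO) credential typically needs a command, arguments, and environment variables along these lines; in n8n the fields are labeled Command, Arguments, and Environments, and the `WEB_UNLOCKER_ZONE` entry is an assumption based on the zone name created above.

```json
{
  "command": "npx",
  "args": ["@brightdata/mcp"],
  "env": {
    "API_TOKEN": "<your-bright-data-api-token>",
    "WEB_UNLOCKER_ZONE": "mcp_unlocker"
  }
}
```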
**How to customize this workflow to your needs**
- Target Different Job Boards: Set the input fields with sites like Indeed, ZipRecruiter, or Monster.
- Customize Matching Criteria: Adjust the prompt inside the AI Job Match node; include scoring metrics like skills match %, experience relevance, or cultural fit (see the sketch after this list).
- Automate Scheduling: Use a Cron node to periodically check for new jobs matching a profile, or set triggers based on webhook or input-form submissions.
- Output Customization: Add Markdown/PDF formatting for report summaries, or extend with Google Sheets export for internal analytics.
- Enhance Data Security: Mask personal info before sending it to external endpoints.
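For the scoring-metrics idea mentioned above, a hypothetical JSON shape you could prompt the AI Job Match node to return might look like this; all field names are illustrative, not the template's actual schema.

```json
{
  "match_score": 82,
  "skills_match_pct": 75,
  "experience_relevance": "high",
  "cultural_fit": "medium",
  "summary": "Strong overlap on required skills; candidate lacks the preferred industry experience."
}
```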
by David Ashby
Complete MCP server exposing 3 IPQualityScore API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add IPQualityScore API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the IPQualityScore API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://ipqualityscore.com/api
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example below)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (3 total)
🔧 Json (3 endpoints)
• GET /json/email/{YOUR_API_KEY_HERE}/{USER_EMAIL_HERE}: Email Validation
• GET /json/phone/{YOUR_API_KEY_HERE}/{USER_PHONE_HERE}: Phone Validation
• GET /json/url/{YOUR_API_KEY_HERE}/{URL_HERE}: Malicious URL Scanner

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native IPQualityScore API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
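As a hedged example of the $fromAI() pattern, the URL expression inside the email-validation HTTP Request node might look like the following; the parameter name and description are illustrative.

```
https://ipqualityscore.com/api/json/email/YOUR_API_KEY_HERE/{{ $fromAI('email', 'The email address to validate', 'string') }}
```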
by Sirhexalot
This workflow facilitates seamless synchronization between Entra (Microsoft Azure AD) and Zammad. It automates the following processes:
1. Fetch Entra Group Members: Retrieves users from a designated Entra group. These users are candidates for synchronization.
2. Create Universal User Object: Extracts key user information, such as email, phone, and name, and formats it for Zammad compatibility (a hedged sketch follows below).
3. Synchronize with Zammad: Identifies users in Zammad who need updates based on Entra data, adds new users from Entra to Zammad, and deactivates users in Zammad if they are no longer in the Entra group.

**Key Features**
- **Dynamic Matching**: Compares users from Entra with existing Zammad users based on email and updates records accordingly.
- **Efficient Management**: Automatically creates, updates, or deactivates Zammad users based on their status in Entra.
- **Custom Fields**: Supports custom field mapping, ensuring enriched user profiles in Zammad.

**Setup Instructions**
1. Microsoft Entra Integration: Ensure proper API permissions for accessing Entra groups and members, and configure Microsoft OAuth2 credentials in n8n.
2. Zammad Integration: Set up Zammad API credentials with appropriate access rights, and customize the workflow to include additional fields or map existing fields as needed.
3. Run Workflow: Trigger the workflow manually or set up an automation schedule (e.g., daily sync), then review the created/updated/deactivated users in Zammad.

**Use Cases**
- **IT Administration**: Keep your support system in sync with the organization's Entra data.
- **User Onboarding**: Automatically onboard new hires into Zammad based on Entra groups.
- **Access Management**: Ensure accurate and up-to-date user records in Zammad.

**Prerequisites**
- Access to an Entra (Azure AD) environment with group data.
- A Zammad instance with API credentials for user management.
- A custom field in the Zammad User object (entra_key) of type String.
- A custom field in the Zammad User object (entra_object_type) of type Single selection, with two key-value pairs: user = User, contact = Contact.

This workflow is fully customizable and can be adapted to your organization's specific needs. Save time and reduce manual errors by automating your user sync process with this template! If you have found an error or have any suggestions, please report them on GitHub.
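As a hedged sketch of the "Create Universal User Object" step, a Code node mapping Microsoft Graph member fields to a Zammad-compatible shape could look like this; the Graph field names follow common conventions but should be verified against your actual group-member payload.

```javascript
// Hedged sketch: map Entra (Microsoft Graph) group members to Zammad user fields.
return $input.all().map((item) => {
  const u = item.json;
  return {
    json: {
      email: (u.mail || u.userPrincipalName || "").toLowerCase(),
      firstname: u.givenName || "",
      lastname: u.surname || "",
      phone: (u.businessPhones && u.businessPhones[0]) || "",
      entra_key: u.id,             // custom Zammad field linking back to Entra
      entra_object_type: "user",   // custom single-selection field (user/contact)
      active: true,
    },
  };
});
```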
by WeWeb
This n8n template helps you build a full AI-powered LinkedIn content generator with just a few clicks. Paired with the free WeWeb UI template, it becomes a ready-to-use web app where users can:
- Add their own OpenAI API key
- Customize the prompt and define 6 content topics
- Edit the AI-generated topics
- Choose when to generate LinkedIn posts, complete with hashtags and an optional image

**Who This Is For** Perfect for marketers, indie hackers, and solopreneurs who want to build their personal brand on LinkedIn while staying in control of what gets posted.

🧠 **What Makes This Different** Unlike most AI agents, you stay fully in control:
- You define the tone and focus via the prompt.
- You choose which topics to keep or modify.
- You decide when to generate a post.
- You can build on top of this and create your own SaaS product.
It's also modular and extendable—hook it up to your backend, add user login, or feed AI improvements based on user input.

⚙️ **How It Works**
1. Triggering Events: The app includes 3 pre-configured triggers, ready to be hooked into your WeWeb frontend. Just update the webhook URLs after duplicating the n8n workflow (a hedged example call follows below).
2. Topic Generation: A call is made to OpenAI (GPT-4) to generate topic ideas based on your prompt.
3. Post Creation: Once topics are approved or edited, GPT-4 writes full posts with suggested hashtags.
4. Image Generation (Optional): If enabled, a DALL·E call generates a relevant image.
5. Everything Stays Local: All data and images are handled locally; no cloud storage setup is needed.

🧪 **Requirements & Setup** No fancy infrastructure required. Here's what helps you get started:
- **Free WeWeb account** (recommended) to use the frontend UI template
- **OpenAI account** with API access (for GPT-4 and DALL·E)
- **n8n account** (self-hosted or cloud) to run the backend workflow
The template is completely free to use. Since each user adds their own OpenAI API key, you don't need to worry about usage costs or rate limits on your end.

🔧 **Want to Go Further?** This setup is beginner-friendly, but developers can:
- Add user accounts
- Save post history
- Feed user feedback back into the prompt logic
- Launch their own branded version as a SaaS
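As a hedged example of wiring the frontend to one of those triggers, a WeWeb action calling an n8n webhook might look like this; the URL path and payload fields are placeholders for your own webhook and schema.

```javascript
// Hedged sketch: call an n8n webhook trigger from the frontend.
const userOpenAiKey = "sk-..."; // supplied by the user in the WeWeb UI

const res = await fetch("https://your-n8n-instance.com/webhook/generate-topics", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    apiKey: userOpenAiKey,
    prompt: "Write for B2B founders about no-code automation",
  }),
});

const topics = await res.json();
```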
by David Ashby
Complete MCP server exposing 2 eBay Analytics API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Analytics API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the eBay Analytics API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Rate_Limit (1 endpoint)
• GET /rate_limit/: Retrieve Application Rate Limits
🔧 User_Rate_Limit (1 endpoint)
• GET /user_rate_limit/: Retrieve User Rate Limits

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Analytics API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Javier Hita
**Who is this for?** This workflow is perfect for sales teams, business development professionals, recruitment agencies, and fractional CFO service providers who need to identify and qualify companies actively hiring. Whether you're prospecting for new clients, building a database of potential customers, or researching market opportunities, this automated solution saves hours of manual research while delivering high-quality, AI-analyzed leads.

**What problem is this workflow solving?** Finding qualified prospects in the finance sector is time-consuming and often inefficient. Traditional methods involve:
- Manually browsing LinkedIn job postings for hours
- Difficulty distinguishing between genuine opportunities and recruitment spam
- Inconsistent lead categorization and qualification
- Risk of contacting the same companies multiple times
- Lack of structured data for sales team follow-up
This workflow automates the entire lead generation process, from data collection to AI-powered qualification, ensuring you focus only on the most promising opportunities.

**What this workflow does** This comprehensive lead generation system performs six key functions:
1. Automated LinkedIn Job Scraping: Uses Apify's reliable LinkedIn Jobs Scraper to extract detailed job postings for finance positions, including company information, job descriptions, and contact details.
2. Smart Data Processing: Removes duplicates, filters companies by size, and structures data for consistent analysis across all leads.
3. Intelligent Lead Categorization: Compares new leads against your existing database to optimize processing and avoid duplicate work.
4. AI-Powered Qualification: Leverages OpenAI's GPT-4 Mini to analyze each lead and determine (a hedged example of the output shape follows below):
   - Company Category: Consumer companies, Fractional CFO services, Recruiting agencies, or Other
   - Finance Role Validation: Confirms the position is genuinely finance-related
   - Seniority Level: Entry, Mid, Senior, Director, or C-Level classification
   - Job Summary: Concise description for quick sales-team review
5. Automated Database Management: Stores qualified leads in Airtable with comprehensive profiles, preventing duplicates while maintaining data integrity.
6. Lead Scoring & Routing: Prioritizes leads based on processing status and qualification results for efficient sales-team follow-up.
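To picture the qualification output, a hypothetical JSON result from the AI analysis, mirroring the Airtable fields described in the setup, might look like this; the field names are assumptions, not the node's actual schema.

```json
{
  "company_category": "Fractional CFO",
  "is_finance_job": true,
  "seniority_level": "Director",
  "job_summary": "Growing e-commerce brand hiring a finance director to own FP&A, cash-flow forecasting, and investor reporting."
}
```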
**Setup**

**Prerequisites** You'll need accounts for three services:
- **Airtable** (free tier supported) - For lead storage and management
- **Apify** (14-day free trial available) - For LinkedIn job scraping
- **OpenAI** (pay-per-use) - For AI-powered lead analysis

**Step 1: Create Required Credentials**
Apify API Credential
1. Sign up for an Apify account at apify.com
2. Navigate to Settings → Integrations → API tokens
3. Create a new API token
4. In n8n, create a new Apify API credential with your token
OpenAI API Credential
1. Create an account at platform.openai.com
2. Generate an API key in the API section
3. In n8n, create a new OpenAI credential with your key
Airtable Personal Access Token
1. Go to airtable.com/create/tokens
2. Create a personal access token with the following scopes: data.records:read, data.records:write, schema.bases:read
3. In n8n, create a new Airtable Personal Access Token credential

**Step 2: Set Up Airtable Base** Create a new Airtable base with the following structure:
Table name: Qualified Leads
Required fields:
- Company Name (Single line text)
- Job Title (Single line text)
- Is Finance Job (Checkbox)
- Seniority Level (Single select: Entry, Mid, Senior, Director, C-Level)
- Company Category (Single select: Consumer, Recruiting, Fractional CFO, Other)
- Job Summary (Long text)
- Company LinkedIn (URL)
- Job Link (URL)
- Posted Date (Date)
- Location (Single line text)
- Industry (Single line text)
- Company Employees (Number)

**Step 3: Configure the Workflow**
1. Import the Workflow: Copy the JSON and import it into your n8n instance
2. Update Credentials: Replace the placeholder credential IDs with your actual credential IDs in the "Scrape LinkedIn Jobs" node (Apify credential), the "OpenAI GPT-4 Mini" node (OpenAI credential), and the "Save to Airtable" and "Get Existing Leads" nodes (Airtable credential)
3. Configure Airtable Connection: Update the base ID and table ID in both Airtable nodes
4. Set Search Parameters: In the "Edit Variables" node, configure linkedinUrls (your target LinkedIn job search URLs), maxEmployees (maximum company size filter, default: 200), and batchSize (processing batch size for API efficiency, default: 5); a hedged example follows below

**Step 4: Test the Workflow**
1. Start with a small test by setting count: 50 in the HTTP Request node
2. Use a specific LinkedIn job search URL (e.g., "CFO jobs in New York")
3. Execute the workflow manually and verify the results in your Airtable base
4. Review the AI categorization accuracy and adjust the prompts if needed

**How to customize this workflow to your needs**

Targeting Different Roles: Modify the LinkedIn search URLs in the "Edit Variables" node to target different positions:
- "https://www.linkedin.com/jobs/search/?keywords=Controller"
- "https://www.linkedin.com/jobs/search/?keywords=Finance%20Director"
- "https://www.linkedin.com/jobs/search/?keywords=VP%20Finance"

Adjusting Company Size Filters: Change the maxEmployees parameter to focus on different company segments:
- Startups: 1-50 employees
- SMBs: 51-500 employees
- Enterprise: 500+ employees

Customizing AI Analysis: Enhance the GPT-4 prompt in the "AI Lead Analyzer" node to include:
- Industry-specific criteria
- Geographic preferences
- Technology stack requirements
- Company growth stage indicators

Integration Options: Extend the workflow by adding:
- **Slack notifications** for new qualified leads
- **Email alerts** for high-priority prospects
- **CRM integration** (Salesforce, HubSpot, Pipedrive)
- **Lead enrichment** with additional data sources

Scheduling Automation: Set up the workflow to run automatically:
- **Daily**: For active prospecting campaigns
- **Weekly**: For ongoing market research
- **Monthly**: For periodic database updates
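As a hedged example of Step 3's search parameters, the values set in the "Edit Variables" node could look like this if expressed in a Code node; the variable names follow the setup text, while the URL and limits are sample values.

```javascript
// Hedged sketch of the "Edit Variables" values (sample data, not the template's defaults).
return [
  {
    json: {
      linkedinUrls: [
        "https://www.linkedin.com/jobs/search/?keywords=CFO&location=New%20York",
      ],
      maxEmployees: 200, // skip companies larger than this
      batchSize: 5,      // items per batch, to stay under API rate limits
    },
  },
];
```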
**Performance & Cost Optimization**
- **API Efficiency**: The workflow processes leads in batches to optimize API usage
- **Smart Deduplication**: Avoids re-processing existing leads to reduce costs
- **Configurable Limits**: Adjust batch sizes and employee-count filters based on your needs
- **Expected Costs**: Approximately $0.05-$0.20 per 100 analyzed leads (OpenAI costs)

**Troubleshooting**
Common issues:
- **Rate Limiting**: Increase the delays between API calls if you encounter rate limits
- **Data Quality**: Review the LinkedIn search URLs for relevance to your target market
- **AI Accuracy**: Adjust the prompts if the categorization doesn't match your criteria
- **Airtable Errors**: Verify that field names match exactly between the workflow and your base structure

Support resources:
- Apify LinkedIn Scraper Documentation
- OpenAI API Documentation
- Airtable API Reference

Transform your lead generation process with this powerful, AI-driven workflow that delivers qualified prospects ready for immediate outreach.
by John Pranay Kumar Reddy
✨ **Summary** Efficiently monitor Kubernetes environments by sending only unique error logs from Grafana Loki to Slack. Reduces alert fatigue while keeping your team informed about critical log events.

🧑‍💻 **Who's it for**
- DevOps or SRE engineers running EKS/GKE/AKS
- Anyone using Grafana Loki and Promtail for centralized logging
- Teams that want Slack alerts but hate alert spam

🔍 **What it does** This n8n workflow queries your Loki logs every 5 minutes, filters only the critical ones (error, timeout, exception, etc.), removes duplicate alerts within the batch, and sends clean alerts to a Slack channel with full metadata (pod, namespace, node, container, log, timestamp).

🧠 **How it works**
1. 🕒 Schedule Trigger: Runs every 5 minutes (customizable)
2. 🌐 Loki HTTP Query: Pulls logs from the last 10 minutes, matching keywords such as error, failed, oom, etc. (a hedged sketch of the request follows below)
3. 🧹 Log Parsing: Extracts log fields (pod, container, etc.) and skips empty/malformed results
4. 🧠 Deduplication: Removes repeated error messages (within the query window)
5. 📤 Slack Notification: Sends a nicely formatted message to Slack

⚙️ **Requirements**
| Tool | Notes |
| --- | --- |
| Loki | Exposed internally or externally |
| Slack app | With chat:write OAuth scope |
| n8n | Cloud or self-hosted |

🔧 **How to Set It Up**
1. Import the JSON file into n8n
2. Update the Loki API URL (e.g., http://loki-gateway.monitoring.svc.cluster.local), the Slack bearer token (via credentials), and the target Slack channel (e.g., #k8s-alerts)
3. (Optional) Change the keywords in the query regex
4. Activate the workflow
5. Ensure the n8n pod/container has access to your Kubernetes cluster/pods/namespaces

🛠 **How to Customize**
- Want more or fewer keywords? Adjust the regex in the Query Loki for Error Logs node.
- Need stronger deduplication? Enhance the Remove Duplicate Alerts node.
- Want 5-log summaries every 5 minutes? Fork this and add a Batch + Slack group sender.

*Grafana Loki logs to Slack output*
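As a hedged sketch of the Loki query step, the request the HTTP node issues might be built like this; the label selector, keyword list, and limit are examples to adapt to your own labels and the template's regex.

```javascript
// Hedged sketch: build a Loki query_range request for the last 10 minutes.
const base = "http://loki-gateway.monitoring.svc.cluster.local/loki/api/v1/query_range";
const query = '{namespace=~".+"} |~ "(?i)(error|failed|timeout|exception|oom)"';
const end = Date.now() * 1e6;        // Loki timestamps are in nanoseconds
const start = end - 10 * 60 * 1e9;   // 10 minutes ago, in nanoseconds

const url = `${base}?query=${encodeURIComponent(query)}` +
  `&start=${start}&end=${end}&limit=500`;

return [{ json: { url } }];
```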
by Kumar Shivam
This workflow automates the restaurant POS (Point of Sale) data management process, facilitating seamless order handling, customer tracking, inventory management, and sales reporting. It retrieves order details, processes payment information, updates inventory, and generates real-time sales reports, all integrated into a centralized system that improves restaurant operations. The workflow integrates various systems, including a POS terminal to gather order data, payment gateways to process transactions, inventory management tools to update stock, and reporting tools like Google Sheets or an internal database for generating sales and performance reports.

**Who Needs Restaurant POS Automation?** This POS automation workflow is ideal for restaurant owners, managers, and staff looking to streamline their operations:
- Restaurant Owners – Automate order processing, track sales, and monitor inventory to ensure smooth operations.
- Managers – Access real-time sales data and performance reports to make informed decisions.
- Staff – Reduce manual work, focusing on providing better customer service while the system handles orders and payments.
- Inventory Teams – Automatically update inventory levels based on orders and ingredient usage.
If you need a reliable and automated POS solution to manage restaurant orders, payments, inventory, and reporting, this workflow minimizes human error, boosts efficiency, and saves valuable time.

**Why Use This Workflow?**
- End-to-End Automation – Automates everything from order input to inventory updates and sales reporting.
- Seamless Integration – Connects POS, payment systems, inventory management, and reporting tools for smooth data flow (if needed).
- Real-Time Data – Provides up-to-the-minute reports on sales, stock levels, and order statuses.
- Scalable & Efficient – Supports multiple locations, multiple users, and high order volumes.

**Step-by-Step: How This Workflow Manages POS Data**
1. Collect Orders – Retrieves order details from the POS system, including customer information, ordered items, and payment details.
2. Update Inventory – Decreases inventory levels based on sold items, ensuring stock counts are always accurate (a hedged sketch follows at the end of this description).
3. Generate Reports – Compiles sales, revenue, and inventory data into real-time reports and stores them in Google Sheets or an internal database.
4. Track Customer Data – Keeps a log of customer details and order history for better service and marketing insights.

**Customization: Tailor to Your Needs**
- Multiple POS Systems – Adapt the workflow to work with different POS systems or terminals based on your restaurant setup.
- Custom Reporting – Modify the reporting format or include specific sales metrics (e.g., daily totals, best-selling items, employee performance).
- Inventory Management – Adjust inventory updates to include alerts when stock reaches critical levels or needs reordering.
- Integration with Accounting Software – Connect with platforms like QuickBooks for automated financial tracking.

🔑 **Prerequisites**
- POS System Integration – Ensure the POS system can export order data in a compatible format.
- Payment Gateway API – Set up the necessary API keys for payment processing (e.g., Stripe, PayPal).
- Inventory Management Tools – Use inventory software or databases that can automatically update stock levels.
- Reporting Tools – Use Google Sheets or an internal database to store and generate sales and inventory reports.

🚀 **Installation & Setup**
1. Configure Credentials: Set up API credentials for payment gateways and inventory management tools.
2. Import Workflow: Import the workflow into your automation platform (e.g., n8n, Zapier), then link the POS system, payment gateway, and inventory management systems.
3. Test & Run: Process a test order to ensure that data flows correctly through each step, and verify that inventory updates and reports are generated as expected.

⚠ **Important**
- Data Privacy – Ensure compliance with data protection regulations (e.g., GDPR, PCI DSS) when handling customer payment and order data.
- System Downtime – Monitor system performance to ensure that the workflow runs without disruptions during peak hours.

**Summary** This restaurant POS automation workflow integrates order management, payment processing, inventory updates, and real-time reporting, enabling efficient restaurant operations. Whether you are running a single location or a chain of restaurants, this solution streamlines daily tasks, reduces errors, and provides valuable insights, saving time and improving customer satisfaction. 🚀
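To illustrate the inventory-update step from the walkthrough above, a minimal sketch of decrementing stock by the quantities on an incoming order could look like this; all field names (`items`, `sku`, `qty`, `stock`) are assumptions to map onto your own POS export.

```javascript
// Hedged sketch: decrement stock counts by the quantities on an incoming order.
const order = $input.first().json; // assumed shape: { items: [{ sku, qty, stock }, ...] }
const REORDER_THRESHOLD = 5;

return order.items.map(({ sku, qty, stock }) => ({
  json: {
    sku,
    newStock: stock - qty,                      // value written back to the inventory store
    reorder: stock - qty <= REORDER_THRESHOLD,  // flag for a low-stock alert branch
  },
}));
```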
by Ranjan Dailata
**Who is this for?** This workflow is designed for:
- Data Analysts and Researchers, who need to gather and summarize information from Bing search results efficiently.
- Developers and Engineers, looking to integrate Bing search data into applications or services.
- Digital Marketers and SEO Specialists, interested in monitoring search engine results for specific keywords or topics.

**What problem is this workflow solving?** Manually extracting and summarizing information from search engine results is time-consuming and error-prone.

**What this workflow does** This workflow automates the process of querying Bing's Copilot Search, extracting structured data from the results, summarizing the information, and sending a notification via webhook. It leverages Microsoft Copilot to retrieve search results and integrates AI-powered tools for data extraction and summarization. Concretely, it automates the process by:
1. Performing Bing searches using Bright Data's Bing Search API.
2. Extracting structured data from the search results.
3. Summarizing the extracted information using AI tools.
4. Sending the summarized data to a specified endpoint via webhook.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token (a hedged request sketch follows below).
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Perform a Bing Copilot Request node with the prompt you wish to search with.
6. Update the Structured Data Webhook Notifier node with the webhook endpoint of your choice.
7. Update the Summary Webhook Notifier node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**
- Modify Search Queries: Adjust the search terms to target different topics or keywords.
- Change Data Extraction Logic: Customize the extraction process to capture specific data points from the search results.
- Alter Summarization Techniques: Integrate different AI models or adjust parameters to change how summaries are generated.
- Update Webhook Endpoints: Direct the summarized data to different endpoints as required.
- Schedule Workflow Runs: Set up automated triggers to run the workflow at desired intervals.
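As a hedged sketch of what a Web Unlocker call behind that Header Auth credential could look like, based on Bright Data's public request API; the zone name, target URL, and format are placeholders.

```javascript
// Hedged sketch of a Bright Data Web Unlocker request (placeholder zone and URL).
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <your-web-unlocker-token>",
  },
  body: JSON.stringify({
    zone: "web_unlocker1",
    url: "https://www.bing.com/search?q=latest+AI+news",
    format: "raw",
  }),
});

const html = await res.text();
```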
by Tony Paul
**How it works**
Download the Google Sheet template here, upload it to your own Google Sheets account, and select it in the Google Sheets node.
1. Scheduled trigger: Runs once a day at 8 AM (server time).
2. Fetch product list: Reads your "master" sheet (product_url + last known price) from Google Sheets.
3. Loop with delay: Iterates over each row (product) one at a time, inserting a short pause (20 s) between HTTP requests to avoid blocking.
4. Scrape current price: Loads each product_url and extracts the current price via a simple CSS selector.
5. Compare & normalize: Compares the newly scraped price against the last_price from your sheet, calculates the percentage change, and tags items where price_changed == true (a hedged sketch follows below).
6. On price change:
   - Send alert: Formats a Telegram message ("Price Drop" or "Price Hike") and pushes it to your configured chat.
   - Log history: Appends a new row to a separate price_tracking tab with timestamp, old price, new price, and % change.
7. Update master sheet: After a 1 min pause, writes the updated current_price back to your "master" sheet so future runs use it as the new baseline.

**Set up steps**
1. Google Sheets credentials (~5 min)
   - Create a Google Sheets OAuth credential in n8n.
   - Copy your sheet's ID and ensure you have two tabs: product_data (columns: product_url, price) and price_tracking (columns: timestamp, product_url, last_price, current_price, price_diff_pct, price_changed).
   - Paste the sheet ID into both Google Sheets nodes ("Read" and "Append/Update").
2. Telegram credentials (~5 min)
   - Create a Telegram bot token via BotFather.
   - Copy your chat_id (for your target group or personal chat).
   - Add those credentials to n8n and drop them into the "Telegram" node.
3. Workflow parameters (~5 min)
   - Verify the schedule in the Schedule Trigger node is set to 08:00 (or adjust to your preferred run time).
   - In the Loop Over Items node, confirm "Batch Size" is 1 (to process one URL at a time).
   - Adjust the "Delay to avoid Request Blocking" node if your site requires a longer pause (the default is 20 s).
   - In the "Parse Data From The HTML Page" node, double-check that the CSS selector matches how prices appear on your target site.

Once credentials are in place and your sheet tabs match the expected column names, the flow is ready to activate. Total setup time is under 15 minutes—detailed notes are embedded as sticky comments throughout the workflow to help you tweak selectors, change timeouts, or adjust sheet names without digging into code.
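Finally, a hedged sketch of the compare-and-normalize step; it assumes the scraped price arrives as a string like "$1,299.00" and that the column names match the price_tracking tab described above.

```javascript
// Hedged sketch: normalize the scraped price and compute the change vs. the sheet.
return $input.all().map((item) => {
  const { product_url, last_price, scraped_price } = item.json;
  const current = parseFloat(String(scraped_price).replace(/[^0-9.]/g, ""));
  const last = parseFloat(last_price);
  const diffPct = last ? ((current - last) / last) * 100 : 0;
  return {
    json: {
      product_url,
      last_price: last,
      current_price: current,
      price_diff_pct: Math.round(diffPct * 100) / 100, // rounded to two decimals
      price_changed: current !== last,
    },
  };
});
```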