by Yaron Been
Analyze competitor Instagram engagement by scraping profiles with Bright Data and scoring them with GPT-5.4. This workflow reads competitor Instagram URLs from a Google Sheet, scrapes their profile data using the Bright Data Instagram Profiles dataset, validates the API response, then sends the data to GPT-5.4 for engagement scoring. Each competitor is scored on follower growth, engagement rate quality, content consistency, and audience authenticity (0-25 each, total 0-100). High-scoring competitors trigger email alerts.

How it works:
- Schedule trigger runs the workflow daily.
- Reads competitor URLs from the 'competitor_accounts' sheet.
- Sends each URL to Bright Data's Instagram Profiles API.
- Validates the API response (handles async/error cases).
- GPT-5.4 analyzes profile metrics and scores engagement quality.
- Parses the structured JSON response.
- Filters by AI confidence (>= 0.7 threshold).
- Routes high scorers (engagement_score >= 75) to 'top_performers' with a Gmail alert.
- Routes normal results to 'engagement_analysis'.
- Low-confidence results go to 'low_confidence_engagement' for review.

Setup:
- Create a Google Sheet with a 'competitor_accounts' tab containing columns: competitor_name, url, our_industry.
- Add output tabs: engagement_analysis, top_performers, low_confidence_engagement.
- Configure all credentials (Bright Data API, OpenAI, Google Sheets, Gmail).

Requirements:
- Bright Data API account (~$0.01-0.03 per profile scrape).
- OpenAI API account (GPT-5.4 costs ~$0.003-0.008 per call).
- Google Sheets OAuth2 credentials.
- Gmail OAuth2 credentials.

Notes:
- Add competitors one per row with their full Instagram profile URL.
- The engagement_score threshold of 75 can be adjusted in the IF node.
- Low-confidence results may indicate incomplete profile data from Bright Data.
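The parse/filter/route logic can be sketched as a small function like one an n8n Code node might run. The field names (engagement_score, confidence), the 0.7 and 75 thresholds, and the tab names come from the description; the function name itself is illustrative:

```javascript
// Route a scored competitor the way the workflow's IF nodes do.
// Thresholds (0.7 confidence, 75 engagement_score) and the output tab
// names are taken from the template description.
function routeCompetitor(aiResponse) {
  const { engagement_score, confidence } = aiResponse;
  if (typeof engagement_score !== "number" || typeof confidence !== "number") {
    throw new Error("Malformed AI response");
  }
  if (confidence < 0.7) return "low_confidence_engagement";
  return engagement_score >= 75 ? "top_performers" : "engagement_analysis";
}
```

Adjusting the score threshold here mirrors editing the IF node mentioned in the notes.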
by Le Nguyen
How it works
- Runs every morning at 8:00 using the Schedule Trigger.
- Sets a stale_days value and queries Salesforce for Opportunities where Stage_Unchanged_Days__c equals that value and the stage is not Closed Won / Closed Lost.
- For each "stale" Opportunity, loads full deal details and sends them to an OpenAI model.
- The model uses the query_soql tool to pull recent Notes, the primary Contact, and the Opportunity Owner, then returns a single JSON object with: a personalized follow-up email for the client, a short SMS template, a concise Slack summary for the sales team, and a ready-to-use Task payload for Salesforce.
- n8n parses that JSON, sends the email via SMTP, posts the Slack message to your chosen channel, and creates a Salesforce Task assigned to the Opportunity Owner so every stalled deal has a clear next step.

Setup steps
Estimated setup time: ~30–45 minutes if your Salesforce, OpenAI, SMTP and Slack credentials are ready.

1. Create Stage_Unchanged_Days__c on Opportunity (Salesforce)
   - Field Type: Formula (Number, 0 decimal places)
   - Formula: IF( ISBLANK(LastStageChangeDate), TODAY() - DATEVALUE(CreatedDate), TODAY() - DATEVALUE(LastStageChangeDate) )
   - This field tracks how many days the Opportunity has been in the current stage.
2. Connect credentials in n8n
   - Salesforce OAuth2 for the Salesforce nodes and the query_soql HTTP Tool.
   - OpenAI (or compatible) credential for the "Message a model" node.
   - SMTP credential for the customer email node.
   - Slack credential for the internal notification node.
3. Configure your follow-up rules
   - In Edit Fields (Set), set stale_days to the threshold that defines a stalled deal (e.g. 7, 14, 30).
   - In Perform a query, optionally refine the SOQL (record types, owners, minimum amount, etc.) to match your pipeline.
   - Update the Send Email SMTP Customer node with your real "from" address and tweak the wording if needed.
   - Point Send Message To Internal Team (Slack) to the right channel or user.
4. Test safely
   - Turn off the Schedule Trigger and run the workflow manually with a few test Opportunities.
   - Inspect the AI output in Message a model and Parse JSON to confirm the structure (email, sms, slack, task.api_body).
   - Check that the email and Slack messages look good and that Salesforce Tasks are created, assigned to the right Owner, and linked to the correct Opportunity.
5. Go live
   - Re-enable the Schedule Trigger.
   - Monitor the first few days to confirm that follow-ups, Slack alerts, and Tasks all behave as expected, then let the automation quietly keep your pipeline clean and moving.
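The Parse JSON step can be sketched as a validator for the model's single JSON object. The four sections (email, sms, slack, task.api_body) are named in the description; the error wording is illustrative:

```javascript
// Minimal sketch of the Parse JSON step: confirm the model returned the
// four expected sections before fanning out to SMTP, Slack, and Salesforce.
// Field names (email, sms, slack, task.api_body) come from the template.
function parseFollowUp(raw) {
  const data = typeof raw === "string" ? JSON.parse(raw) : raw;
  for (const key of ["email", "sms", "slack", "task"]) {
    if (!(key in data)) throw new Error(`Missing "${key}" in model output`);
  }
  if (!data.task.api_body) throw new Error('Missing "task.api_body"');
  return data;
}
```

Failing fast here is what makes the manual test runs above meaningful: a malformed model response stops the workflow before anything is sent.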
by Cheng Siong Chin
How It Works This workflow automates continuous data integrity monitoring and intelligent alert management across multiple data sources. Designed for data engineers, IT operations teams, and business intelligence analysts, it solves the critical challenge of detecting data anomalies and orchestrating appropriate responses based on severity levels. The system operates on scheduled intervals, fetching data from software metrics APIs and BI dashboards, then merging these sources for comprehensive analysis. It employs AI-powered validation and orchestration agents to detect anomalies, assess severity, and determine optimal response strategies. The workflow intelligently routes alerts based on severity classification, triggering critical notifications via email and Slack for high-priority issues while sending standard reports for routine findings. By maintaining detailed compliance audit logs and preparing executive summaries, it ensures stakeholders receive timely, actionable intelligence while creating audit trails for data quality monitoring initiatives. 
Setup Steps
- Configure the Schedule Data Integrity Check trigger with your monitoring frequency.
- Connect the Workflow Configuration node with data source parameters.
- Set up the Fetch Software Metrics and Fetch BI Dashboard Data nodes with their respective API credentials.
- Configure the Merge Data Sources node for data consolidation logic.
- Connect the Data Validation Agent with OpenAI/Nvidia API credentials for anomaly detection.
- Set up the Orchestration Agent with AI API credentials for severity assessment.
- Configure the Check for Anomalies node with routing conditions.
- Connect the Route by Severity node with classification logic.

Prerequisites
- OpenAI or Nvidia API credentials for AI-powered analysis; API access to software metrics platforms.

Use Cases
- SaaS platforms monitoring service health metrics; e-commerce businesses tracking inventory data quality.

Customization
- Adjust scheduling frequency for monitoring intervals; modify severity thresholds for alert classification.

Benefits
- Reduces mean time to detection by 75%; eliminates manual data quality checks.
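The Route by Severity step can be sketched as a simple mapping from the orchestration agent's label to notification channels. The critical-vs-routine split (email/Slack for high-priority issues, standard reports otherwise) is from the description; the channel strings are illustrative:

```javascript
// Sketch of the Route by Severity node's logic. Severity labels and the
// critical-vs-routine split come from the description; channel names are
// placeholders for the downstream nodes.
function routeBySeverity(severity) {
  const critical = ["critical", "high"];
  if (critical.includes(severity.toLowerCase())) {
    return ["email", "slack"];      // immediate critical notifications
  }
  return ["standard_report"];       // routine findings
}
```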
by Cheng Siong Chin
How It Works
MQTT ingests real-time sensor data from connected devices. The workflow normalizes the values and trains or retrains machine learning models on a defined schedule. An AI agent detects anomalies, validates the results for accuracy, and ensures reliable alerts. Detected issues are then routed to dashboards for visualization and sent via email notifications to relevant stakeholders, enabling timely monitoring and response.

Setup Steps
- MQTT: Configure the broker connection, set topic subscriptions, and verify data flow.
- ML Model: Define the retraining schedule and specify historical data sources for model updates.
- AI Agent: Connect Claude or OpenAI APIs and configure anomaly validation prompts.
- Alerts: Set the dashboard URL and email recipients to receive real-time notifications.

Prerequisites
- MQTT broker credentials; historical training data; OpenAI/Claude API key; dashboard access; email service.

Use Cases
- IoT sensor monitoring; server performance tracking; network traffic anomalies; application log analysis; predictive maintenance alerts.

Customization
- Adjust sensitivity thresholds; swap ML models; modify notification channels; add Slack/Teams integration; customize validation rules.

Benefits
- Reduces detection latency by 95%; eliminates manual monitoring; prevents false alerts; enables rapid incident response; improves system reliability.
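The description says sensor values are normalized before training but does not say how; a common choice, shown here purely as an assumption, is min-max scaling:

```javascript
// Illustrative sketch of the normalization step. Min-max scaling is an
// assumption; the template does not specify the normalization method.
function minMaxNormalize(readings) {
  const min = Math.min(...readings);
  const max = Math.max(...readings);
  if (max === min) return readings.map(() => 0); // flat signal, nothing to scale
  return readings.map((v) => (v - min) / (max - min));
}
```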
by Abdullah
Daily Cyber News Digest to Telegram
Workflow Created By: Abdullah Dilshad 📧 iamabdullahdishad@gmail.com

Stay informed with automated daily summaries. This workflow aggregates cyber news from multiple trusted sources, uses AI to intelligently select and summarize the top 5 most relevant articles, and delivers a clean, concise digest directly to your Telegram chat every morning at 10:00 AM.

What This Workflow Does
- Collects Data: Fetches cybersecurity news from multiple global APIs.
- Filters Noise: Uses AI to discard irrelevant updates.
- Summarizes: Generates short, professional summaries (1–2 sentences).
- Delivers: Automatically sends a formatted digest to Telegram within message length limits.

How It Works
1. Schedule Trigger: Runs automatically every day at 10:00 AM (customizable).
2. News Collection: Fetches articles using the keyword "cyber" from GNews and NewsAPI.
3. Data Processing: Merges articles from both sources into a single, unified dataset.
4. AI-Powered Selection: OpenAI analyzes all fetched articles and intelligently selects the top 5 most relevant cybersecurity stories.
5. Smart Summarization: Each article is condensed into 1–2 clear sentences, including the publication date, source name, and article link.
6. Telegram Delivery: Sends a clean, formatted digest to your specified Telegram chat and ensures the total message length stays under Telegram's 4096-character limit.

Setup Instructions
1. Get API Keys: Sign up for free API keys from GNews.io and NewsAPI.org.
2. Connect Accounts: Add your Telegram and OpenAI credentials in n8n.
3. Configure Telegram: Enter your Telegram Chat ID in the "Send a Text Message" node.
4. Customize the Schedule: Change the trigger time if you prefer delivery at a different hour.

Customization & Use Cases
This workflow is fully reusable and scalable. You can replace the keyword "cyber" to track any topic relevant to your needs.

Example Topics:
- 🤖 Artificial Intelligence (AI)
- 💰 Cryptocurrency & Blockchain
- 🚀 Startups & Venture Capital
- 📱 Consumer Technology
- 🏭 Industry-specific updates

Note: This workflow is designed to be adapted for individual tracking, team updates, or competitor analysis.
by Cheng Siong Chin
How It Works
This workflow automates procurement fraud detection and supplier compliance monitoring for organizations managing complex purchasing operations. Designed for procurement teams, audit departments, and compliance officers, it solves the challenge of identifying fraudulent transactions, contract violations, and supplier misconduct across thousands of purchase orders and vendor relationships.

The system schedules continuous monitoring and generates sample transaction data, then analyzes patterns through dual AI agents: Price Reasonableness validates pricing against market rates, while the Delivery Agent assesses fulfillment performance. The Orchestration Agent then performs a comprehensive risk evaluation, routes findings by severity (critical/high/medium/low), and triggers multi-channel responses: critical issues activate immediate Slack/email alerts with detailed logging; high-priority cases receive escalation workflows; medium/low findings generate routine compliance reports.

By combining AI-powered anomaly detection with intelligent routing and coordinated notifications, organizations reduce fraud losses by 75%, ensure vendor compliance, maintain audit trails, and enable procurement teams to focus on strategic sourcing rather than manual transaction reviews.
Setup Steps
- Connect the Schedule Trigger for monitoring frequency.
- Configure procurement systems with API credentials.
- Add AI model API keys to the Price Reasonableness, Delivery, and Orchestration Agent nodes.
- Define fraud indicators and compliance thresholds in the agent prompts based on company policies.
- Link Slack webhooks for critical and high-priority fraud alerts to procurement and audit teams.
- Connect email credentials for stakeholder notifications and escalation workflows.

Prerequisites
- Procurement system API access, AI service accounts, market pricing databases for benchmarking.

Use Cases
- Invoice fraud detection, bid rigging identification, duplicate payment prevention.

Customization
- Modify agent prompts for industry-specific fraud patterns; adjust risk scoring algorithms.

Benefits
- Reduces fraud losses by 75%; automates compliance monitoring across unlimited transactions.
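One of the listed use cases, duplicate payment prevention, can be sketched deterministically. The invoice field names below are assumptions, not the workflow's actual schema:

```javascript
// Illustrative duplicate-payment check: flag invoices that share vendor,
// amount, and invoice number. Field names (vendor, amount, invoiceNumber)
// are assumptions for the sketch.
function findDuplicatePayments(invoices) {
  const seen = new Map();
  const duplicates = [];
  for (const inv of invoices) {
    const key = `${inv.vendor}|${inv.amount}|${inv.invoiceNumber}`;
    if (seen.has(key)) duplicates.push(inv);
    else seen.set(key, inv);
  }
  return duplicates;
}
```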
by Madame AI
Monitor Clutch categories for new agencies to Slack with BrowserAct and Gemini

Introduction
This workflow automates the discovery of new B2B service providers entering the market. It scrapes a specific category on Clutch.co weekly, standardizes the data using AI, and compares it against a historical database to identify only the fresh "new entrants." These leads are then sent to Slack as a "Hot Alert."

Target Audience
Sales Development Representatives (SDRs), partnership managers, and lead generation agencies looking for new agencies or service providers before their competitors find them.

How it works
1. Scheduling: A Weekly Trigger initiates the scan to ensure regular monitoring of the market.
2. Targeting: A Set node defines the specific Clutch category URL to monitor (e.g., https://clutch.co/developers).
3. Data Extraction: The BrowserAct node runs the "The New Entrant Asset Finder" template. It navigates to the target category and scrapes the current list of companies.
4. Data Cleaning: An AI Agent (using OpenRouter/Gemini) processes the raw scraped data. It fixes formatting issues, such as converting "$10,000+" to integers and splitting "City, Country" strings into separate fields.
5. Staging: The cleaned data is written to a temporary "Second Extraction" sheet in Google Sheets.
6. Change Detection: The workflow retrieves the previous week's data ("Database") and the current week's data. A second AI Agent compares the two lists to identify companies that exist in the new scan but not the old one.
7. Notification: If new companies are found, a Slack node posts a formatted alert with details like "Company Name," "Rate," and "Website."
8. Database Update: The workflow clears the old database and replaces it with the latest scan, establishing a new baseline for the next week's comparison.

How to set up
- Configure Credentials: Connect your BrowserAct, OpenRouter, Google Sheets, and Slack accounts in n8n.
- Prepare BrowserAct: Ensure "The New Entrant Asset Finder" template is active in your BrowserAct library.
- Prepare Google Sheet: Create a Google Sheet with two tabs: Database (first extraction) and Second Extraction.
- Define Target: Open the Clutch Category Link node and paste the URL of the Clutch category you want to track.
- Configure IDs: Update the Google Sheets nodes to point to your specific spreadsheet file and the respective tabs mentioned above.

Google Sheet Headers
To use this workflow, ensure your Google Sheet tabs use the following headers: company_name, website_url, min_project_value_usd, hourly_rate_low, hourly_rate_high, employees_range, city, country, short_description.

Requirements
- BrowserAct Account: Required for scraping. Template: "The New Entrant Asset Finder".
- OpenRouter Account: Required for cleaning data and detecting changes.
- Google Sheets: Acts as the historical database.
- Slack Account: Used for receiving lead alerts.

How to customize the workflow
- Change the Source: Modify the Clutch Category Link and the BrowserAct template to scrape a different directory, such as G2, Capterra, or Upwork.
- Filter Logic: Update the system prompt in the Detect data changes AI node to only alert on companies with a specific hourly rate (e.g., >$100/hr) or employee count.
- Enrichment: Add a Clearbit or Apollo node after the change detection step to find email addresses for the new companies before sending them to Slack.

Need Help?
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- Workflow Guidance and Showcase Video: AI-Powered Lead Finder: Target New & Growing Companies (n8n + AI Tutorial)
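The template delegates cleaning and change detection to AI agents, but both steps can be sketched deterministically. The output keys match the sheet headers above; the raw input field names (min_project_value, location) are assumptions about the scraped shape:

```javascript
// Clean one scraped record: "$10,000+" -> integer, "City, Country" -> fields.
// Input field names (min_project_value, location) are assumptions.
function cleanCompany(raw) {
  const [city, country] = raw.location.split(",").map((s) => s.trim());
  return {
    company_name: raw.company_name,
    min_project_value_usd: parseInt(raw.min_project_value.replace(/[$,+]/g, ""), 10),
    city,
    country,
  };
}

// Diff this week's scan against last week's Database tab, keyed on
// company_name, to find the new entrants.
function findNewEntrants(previous, current) {
  const known = new Set(previous.map((c) => c.company_name));
  return current.filter((c) => !known.has(c.company_name));
}
```

A deterministic diff like this could also complement the AI comparison as a sanity check.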
by Paul Karrmann
This n8n template helps you turn inbound messages into a clean, deduped queue of actionable tickets. It includes Slack and Gmail as ready-to-use examples, but the key idea is the universal intake normalizer: you can plug in other sources later (forms, webhooks, chat tools, other inboxes) as long as you map them into the same normalized schema.

Good to know
- This workflow sends message content to an LLM for classification. Keep sensitive data out of the prompt, and only process messages you are allowed to process.
- Costs depend on message volume and length, so truncation and tight filters matter.

How it works
1. Collect inbound items (Slack and Gmail are included as examples).
2. Normalize each item into one shared JSON format so every source behaves the same.
3. Deduplicate items using a data table so repeats are skipped.
4. Use an AI agent with structured output to score urgency and importance, produce a summary, and draft a reply.
5. Create a Notion ticket for tracking, and optionally notify Slack for high-priority items.

Setup steps
- Connect credentials for Slack, Gmail, Notion, and your LLM provider.
- Choose your Slack channel and set a Gmail filter that keeps volume manageable.
- Select your Notion database and ensure properties match the field mappings.
- Create or select a data table and map the unique ID column for deduplication.
- Adjust the notification threshold and schedule interval to match your workflow.

Requirements
- Slack workspace access (optional if you swap the source)
- Gmail access (optional if you swap the source)
- Notion database for ticket creation
- LLM API credentials

Customising this workflow
- Add new sources by mapping them into the normalizer schema.
- Truncate long messages before the AI step to reduce cost.
- Change categories, scoring, and thresholds to match your operating model.
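The universal intake normalizer can be sketched as one function per source producing the same shape, plus a dedup pass against IDs already in the data table. The exact schema isn't specified in the template, so these field names are illustrative:

```javascript
// Map each source's payload into one shared shape before dedup and scoring.
// Field names (id, source, sender, text) are illustrative, not the
// template's actual schema.
function normalizeSlack(msg) {
  return { id: `slack-${msg.ts}`, source: "slack", sender: msg.user, text: msg.text };
}

function normalizeGmail(mail) {
  return { id: `gmail-${mail.id}`, source: "gmail", sender: mail.from, text: mail.snippet };
}

// Skip items whose unique ID is already stored in the data table.
function dedupe(items, seenIds) {
  return items.filter((item) => !seenIds.has(item.id));
}
```

Any new source (a form, a webhook) only needs its own `normalize*` function; everything downstream stays unchanged.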
by Neloy Barman
Self-Hosted

This workflow provides a complete end-to-end system for capturing, analyzing, and routing customer feedback. By combining local multimodal AI processing with structured data storage, it allows teams to respond to customer needs in real time without compromising data privacy.

Who is this for?
This is designed for Customer Success Managers, Product Teams, and Community Leads who need to automate the triage of high-volume feedback. It is particularly useful for organizations that handle sensitive customer data and prefer local AI processing over cloud-based API calls.

🛠️ Tech Stack
- Tally.so: For front-end feedback collection.
- LM Studio: To host the local AI models (Qwen3-VL).
- PostgreSQL: For persistent data storage and reporting.
- Discord: For real-time team notifications.

✨ How it works
1. Form Submission: The workflow triggers when a new submission is received from Tally.so.
2. Multimodal Analysis: The OpenAI node (pointing to LM Studio) processes the input using the Qwen3-VL model across three specific layers:
   - Sentiment Analysis: Evaluates the text to determine if the customer is Positive, Negative, or Neutral.
   - Zero-Shot Classification: Categorizes the feedback into pre-defined labels based on instructions in the prompt.
   - Vision Processing: Analyzes any attached images to extract descriptive keywords or identify UI elements mentioned in the feedback.
3. Data Storage: The PostgreSQL node logs the user's details, the original message, and all AI-generated insights.
4. AI-Driven Routing: The same Qwen3-VL model makes the routing decision by evaluating the classification results and determining the appropriate path for the data to follow.
5. Discord Notification: The Discord node sends a formatted message to the corresponding channel, ensuring the support team sees urgent issues while the marketing team sees positive testimonials.

📋 Requirements
- LM Studio running a local server on port 1234.
- Qwen3-VL-4B (GGUF) model loaded in LM Studio.
- PostgreSQL instance with a table configured for feedback data.
- Discord Bot Token and specific Channel IDs.

🚀 How to set up
1. Prepare your Local AI: Open LM Studio and download the Qwen3-VL-4B model. Start the Local Server on port 1234 and ensure CORS is enabled. Disable the Require Authentication setting in the Local Server tab.
2. Configure PostgreSQL: Ensure your database is running. Create a table named customer_feedback with columns for name, email_address, feedback_message, image_url, sentiment, category, and img_keywords.
3. Import the Workflow: Import the JSON file into your n8n instance.
4. Link Services: Update the Webhook node with your Tally.so URL. In the Discord nodes, paste the relevant Channel IDs for your #support, #feedback, and #general channels.
5. Test and Activate: Toggle the workflow to Active. Send a test submission through your Tally form and verify the data appears in PostgreSQL and Discord.

🔑 Credential Setup
To run this workflow, you must configure the following credentials in n8n:
- OpenAI API (Local): Create a new OpenAI API credential. API Key: enter any placeholder text (e.g., lm-studio). Base URL: set this to your machine's local IP address (e.g., http://192.168.1.10:1234/v1) to ensure n8n can connect to the local AI server, especially if running within a Docker container.
- PostgreSQL: Create a new PostgreSQL credential. Enter your database Host, Database Name, User, and Password. If using the provided Docker setup, the host is usually db.
- Discord Bot: Create a new Discord Bot API credential. Paste your Bot Token obtained from the Discord Developer Portal.
- Tally: Create a new Tally API credential. Enter your API Key, which you can find in your Tally.so account settings.

⚙️ How to customize
- Refine AI Logic: Update the System Message in the AI node to change classification categories or sentiment sensitivity.
- Switch to Cloud AI: If you prefer not to use a local model, you can swap the LM Studio connection for any 3rd-party API, such as OpenAI (GPT-4o), Anthropic (Claude), or Google Gemini, by updating the node credentials and Base URL.
- Expand Destinations: Add more Discord nodes or integrate Slack to notify different departments based on the AI's routing decision.
- Custom Triggers: Replace the Tally webhook with a Typeform, Google Forms, or a custom Webhook trigger if your collection stack differs.
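Since LM Studio exposes an OpenAI-compatible endpoint, the request the workflow sends can be sketched like this. The host IP, model identifier string, and prompt text are placeholders:

```javascript
// Sketch of the request an OpenAI-compatible node sends to the LM Studio
// server on port 1234 (per the setup notes). Builds the payload only; the
// host IP, model string, and prompt are placeholders.
const LM_STUDIO_URL = "http://192.168.1.10:1234/v1/chat/completions";

function buildFeedbackRequest(feedbackText) {
  return {
    model: "qwen3-vl-4b", // identifier as loaded in LM Studio
    messages: [
      { role: "system", content: "Classify sentiment as Positive, Negative, or Neutral." },
      { role: "user", content: feedbackText },
    ],
    temperature: 0, // deterministic classification
  };
}

// Usage:
// await fetch(LM_STUDIO_URL, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildFeedbackRequest("The new UI is great!")),
// });
```

Switching to a cloud provider, as the customization note suggests, means only changing the URL, model string, and API key.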
by Shreya Bhingarkar
This n8n workflow automates your entire B2B outreach pipeline, from lead discovery to personalized cold email delivery. Submit a form, let Apollo find and enrich your leads, review AI-generated emails in your sheet, and send them all with one click.

How it works
- The Form Trigger accepts Job Title, Location, and Number of Leads to kick off the workflow.
- Apollo searches for matching people and enriches each lead with email, phone, LinkedIn URL, and company data.
- A duplicate check runs automatically to skip any leads already in your sheet.
- Leads are saved to a Google Sheet with outreach status set to Pending.
- A Manual Trigger runs the email generation section, using the Groq LLM to write a personalized cold email per lead.
- Generated emails are saved to the sheet for review before sending.
- Gmail sends each email and updates the outreach status to Mail Sent.

How to use
1. Run Trigger 1 (Form) to scrape and enrich leads from Apollo.
2. Review leads in your Google Sheet.
3. Run Trigger 2 (Manual) to generate and send cold emails.
4. Update the AI Cold Email Writer node with your company details before running.

Requirements
- Apollo account with API Key
- Google Sheets account
- Groq account with API Key
- Gmail account

Customising this workflow
- Replace Groq with OpenAI or any other LLM for email generation.
- Extend with a follow-up sequence to re-engage leads who did not reply.
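The duplicate check can be sketched as matching incoming Apollo leads against emails already present in the sheet. The email field is among the enrichment fields listed above; the row shape is an assumption:

```javascript
// Sketch of the duplicate check: skip Apollo leads whose email already
// appears in the Google Sheet rows. Comparison is case-insensitive since
// email addresses are not case-sensitive in practice.
function skipExistingLeads(newLeads, sheetRows) {
  const existing = new Set(sheetRows.map((row) => row.email.toLowerCase()));
  return newLeads.filter((lead) => !existing.has(lead.email.toLowerCase()));
}
```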
by Wan Dinie
Automated Malaysian Weather Alerts with Perplexity AI, Firecrawl and Telegram

This n8n template automates daily weather monitoring by fetching official government warnings and searching for related news coverage, then delivering comprehensive reports directly to Telegram. Use cases include monitoring severe weather conditions, tracking flood warnings across Malaysian states, staying updated on weather-related news, and receiving automated daily weather briefings for emergency preparedness.

Good to know
- The Firecrawl free tier allows limited scraping requests per hour. Consider the 3-second interval between requests to avoid rate limits.
- OpenAI costs apply for content summarization; GPT-4.1 mini balances quality and affordability.
- After testing multiple AI models (GPT, Gemini), Perplexity Sonar Pro Search proved most effective for finding recent, relevant weather news from Malaysian sources.
- The workflow focuses on major Malaysian news outlets like Utusan, Harian Metro, Berita Harian, and Kosmo.

How it works
1. A Schedule Trigger runs daily at 9 AM to fetch weather warnings from Malaysia's official data.gov.my API.
2. JavaScript code processes the weather data to extract warning types, severity levels, and affected locations.
3. Search queries are aggregated and combined with location information.
4. The Perplexity Sonar Pro AI Agent searches for recent news articles (within 3 days) from Malaysian news channels.
5. URLs are cleaned and processed one by one through a loop to manage API limits.
6. Firecrawl scrapes each news article and extracts summaries from the main content.
7. All summaries and source URLs are combined and sent to OpenAI for final report generation.
8. The polished weather report is delivered to your Telegram channel in English.

How to use
- The schedule trigger is set for 9 AM but can be adjusted to any preferred time.
- Replace the Telegram chat ID with your channel or group ID.
- The workflow automatically filters out "No Advisory" warnings to avoid unnecessary notifications.
- Modify the search query timeout and batch processing based on your API limits.

Requirements
- OpenAI API key (get one at https://platform.openai.com)
- Perplexity API via OpenRouter (get access at https://openrouter.ai)
- Firecrawl API key (get free tier at https://firecrawl.dev)
- Telegram Bot token and channel/group ID

Customizing this workflow
- Expand news sources: Modify the AI Agent prompt to include additional Malaysian news outlets or social media sources.
- Language options: Change the final report language from English to Bahasa Malaysia by updating the "Make a summary" system prompt.
- Alert filtering: Adjust the JavaScript code to focus on specific warning types (e.g., only severe warnings or specific states).
- Storage integration: Connect to Supabase or Google Sheets to maintain a historical database of weather warnings and news.
- Multi-channel delivery: Add more notification nodes to send alerts via email, WhatsApp, or SMS alongside Telegram.
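The "No Advisory" filter in the workflow's JavaScript step can be sketched as below. The warning-record shape is an assumption based on the fields the description says are extracted (warning type, severity, location):

```javascript
// Sketch of the filtering step: drop "No Advisory" entries so only real
// warnings trigger the news search and Telegram report. The record shape
// (warning_type, location) is an assumption.
function activeWarnings(warnings) {
  return warnings.filter(
    (w) => w.warning_type && w.warning_type !== "No Advisory",
  );
}
```

Extending this predicate is also where the "Alert filtering" customization (severe warnings only, specific states) would live.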
by Kumar SmartFlow Craft
🚀 How it works
Handles GDPR Article 15 (access) and Article 17 (erasure) requests end-to-end, from inbound email to legally compliant response, with zero manual intervention and a full audit trail.
- 📬 Monitors a Gmail inbox for incoming data subject requests.
- 🤖 An AI Agent classifies the request (access or erasure) and extracts the requester email and data subject email with structured JSON output.
- 🗄️ Queries Supabase for all personal data records matching the subject.
- 📋 Queries the Airtable CRM for matching contact records.
- 📝 A second AI Agent compiles all found data into a GDPR-compliant HTML report.
- ✉️ Access requests: sends a full data report to the requester.
- 🗑️ Erasure requests: deletes records from both Supabase and Airtable, then sends a deletion confirmation.
- 🔒 Logs every request to Google Sheets with a timestamp for your audit trail.

🛠️ Set up steps
Estimated setup time: ~20 minutes
1. Gmail Trigger: connect Gmail OAuth2 and point it at your DSR inbox.
2. OpenAI: connect an OpenAI API credential (used by both AI Agent nodes).
3. Supabase: connect a Supabase API credential; update the table name from users to match your schema.
4. Airtable: connect an Airtable Personal Access Token; replace YOUR_BASE_ID and YOUR_TABLE_NAME.
5. Google Sheets: connect Google Sheets OAuth2; replace YOUR_AUDIT_SHEET_ID; create a tab named DSR Audit Log.
6. Follow the sticky notes inside the workflow for per-node guidance.

📋 Prerequisites
- Gmail account receiving GDPR requests
- OpenAI API key (GPT-4o)
- Supabase project with a users/contacts table
- Airtable base with a Contacts table containing an Email field
- Google Sheets for audit log

Custom Workflow Request with Personal Dashboard: kumar@smartflowcraft.com, https://www.smartflowcraft.com/contact
More free templates: https://www.smartflowcraft.com/n8n-templates
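Validating the AI Agent's structured JSON classification before touching Supabase or Airtable can be sketched as below. The two request types map to GDPR Articles 15 and 17 per the description; the JSON field names are illustrative:

```javascript
// Sketch of a guard on the classification step: reject anything that is not
// a well-formed access/erasure request before querying or deleting data.
// Field names (request_type, requester_email, subject_email) are
// illustrative, not the workflow's actual schema.
function validateDsr(output) {
  const data = typeof output === "string" ? JSON.parse(output) : output;
  if (!["access", "erasure"].includes(data.request_type)) {
    throw new Error(`Unknown request type: ${data.request_type}`);
  }
  const emailRe = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!emailRe.test(data.requester_email) || !emailRe.test(data.subject_email)) {
    throw new Error("Invalid email in classification output");
  }
  return data;
}
```

Guarding here matters because the erasure branch deletes records; a misclassified request should fail loudly rather than trigger a deletion.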