by Ranjan Dailata
## Who this is for
The Crunchbase B2B Lead Discovery Pipeline is designed for sales teams, B2B marketers, business analysts, and data operations teams who need a reliable way to extract, structure, and summarize company information from Crunchbase to fuel lead generation and market intelligence.

This workflow is ideal for:
- **Sales Development Reps (SDRs)** - Needing structured leads from Crunchbase
- **Marketing Analysts** - Generating segmented outreach lists
- **Growth Teams** - Identifying trending B2B startups
- **RevOps Teams** - Automating company research pipelines
- **Data Teams** - Consolidating insights into Google Sheets for dashboards

## What problem is this workflow solving?
Manual extraction of company data from Crunchbase is time-consuming, inconsistent, and often lacks the contextual summary required for sales enablement or growth targeting. This workflow automates the extraction, transformation, summarization, and delivery of Crunchbase company data into structured formats, making it instantly usable for B2B targeting and analysis.

It solves:
- The difficulty of scaling lead discovery from Crunchbase
- The need to summarize raw textual content for quick insights
- The lack of integration between web scraping, LLM processing, and storage

## What this workflow does
- **Markdown to Textual Data Extractor**: Takes raw scraped markdown from Crunchbase and converts it into readable plain text using a basic LLM chain
- **Structured Data Extraction**: Applies a parsing model (OpenAI) to extract structured fields such as company name, funding rounds, industry tags, location, and founding year
- **Summarization Chain**: Generates an executive summary from the raw Crunchbase text using a summarization prompt template
- **Send to Google Sheets**: Adds the structured data and summary into a Google Sheet for team access and further processing
- **Persist to Disk**: Saves both raw and structured data files locally for archiving or further use
- **Webhook Notification**: Sends a structured payload to a webhook endpoint (e.g., Slack, CRM, internal tools) with lead insights

## Pre-conditions
- You need to have a Bright Data account and complete the setup described in the "Setup" section below.
- You need to have an OpenAI account.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token; the sketch at the end of this template shows the request this credential authorizes.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation - Set Google Sheet Credential
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.

## How to customize this workflow to your needs
- **LLM Prompt Customization**: Modify the extraction prompt to include additional fields like revenue, social links, or leadership team. Adjust the summarization tone (e.g., executive summary, sales-focused snapshot, or marketing digest).
- **File Persistence**: Store raw markdown, extracted JSON, and summary text separately for audit/debug.
- **Webhook Notification**: Connect to a CRM (e.g., HubSpot, Salesforce) via webhook to automatically create leads, or send Slack notifications to alert sales reps when a new high-potential company is discovered.
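For orientation, the Bright Data call behind the scraping step looks roughly like the sketch below. This is a minimal illustration, assuming the standard `api.brightdata.com/request` Web Unlocker endpoint; the zone name and target URL are placeholders that you actually set in the Set URL, Filename and Bright Data Zone node.

```javascript
// Minimal sketch of a Web Unlocker request (zone name and URL are placeholders).
// The Bearer token is the same Web Unlocker token used in the Header Auth credential.
const response = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer XXXXXXXXXXXXXX', // your Web Unlocker token
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',                                   // your Web Unlocker zone name
    url: 'https://www.crunchbase.com/organization/example',  // target company page
    format: 'raw',                                           // return the page body as-is
  }),
});
const markdown = await response.text(); // raw page content fed into the LLM chain
```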
by Wikus Bergh
## Who is this for?
This template is ideal for n8n administrators, automation engineers, and DevOps teams who want to maintain bidirectional synchronization between their n8n workflows and GitHub repositories. It helps teams keep their workflow backups up-to-date and ensures consistency between their n8n instance and version control system.

## What problem is this workflow solving?
Managing workflow versions across n8n and GitHub can become complex when changes happen in both places. This workflow solves that by automatically synchronizing workflows bidirectionally, ensuring that the most recent version is always available in both systems without manual intervention or version conflicts.

## What this workflow does
- Runs on a weekly schedule (every Monday) to check for synchronization needs.
- Fetches all workflows from your n8n instance and compares them with GitHub repository files.
- Identifies workflows that exist only in n8n and uploads them to GitHub as JSON backups.
- Identifies workflows that exist only in GitHub and creates them in your n8n instance.
- For workflows that exist in both places, compares timestamps and syncs the most recent version:
  - If the n8n version is newer → updates GitHub with the latest workflow
  - If the GitHub version is newer → updates n8n with the latest workflow
- Automatically handles file naming, encoding/decoding, and commit messages with timestamps.

## Setup
1. **Connect GitHub**: Configure GitHub API credentials in the GitHub nodes. Note: use a GitHub Personal Access Token (classic) with repo permissions to read and write workflow files.
2. **Connect n8n API**: Provide your n8n API credentials in the n8n nodes. Check this doc.
3. **Configure GitHub details** in the Set GitHub Details node:
   - github_account_name: Your GitHub username or organization
   - github_repo_name: The repository name where workflows should be stored
   - repo_workflows_path: The folder path in your repo (e.g., workflows or n8n-workflows)
4. **Adjust schedule**: Modify the Schedule Trigger if you want a different sync frequency (currently set to weekly on Mondays).
5. **Test the workflow**: Run it manually first to ensure all connections and permissions are working correctly.

## How to customize this workflow to your needs
- **Change sync frequency**: Modify the Schedule Trigger to run daily, hourly, or on-demand.
- **Add filtering**: Extend the Filter node to exclude certain workflows (e.g., test workflows, templates).
- **Add notifications**: Insert Slack, email, or webhook notifications to report sync results.
- **Implement conflict resolution**: Add custom logic for handling workflows with the same timestamp.
- **Add workflow validation**: Include checks to validate workflow JSON before syncing.
- **Branch management**: Modify to sync to different branches or create pull requests instead of direct commits.
- **Backup retention**: Add logic to maintain multiple versions or archive old workflows.

## Key Features
- **Bidirectional sync**: Handles changes from both n8n and GitHub
- **Timestamp-based conflict resolution**: Always keeps the most recent version (sketched below)
- **Automatic file naming**: Converts workflow names to valid filenames
- **Base64 encoding/decoding**: Properly handles JSON workflow data
- **Comprehensive comparison**: Uses dataset comparison to identify differences
- **Automated commits**: Includes timestamps in commit messages for traceability

This automated synchronization workflow provides a robust backup and version control solution for n8n workflows, ensuring your automation assets are always safely stored and consistently available across environments.
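A minimal sketch of the timestamp comparison at the heart of the sync. The field names (`updatedAt`, the commit date) are illustrative; adjust them to the actual n8n API and GitHub responses in your instance.

```javascript
// Decide the sync direction for a workflow that exists in both systems.
// n8nWorkflow.updatedAt would come from the n8n API; githubCommitDate would be
// the last commit date of the workflow's JSON file in the repo (both illustrative).
function resolveSyncDirection(n8nWorkflow, githubCommitDate) {
  const n8nTime = new Date(n8nWorkflow.updatedAt).getTime();
  const gitTime = new Date(githubCommitDate).getTime();

  if (n8nTime > gitTime) return 'push-to-github'; // n8n copy is newer
  if (gitTime > n8nTime) return 'pull-to-n8n';    // GitHub copy is newer
  return 'in-sync';                               // same timestamp: no action
}

// Example:
// resolveSyncDirection({ updatedAt: '2024-05-06T10:00:00Z' }, '2024-05-01T09:00:00Z')
// → 'push-to-github'
```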
by DataMinex
# 📊 Real-Time Flight Data Analytics Bot with Dynamic Chart Generation via Telegram

## 📋 Template Overview
This advanced n8n workflow creates an intelligent Telegram bot that transforms raw CSV flight data into stunning, interactive visualizations. Users can generate professional charts on demand through a conversational interface, making data analytics accessible to anyone via messaging.

**Key Innovation**: Combines real-time data processing, the Chart.js visualization engine, and Telegram's messaging platform to deliver instant business intelligence insights.

## 🎯 What This Template Does
Transform your flight booking data into actionable insights with four powerful visualization types:
- 📊 **Bar Charts**: Top 10 busiest airlines by flight volume
- 🥧 **Pie Charts**: Flight duration distribution (Short/Medium/Long-haul)
- 🍩 **Doughnut Charts**: Price range segmentation with average pricing
- 📈 **Line Charts**: Price trend analysis across flight durations

Each chart includes auto-generated insights, percentages, and key business metrics delivered instantly to users' phones.

## 🏗️ Technical Architecture

### Core Components
- **Telegram Webhook Trigger**: Captures user interactions and button clicks
- **Smart Routing Engine**: Conditional logic for command detection and chart selection
- **CSV Data Pipeline**: File reading → parsing → JSON transformation
- **Chart Generation Engine**: JavaScript-powered data processing with Chart.js
- **Image Rendering Service**: QuickChart API for high-quality PNG generation
- **Response Delivery**: Binary image transmission back to Telegram

### Data Flow Architecture
User Input → Command Detection → CSV Processing → Data Aggregation → Chart Configuration → Image Generation → Telegram Delivery

## 🛠️ Setup Requirements

### Prerequisites
- **n8n instance** (self-hosted or cloud)
- **Telegram Bot Token** from @BotFather
- **CSV dataset** with flight information
- **Internet connectivity** for the QuickChart API

### Dataset Source
This template uses the Airlines Flights Data dataset from GitHub:
📂 Dataset: Airlines Flights Data by Rohit Grewal

### Required Data Schema
Your CSV file should contain these columns:
airline,flight,source_city,departure_time,arrival_time,duration,price,class,destination_city,stops

### File Structure
/data/
└── flights.csv (download from the GitHub dataset above)

## ⚙️ Configuration Steps

### 1. Telegram Bot Setup
1. Create a new bot via @BotFather on Telegram
2. Copy your bot token
3. Configure the Telegram Trigger node with your token
4. Set the webhook URL in your n8n instance

### 2. Data Preparation
1. Download the dataset from Airlines Flights Data
2. Upload the CSV file to /data/flights.csv in your n8n instance
3. Ensure UTF-8 encoding
4. Verify column headers match the dataset schema
5. Test file accessibility from n8n
### 3. Workflow Activation
1. Import the workflow JSON
2. Configure all Telegram nodes with your bot token
3. Test the /start command
4. Activate the workflow

## 🔧 Technical Implementation Details

### Chart Generation Process
Bar Chart Logic:

```javascript
// Aggregate airline counts
const airlineCounts = {};
flights.forEach(flight => {
  const airline = flight.airline || 'Unknown';
  airlineCounts[airline] = (airlineCounts[airline] || 0) + 1;
});

// Generate Chart.js configuration
const chartConfig = {
  type: 'bar',
  data: { labels, datasets },
  options: { responsive: true, plugins: {...} }
};
```

Dynamic Color Schemes:
- **Bar Charts**: Professional blue gradient palette
- **Pie Charts**: Duration-based color coding (light → dark blue)
- **Doughnut Charts**: Price-tier specific colors (green → purple)
- **Line Charts**: Trend-focused red gradient with smooth curves

### Performance Optimizations
- **Efficient Data Processing**: Single-pass aggregations with O(n) complexity
- **Smart Caching**: QuickChart handles image caching automatically
- **Minimal Memory Usage**: Stream processing for large datasets
- **Error Handling**: Graceful fallbacks for missing data fields

### Advanced Features
Auto-Generated Insights:
- Statistical calculations (percentages, averages, totals)
- Trend analysis and pattern detection
- Business intelligence summaries
- Contextual recommendations

User Experience Enhancements:
- Reply keyboards for easy navigation
- Visual progress indicators
- Error recovery mechanisms
- Mobile-optimized chart dimensions (800x600px)

## 📊 Use Cases & Business Applications

### Airlines & Travel Companies
- **Fleet Analysis**: Monitor airline performance and market share
- **Pricing Strategy**: Analyze competitor pricing across routes
- **Operational Insights**: Track duration patterns and efficiency

### Data Analytics Teams
- **Self-Service BI**: Enable non-technical users to generate reports
- **Mobile Dashboards**: Access insights anywhere via Telegram
- **Rapid Prototyping**: Quick data exploration without complex tools

### Business Intelligence
- **Executive Reporting**: Instant charts for presentations
- **Market Research**: Compare industry trends and benchmarks
- **Performance Monitoring**: Track KPIs in real-time

## 🎨 Customization Options

### Adding New Chart Types
1. Create a new Switch condition
2. Add the corresponding data processing node
3. Configure Chart.js options
4. Update the user interface menu

### Data Source Extensions
- Replace CSV with database connections
- Add real-time API integrations
- Implement data refresh mechanisms
- Support multiple file formats

### Visual Customizations

```javascript
// Custom color palette
backgroundColor: ['#your-colors'],

// Advanced styling
borderRadius: 8,
borderSkipped: false,

// Animation effects
animation: { duration: 2000, easing: 'easeInOutQuart' }
```
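Tying the pieces together: once a Chart.js config like the one in the Chart Generation Process above is built (and optionally styled per the options above), it is rendered to a PNG via QuickChart. A minimal sketch, assuming QuickChart's public `POST /chart` endpoint; `chartConfig` is the object from the aggregation step:

```javascript
// Render the Chart.js config to a PNG via QuickChart (sketch).
const res = await fetch('https://quickchart.io/chart', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    chart: chartConfig, // Chart.js configuration built earlier
    width: 800,         // the mobile-friendly dimensions this template uses
    height: 600,
    format: 'png',
  }),
});
const imageBuffer = Buffer.from(await res.arrayBuffer()); // binary sent back to Telegram
```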
## 🔒 Security & Best Practices

### Data Protection
- Validate CSV input format
- Sanitize user inputs
- Implement rate limiting
- Secure file access permissions

### Error Handling
- Graceful degradation for API failures
- User-friendly error messages
- Automatic retry mechanisms
- Comprehensive logging

## 📈 Expected Outputs

### Sample Generated Insights
- "✈️ Vistara leads with 350+ flights, capturing 23.4% market share"
- "📊 Long-haul flights dominate at 61.1% of total bookings"
- "💰 Budget category (₹0-10K) represents 47.5% of all bookings"
- "📈 Average prices peak at ₹14K for 6-8 hour duration flights"

### Performance Metrics
- **Response Time**: <3 seconds for chart generation
- **Image Quality**: 800x600px high-resolution PNG
- **Data Capacity**: Handles 10K+ records efficiently
- **Concurrent Users**: Scales with n8n instance capacity

## 🚀 Getting Started
1. Download the workflow JSON
2. Import into your n8n instance
3. Configure Telegram bot credentials
4. Upload your flight data CSV
5. Test with the /start command
6. Deploy and share with your team

## 💡 Pro Tips
- **Data Quality**: Clean data produces better insights
- **Mobile First**: Charts are optimized for mobile viewing
- **Batch Processing**: Handles large datasets efficiently
- **Extensible Design**: Easy to add new visualization types

Ready to transform your data into actionable insights? Import this template and start generating professional charts in minutes! 🚀
by Ranjan Dailata
## Disclaimer
This template is only available on n8n self-hosted as it's making use of the community node for MCP Client.

## Who this is for?
The Chat Conversations with Bright Data MCP Search Engines & Google Gemini workflow is designed for users who need real-time, AI-enhanced conversations powered by live search engine results.

This workflow is tailored for:
- **Data Analysts** - Who want live, search-based data fused with AI reasoning.
- **Marketing Researchers** - Seeking up-to-the-minute market or competitor insights via conversational AI.
- **Product Managers** - Exploring user needs, market trends, and competitor analysis in real time.
- **AI Developers** - Building dynamic applications that combine live search data with intelligent conversation agents.
- **Growth Hackers** - Who need fast, conversational research tools for campaign ideation, outreach, or content creation.

## What problem is this workflow solving?
Traditional chatbots and AI systems often rely on static, outdated data. This workflow enables AI agents to fetch live search engine data and converse intelligently about it, making interactions dynamic, accurate, and highly contextual.

This workflow solves the major gaps of:
- **Outdated knowledge**: Regular chatbots lack up-to-date information from live web searches.
- **Manual search fatigue**: Manually searching for information and interpreting it is time-consuming.
- **Context bridging**: Connecting search results into meaningful, conversational replies requires human-level reasoning.

## What this workflow does?
1. Accepts a user's conversational query input.
2. Triggers a search request to Bright Data's MCP Search Engines API (Google, Bing, etc.) based on the query.
3. Waits for the search task to complete.
4. Retrieves real-time search results.
5. Feeds the search results and original question into Google Gemini.
6. Generates a human-like, contextually accurate AI response combining live information and conversational flow.
7. Outputs the response back into a chat app.

## Pre-conditions
- Knowledge of Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

## Setup
1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine. Also complete the "Account Setup" as described in the @brightdata/mcp documentation.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect with the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data Web Unlocker API token into the Environments textbox as API_TOKEN=<your-token>.
7. Update the HTTP Request for Webhook Notification node for sending the webhook notification for chat responses.

## How to customize this workflow to your needs
- **Change search engine**: Add or remove the Search Engine MCP tools based upon Bright Data MCP Server updates.
- **Expand outputs**: Send AI chat responses to Slack, Discord, custom chat UIs, WhatsApp, or CRM systems.
- **Store conversation logs** in a database (PostgreSQL, MongoDB, etc.) for future audits or training.
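To complement Setup step 6, here is a rough sketch of the MCP Client (STDIO) credential values. The field names are illustrative, not the exact labels in the n8n-nodes-mcp credential form; check the form itself and the @brightdata/mcp documentation for the authoritative names.

```javascript
// Hypothetical MCP Client (STDIO) credential values (field names are illustrative).
const mcpClientCredential = {
  command: 'npx',                        // launches the MCP server process
  args: '-y @brightdata/mcp',            // Bright Data MCP Server package
  environments: 'API_TOKEN=<your-token>' // Web Unlocker API token, as noted in Setup step 6
};
```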
by Anurag Srivastava
# 🧠 AI Prompt Generator Workflow - n8n Documentation

## Who is this for?
This workflow is for AI builders, prompt engineers, developers, marketers, and no-code creators who want to convert rough user input into structured, high-quality prompts for LLMs. It's especially useful for tools that rely on precision prompting and want to automate the discovery of intent and constraints.

## What problem is this workflow solving? / Use case
Many users struggle to write effective prompts due to vague ideas or unclear formatting needs. This workflow:
- Collects structured user input.
- Dynamically generates clarifying questions.
- Returns a well-formatted AI prompt based on the user's intent and context.

This ensures the generated prompt is useful for downstream AI agents without requiring technical understanding from the end user.

## What this workflow does
1. **Start with a branded form UI**: The user is shown a styled form with questions like:
   - What do you want to build?
   - What tools can you access?
   - What input can be expected?
   - What output do you expect?
2. **Analyze and generate relevant follow-up questions**: The workflow sends the user's answers to Google Gemini (via LangChain), which outputs 1-3 clarifying questions. These questions are parsed into a dynamic form.
3. **Loop through and collect follow-up answers**: Each follow-up question is shown in a form, one at a time, to capture additional context.
4. **Merge all inputs**: The base intent and follow-up responses are merged into a single context block (a rough sketch follows the example below).
5. **Generate a final AI-ready prompt**: The prompt generator node formats everything into a clean, six-section structure: <constraints>, <role>, <inputs>, <tools>, <instructions>, <conclusions>.
6. **Display the final result**: The finished prompt is shown in a clean UI where users can easily copy and reuse it.

## Setup
- **Credentials required**: Google Gemini (PaLM) API credentials (already integrated as Google Gemini(PaLM) Api account 2).
- **Form trigger**: Ensure the On form submission trigger is exposed via a webhook or public endpoint (e.g., using ngrok or a deployed server).
- **Styling**: Custom CSS is included in all form nodes for a beautiful UI. You can modify this to match your branding.
- **Environment**: This workflow is compatible with self-hosted n8n or n8n.cloud. Webhooks must be accessible to users who will fill out the form.

## How to customize this workflow to your needs
- **Change the base questions**: Update the BaseQuestions form node to add or remove fields depending on your use case.
- **Modify Gemini prompts**: You can edit the system prompt inside PromptGenerator to change tone, output structure, or AI instructions.
- **Change prompt formatting**: If you use a different AI agent (like GPT, Claude, or Mistral), adjust the section labels and formatting to suit that agent's expected input.
- **Send results elsewhere**: Add integration nodes after PromptGenerator, such as:
  - Google Docs / Notion (to log prompts)
  - Gmail / Slack (to notify your team)
  - Zapier / Make (to push to other automation flows)
- **Skip follow-up questions (optional)**: If your base form collects all needed info, you can bypass the RelevantQuestions form section by modifying the conditional logic.

## Example Output Prompt (Structure)

```
<role>
You are an AI assistant that converts videos into LinkedIn posts with a witty tone.
</role>
<inputs>
- A short video (max 5 minutes)
- Desired tone: witty
- Style: both summary and quotes
- Audience: general network
</inputs>
<tools>
You do not have access to APIs or web search.
</tools>
<instructions>
1. Parse transcript.
2. Extract insights and quotes.
3. Write an engaging, witty LinkedIn post under 3000 characters.
</instructions>
<constraints>
Avoid technical jargon. No generic intros. Make it platform-native.
</constraints>
<conclusions>
Return a LinkedIn-ready post that starts with a hook and ends with hashtags.
</conclusions>
```
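As a rough illustration of the merge step (step 4 above), here is how the base answers and follow-up responses could be combined into one context block in an n8n Code node. The field names are hypothetical, not the exact ones used in the template:

```javascript
// Hypothetical shapes: baseAnswers from the first form, followUps from the loop.
const baseAnswers = {
  goal: 'Convert videos into LinkedIn posts',
  tools: 'None',
  input: 'A short video (max 5 minutes)',
  output: 'A LinkedIn-ready post',
};
const followUps = [
  { question: 'What tone should the post have?', answer: 'Witty' },
  { question: 'Who is the audience?', answer: 'General network' },
];

// Merge everything into one context block for the prompt generator.
const contextBlock = [
  `Goal: ${baseAnswers.goal}`,
  `Tools: ${baseAnswers.tools}`,
  `Expected input: ${baseAnswers.input}`,
  `Expected output: ${baseAnswers.output}`,
  ...followUps.map(f => `${f.question} ${f.answer}`),
].join('\n');
```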
by NonoCode
## Who is this template for?
This workflow template is designed for teams involved in training management and feedback analysis. It is particularly useful for:
- **HR Departments**: Automating the collection and response to training feedback.
- **Training Managers**: Streamlining the process of handling feedback and ensuring timely follow-up.
- **Corporate Trainers**: Receiving direct feedback and taking actions to improve training sessions.

This workflow offers a comprehensive solution for automating feedback management, ensuring timely responses, and improving the quality of training programs.

## How it works
This workflow operates with an Airtable trigger but can be easily adapted to work with other triggers like webhooks from external applications. Once feedback data is captured, the workflow evaluates the feedback and directs it to the appropriate channel for action. Tasks are created in Usertask based on the feedback rating, and notifications are sent to relevant parties.

Here's a brief overview of this n8n workflow template:
- **Airtable Trigger**: Captures new or updated feedback entries from Airtable.
- **Switch Node**: Evaluates the feedback rating and directs the workflow based on the rating (a sketch of this routing follows below).
- **Webhook**: Retrieves the result of a Usertask task.
- **Task Creation**:
  - Creates tasks in Usertask for poor feedback.
  - Creates follow-up tasks for fair to good feedback.
  - Documents positive feedback and posts recognition on LinkedIn for very good to excellent ratings.
- **Notifications**:
  - Sends email notifications to responsible parties for urgent actions.
  - Sends congratulatory emails and posts on LinkedIn for positive feedback.

## To summarize
- **Flexible Integration**: This workflow can be triggered by various methods like Airtable updates or webhooks from other applications.
- **Automated Task Management**: It creates tasks in Usertask based on feedback ratings to ensure timely follow-up.
- **Multichannel Notifications**: Sends notifications via email and LinkedIn to keep stakeholders informed and recognize successes.
- **Comprehensive Feedback Handling**: Automates the evaluation and response to training feedback, improving efficiency and response time.

## Instructions
1. **Set Up Airtable**: Create a table in Airtable to capture training feedback.
2. **Configure n8n**: Set up the Airtable trigger in n8n to capture new or updated feedback entries.
3. **Set Up Usertask**: Configure the Usertask nodes in n8n to create and manage tasks based on feedback ratings.
4. **Configure Email and LinkedIn Nodes**: Set up the email and LinkedIn nodes to send notifications and post updates.
5. **Test the Workflow**: Run tests to ensure the workflow captures feedback, creates tasks, and sends notifications correctly.

Video: https://youtu.be/U14MhTcpqeY

Remember, this template was created in n8n v1.38.2.
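For illustration, the Switch node's rating-based routing could look like the following Code-node sketch. The numeric 1-5 scale and the branch names are assumptions; the actual thresholds live in the template's Switch node.

```javascript
// Hypothetical routing of feedback by rating (assumes a 1-5 scale).
function routeFeedback(rating) {
  if (rating <= 2) return 'poor';            // create urgent Usertask + email alert
  if (rating === 3) return 'fair-to-good';   // create follow-up task
  return 'very-good-to-excellent';           // congratulate + post on LinkedIn
}

// Example: routeFeedback(5) → 'very-good-to-excellent'
```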
by Trung Tran
# 🔧 IT Voice Support Automation Bot - Telegram Voice Message to JIRA Ticket with OpenAI Whisper

> Automatically process IT support requests submitted via Telegram voice messages by transcribing, extracting structured data, creating a JIRA ticket, and notifying relevant parties.

## 🧑‍💼 Who's it for
- Internal teams that handle IT support but want to streamline voice-based requests.
- Employees who prefer using mobile/voice to report incidents or ask for support.
- Organizations aiming to integrate conversational AI into existing support workflows.

## ⚙️ How it works / What it does
1. A user sends a voice message to a Telegram bot.
2. The system checks whether it's an audio message.
3. If valid, the audio is:
   - Downloaded
   - Transcribed via OpenAI Whisper
   - Backed up to Google Drive
4. The transcription and file metadata are merged.
5. The merged content is processed through an AI Agent (GPT) to extract structured request info.
6. A JIRA ticket is created using the extracted data.
7. The IT team is notified via Slack (or other channels).
8. The requester receives a Telegram confirmation message with the JIRA ticket link.
9. If the input is not audio, a polite rejection message is sent.

## 🔑 Key Features
- Supports voice-based ticket creation
- Accurate transcription using Whisper
- Context-aware request parsing using GPT-4.1 mini
- Fully automated ticket creation in JIRA
- Notifies both IT and the original requester
- Cloud backup of original voice messages (Google Drive)

## 🛠️ Setup Instructions

### Prerequisites
| Component | Required |
|----------|----------|
| Telegram Bot & API Key | ✅ |
| OpenAI Whisper / Transcription Model | ✅ |
| Google Drive Credentials (OAuth2) | ✅ |
| Google Sheets or other storage (optional) | ⬜ |
| JIRA Cloud API Access | ✅ |
| Slack Bot or Webhook | ✅ |

### Workflow Steps
1. **Telegram Voice Message Trigger**: Starts the flow when a user sends a voice message.
2. **Is Audio Message?**: If false → reply "only voice is supported".
3. **Download Audio**: Download the .oga file from Telegram.
4. **Transcribe Audio**: Use OpenAI Whisper to get the text transcript.
5. **Backup to Google Drive**: Upload the original voice file with metadata.
6. **Merge Results**: Combine transcript and metadata.
7. **Pre-process Output**: Clean formatting before AI extraction.
8. **Transcript Processing Agent**: GPT-based agent extracts (see the sketch below):
   - Requester name, department
   - Request title & description
   - Priority & request type
9. **Submit JIRA Request Ticket**: Create a ticket from the AI-extracted data.
10. **Setup Slack / Email / Manual Steps**: Optional internal routing or approvals.
11. **Inform Reporter via Telegram**: Sends a confirmation message with the JIRA ticket link.

## 🔧 How to Customize
- Replace JIRA with Zendesk, GitHub Issues, or other ticketing tools.
- Change Slack to Microsoft Teams or Email.
- Add Notion/Airtable logging.
- Enhance the agent to extract department from user ID or metadata.

## 📦 Requirements
| Integration | Notes |
|-------------|-------|
| Telegram Bot | Used for input/output |
| Google Drive | Audio backup |
| OpenAI GPT + Whisper | Transcript & Extraction |
| JIRA | Ticketing platform |
| Slack | Team notification |

Built with ❤️ using n8n
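A sketch of the structured output the Transcript Processing Agent is expected to produce. The exact field names are illustrative; the fields themselves match the list in step 8 above.

```javascript
// Illustrative shape of the AI-extracted ticket data (field names are assumptions).
const extractedRequest = {
  requesterName: 'Jane Doe',
  department: 'Finance',
  title: 'VPN not connecting on laptop',
  description: 'User reports the VPN client fails with a timeout error since this morning.',
  priority: 'High',          // e.g. Low | Medium | High
  requestType: 'Incident',   // e.g. Incident | Service Request
};

// These fields then map onto the JIRA issue, e.g. title → summary,
// description → description, priority/requestType → issue fields.
```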
by NonoCode
## Who is this template for?
This workflow template is designed for accounting, human resources, and IT project management teams looking to automate the generation of PDF and Word documents. It can be particularly useful for:
- **The accounting department**: for generating invoices in PDF format, thus streamlining the invoicing process and payment tracking.
- **The human resources department**: for creating employment contracts in PDF, simplifying the administrative management of employees.
- **IT project management teams**: for producing Word documents, such as project specifications, to clearly define project requirements and objectives.

Example result in mail

This PDF and Word document generation workflow offers a practical and efficient solution for automating administrative and document-related tasks, allowing teams to focus on higher-value activities.

## How it works
This workflow currently operates with an n8n form, but you can easily replace this form with a webhook triggered by an external application such as AirTable, SharePoint, DocuWare, etc. Once the configuration information is retrieved, we fill the API request body of JSReport. The body is defined at the time of template creation in JSReport (see the example of JSReport usage). We then fetch the generated PDF and send it via email.

Here's a brief overview of this n8n workflow template: Link to n8n workflow template presentation

## To summarize
This workflow integrates with an n8n form, but it's flexible enough to work with various triggering methods, like webhooks from other applications such as AirTable, SharePoint, or DocuWare. After configuring the necessary information, it populates the API request body of JSReport, which defines the template in JSReport. Once the template is populated, it retrieves the PDF and sends it via email. In essence, it streamlines the process of generating PDF documents based on user input and distributing them via email.

## Instructions
1. **Create a JSReport account**: Sign up for a JSReport account to create your PDF template model.
2. **Define the PDF template in JSReport**: Use JSON data from your system to set up the content of your PDF template in JSReport.
3. **Configure the HTTP Request in n8n**: Use the HTTP Request node in n8n to send a request to JSReport. Set the node's body to the JSON data defining your PDF template (a sketch of this request follows below).
4. **Watch the video**: For detailed setup guidance, watch the setup video.

Remember, this template was created in n8n v1.38.2.
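For orientation, the HTTP Request node's call to JSReport looks roughly like this sketch. The server URL, template name, credentials, and data fields are placeholders; it assumes JSReport's standard `POST /api/report` render endpoint.

```javascript
// Sketch of the JSReport render call (server URL, template name, and data are placeholders).
const res = await fetch('https://your-instance.jsreportonline.net/api/report', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Basic ' + Buffer.from('user:password').toString('base64'),
  },
  body: JSON.stringify({
    template: { name: 'invoice' }, // the template defined in JSReport Studio
    data: {                        // JSON data merged into the template
      invoiceNumber: 'INV-2024-001',
      customer: 'ACME Corp',
      total: 1250.0,
    },
  }),
});
const pdfBuffer = Buffer.from(await res.arrayBuffer()); // PDF attached to the email
```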
by vinci-king-01
# Smart Supplier Health Monitor with ScrapeGraphAI Risk Detection and Multi-Channel Alerts

## 🎯 Target Audience
- Procurement managers and directors
- Supply chain risk analysts
- CFOs and financial controllers
- Vendor management teams
- Enterprise risk managers
- Operations managers
- Contract administrators
- Business continuity planners

## 📋 Problem Statement
Manual supplier monitoring is reactive and time-consuming, often missing early warning signs of financial distress that could disrupt your supply chain. This template solves the challenge of proactive supplier health surveillance by automatically monitoring financial indicators, news sentiment, and market conditions to predict supplier risks before they impact your business operations.

## 🔧 How it Works
This workflow automatically monitors your critical suppliers' financial health using AI-powered web scraping, analyzes multiple risk factors, identifies alternative suppliers when needed, and sends intelligent alerts through multiple channels to ensure your procurement team can act quickly on emerging risks.

### Key Components
1. **Weekly Health Check Scheduler** - Automated trigger based on supplier criticality levels
2. **Supplier Database Loader** - Dynamic supplier portfolio management with risk-based monitoring frequency
3. **ScrapeGraphAI Website Analyzer** - AI-powered extraction of financial health indicators from company websites
4. **Financial News Scraper** - Intelligent monitoring of financial news and sentiment analysis
5. **Advanced Risk Scorer** - Industry-adjusted risk calculation with failure probability modeling
6. **Alternative Supplier Finder** - Automated identification and ranking of backup suppliers
7. **Multi-Channel Alert System** - Email, Slack, and API notifications with escalation rules

## 📊 Risk Analysis Specifications
The template performs comprehensive financial health analysis with the following parameters:

| Risk Factor | Weight | Score Impact | Description |
|-------------|--------|--------------|-------------|
| Financial Issues | 40% | +0-24 points | Revenue decline, debt levels, cash flow problems |
| Operational Risks | 30% | +0-18 points | Management changes, restructuring, capacity issues |
| Market Risks | 20% | +0-12 points | Industry disruption, regulatory changes, competition |
| Reputational Risks | 10% | +0-6 points | Negative news, legal issues, public sentiment |

**Industry Risk Multipliers**:
- Technology: 1.1x (higher volatility)
- Manufacturing: 1.0x (baseline)
- Energy: 1.2x (regulatory risks)
- Financial: 1.3x (market sensitivity)
- Logistics: 0.9x (generally stable)

**Risk Levels & Actions**:
- **Critical Risk**: Score ≥ 75 (CEO/CFO escalation, immediate transition planning)
- **High Risk**: Score ≥ 55 (procurement director escalation, backup activation)
- **Medium Risk**: Score ≥ 35 (manager review, increased monitoring)
- **Low Risk**: Score < 35 (standard monitoring)
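Putting those parameters together, the Advanced Risk Scorer's calculation can be sketched as below. This is a minimal illustration of the weighting and multiplier logic; the actual node may estimate each factor's severity differently.

```javascript
// Sketch of the industry-adjusted risk score (point ranges from the table above).
function scoreSupplier(factors, industry) {
  const multipliers = { technology: 1.1, manufacturing: 1.0, energy: 1.2, financial: 1.3, logistics: 0.9 };

  // Each factor is a 0-1 severity estimate, scaled to its maximum point impact.
  const base =
    factors.financial * 24 +    // 40% weight → up to 24 points
    factors.operational * 18 +  // 30% weight → up to 18 points
    factors.market * 12 +       // 20% weight → up to 12 points
    factors.reputational * 6;   // 10% weight → up to 6 points

  const score = base * (multipliers[industry] ?? 1.0);

  if (score >= 75) return { score, level: 'critical' };
  if (score >= 55) return { score, level: 'high' };
  if (score >= 35) return { score, level: 'medium' };
  return { score, level: 'low' };
}

// Example:
// scoreSupplier({ financial: 0.9, operational: 0.6, market: 0.5, reputational: 0.3 }, 'financial')
// → score ≈ 52.3, level 'medium'
```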
## 🏢 Supplier Management Features

| Feature | Critical Suppliers | High Priority | Medium Priority |
|---------|-------------------|---------------|-----------------|
| Monitoring Frequency | Weekly | Bi-weekly | Monthly |
| Risk Threshold | 35+ points | 40+ points | 50+ points |
| Alert Recipients | C-Level + Directors | Directors + Managers | Managers only |
| Alternative Suppliers | 3+ pre-qualified | 2+ identified | 1+ researched |
| Transition Timeline | 24-48 hours | 1-2 weeks | 1-3 months |

## 🛠️ Setup Instructions
Estimated setup time: 25-30 minutes

### Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Gmail account for email alerts (or alternative email service)
- Slack workspace with webhook or bot token
- Supplier database or CRM system API access
- Basic understanding of procurement processes

### Step-by-Step Configuration

#### 1. Configure ScrapeGraphAI Credentials
- Sign up for a ScrapeGraphAI API account
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials with your API key
- Test the connection to ensure proper functionality

#### 2. Set up Email Integration
- Add Gmail OAuth2 credentials in n8n
- Configure sender email and authentication
- Test email delivery with a sample message
- Set up email templates for different risk levels

#### 3. Configure Slack Integration
- Create a Slack webhook URL or bot token
- Add Slack credentials to n8n
- Configure target channels for different alert types
- Customize Slack message formatting and buttons

#### 4. Load Supplier Database
- Update the "Supplier Database Loader" node with your supplier data
- Configure supplier categories, contract values, and criticality levels
- Set monitoring frequencies based on supplier importance
- Add supplier website URLs and contact information

#### 5. Customize Risk Parameters
- Adjust industry risk multipliers for your business context
- Modify risk scoring thresholds based on risk tolerance
- Configure economic factor adjustments
- Set failure probability calculation parameters

#### 6. Configure Alternative Supplier Database
- Populate the alternative supplier database in the "Alternative Supplier Finder" node
- Add supplier ratings, capacities, and specialties
- Configure geographic coverage and certification requirements
- Set suitability scoring parameters

#### 7. Set up Procurement System Integration
- Configure the procurement system webhook endpoint
- Add API authentication credentials
- Test webhook payload delivery
- Set up automated data synchronization

#### 8. Test and Validate
- Run test scenarios with sample supplier data
- Verify ScrapeGraphAI extraction accuracy
- Check risk scoring calculations and thresholds
- Confirm all alert channels are working properly
- Test alternative supplier recommendations

## 📊 Workflow Customization Options

### Modify Risk Analysis
- Add custom risk indicators specific to your industry
- Implement sector-specific economic adjustments
- Configure contract-specific risk factors
- Add ESG (Environmental, Social, Governance) scoring

### Extend Data Sources
- Integrate credit rating agency APIs (Dun & Bradstreet, Experian)
- Add financial database connections (Bloomberg, Reuters)
- Include social media sentiment analysis
- Connect to government regulatory databases

### Enhance Alternative Supplier Management
- Add automated supplier qualification workflows
- Implement dynamic pricing comparison
- Create supplier performance scorecards
- Add geographic risk assessment

### Advanced Analytics
- Implement predictive failure modeling
- Add supplier portfolio optimization
- Create supply chain risk heatmaps
- Generate automated compliance reports

## 📈 Use Cases
- **Supply Chain Risk Management**: Proactive monitoring of supplier financial stability
- **Procurement Optimization**: Data-driven supplier selection and management
- **Business Continuity Planning**: Automated backup supplier identification
- **Financial Risk Assessment**: Early warning system for supplier defaults
- **Contract Management**: Risk-based contract renewal and negotiation
- **Vendor Diversification**: Strategic supplier portfolio management

## 🚨 Important Notes
- Respect ScrapeGraphAI API rate limits and terms of service
- Implement appropriate delays between supplier assessments
- Keep all API credentials secure and rotate them regularly
- Monitor API usage to manage costs effectively
- Ensure compliance with data privacy regulations (GDPR, CCPA)
- Regularly update supplier databases and contact information
- Review and adjust risk parameters based on market conditions
- Maintain confidentiality of supplier financial information

## 🔧 Troubleshooting

### Common Issues
- **ScrapeGraphAI extraction errors**: Check API key validity and rate limits
- **Email delivery failures**: Verify Gmail credentials and permissions
- **Slack notification failures**: Check webhook URL and channel permissions
- **False positive alerts**: Adjust risk scoring thresholds and industry multipliers
- **Missing supplier data**: Verify website URLs and accessibility
- **Alternative supplier errors**: Check supplier database completeness

### Monitoring Best Practices
- Set up workflow execution monitoring and error alerts
- Regularly review and update supplier information
- Monitor API usage and costs across all integrations
- Validate risk scoring accuracy with historical data
- Test disaster recovery and backup procedures

### Support Resources
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Procurement best practices and industry standards
- Financial risk assessment methodologies
- Supply chain management resources and tools
by Ranjan Dailata
## Who this is for?
The Structured Data Extract & Data Mining workflow is crafted for researchers, content analysts, SEO strategists, and AI developers who need to transform semi-structured web data (like markdown content or scraped HTML) into actionable structured datasets.

It is ideal for:
- **Content Analysts** - Organizing and mining large volumes of markdown or HTML content.
- **SEO & Trend Researchers** - Exploring topics by location and category.
- **AI Engineers & NLP Developers** - Looking to automate insight extraction from unstructured inputs.
- **Growth Marketers** - Tracking topic-level trends for strategic campaigns.
- **Automation Specialists** - Streamlining workflows from scrape to storage.

## What problem is this workflow solving?
Extracting insights from markdown or HTML documents typically requires manual review, formatting, and parsing. This becomes unscalable when dealing with large datasets or when a real-time response is needed. Additionally, trend and topic extraction usually involves external tools, custom scripts, and inconsistent formatting.

This workflow solves:
- Automatic text extraction from markdown or structured content.
- Location- and category-based trend mining with semantic grouping.
- AI-driven topic extraction and summarization.
- Real-time notification via webhook with rich structured payloads.
- Persistent storage of mined data to disk for audits or further processing.

## What this workflow does
1. Receives input: sets the URL for the data extraction and analysis.
2. Uses Bright Data's Web Unlocker to extract content from relevant sites.
3. A Markdown/Text Extractor node parses the content into clean plaintext.
4. The cleaned data is passed to Google Gemini to:
   - Identify trends by location and category
   - Extract key topics and themes
   - Format the response into structured JSON
5. The structured insights are sent via Webhook Notification to external systems (e.g., Slack, web apps, Zapier).
6. The final output is saved to disk.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node with the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
- **Update source**: Update the workflow input to read from Google Sheets or Airtable for dynamically tracking multiple brands or topics.
- **Gemini prompt customization**:
  - Extract trends within a custom category (e.g., e-commerce design patterns in the US)
  - Output topics with popularity metrics
  - Structure the output as per your database schema (e.g., [{ topic, trend_score, location }]); a sample payload follows below
- **Webhook output**: Send notifications to:
  - Slack - with AI summaries in rich blocks
  - Internal APIs - for use in dashboards
  - Zapier/Make - for multi-step automation
- **Persistence**: Save output to remote FTP or SFTP storage, Amazon S3, Google Cloud Storage, etc.
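As a concrete illustration of the structured JSON mentioned above, the Gemini output and webhook payload could look like this. Only `topic`, `trend_score`, and `location` come from the schema example in the customization note; the other fields are hypothetical.

```javascript
// Hypothetical example of the structured insights payload
// (source_url, generated_at, and summary are illustrative extras).
const insightsPayload = {
  source_url: 'https://example.com/brand-page',
  generated_at: '2024-05-06T10:00:00Z',
  trends: [
    { topic: 'AI-powered personalization', trend_score: 0.92, location: 'US' },
    { topic: 'One-click checkout',         trend_score: 0.78, location: 'UK' },
  ],
  summary: 'E-commerce design discussion is dominated by AI personalization...',
};
```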
by Ranjan Dailata
## Disclaimer
This template is only available on n8n self-hosted as it's making use of the community node for MCP Client.

## Who this is for?
The Scrape Web Data with Bright Data and MCP Automated AI Agent workflow is built for professionals who need to automate large-scale, intelligent data extraction by utilizing the Bright Data MCP Server and Google Gemini.

This solution is ideal for:
- **Data Analysts** - Who require structured, enriched datasets for analysis and reporting.
- **Marketing Researchers** - Seeking fresh market intelligence from dynamic web sources.
- **Product Managers** - Who want competitive product and feature insights from various websites.
- **AI Developers** - Aiming to feed web data into downstream machine learning models.
- **Growth Hackers** - Looking for high-quality data to fuel campaigns, research, or strategic targeting.

## What problem is this workflow solving?
Manually scraping websites, cleaning raw HTML data, and generating useful insights from it can be slow, error-prone, and non-scalable. This workflow solves these problems by:
- Automating complex web data extraction through Bright Data's MCP Server.
- Reducing the human effort needed for cleaning, parsing, and analyzing unstructured web content.
- Allowing seamless integration into further automation processes.

## What this workflow does?
This n8n workflow performs the following steps:
1. **Trigger**: Start manually.
2. **Input URL(s)**: Specify the URL to scrape.
3. **Web Scraping (Bright Data)**: Use Bright Data's MCP Server tools to scrape the web data in markdown and HTML formats.
4. **Store / Output**: Save results to disk and send a webhook notification.

## Setup
1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect with the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API token into the Environments textbox as API_TOKEN=<your-token>.
7. Update the LinkedIn person and company URL inputs in the workflow.
8. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
9. Update the file name and path to persist the output on disk.

## How to customize this workflow to your needs
- **Different inputs**: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- **Outputs**: Update the webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
by Arunava
This n8n workflow automates replying to Google Play Store reviews using AI. It analyzes each review's sentiment and tone and posts a human-like response, saving time for indie devs, founders, and PMs managing multiple apps.

## 💡 Use Cases
- Respond to reviews at scale without sounding robotic
- Prioritize negative sentiment feedback
- Maintain consistent tone and support messaging
- Free up time for teams to focus on product instead of ops

## 🔧 How it works
1. Uses the Play Store API to fetch new app reviews
2. Filters out reviews that have already been replied to
3. Analyzes sentiment using OpenAI GPT-4o
4. Passes sentiment and review context to an AI Agent node that crafts a reply
5. Replies are posted to the Play Store via the Google API (a sketch of this call follows below)
6. (Optional) Logs the reply to Slack for visibility

## 🛠️ Setup Instructions
(Sticky notes included in the workflow)

### 1. HTTP Request Node
- Replace the package name with your app's package ID
- Add Google Service Account credentials
  - Create one from the Google Cloud Console with access to Play Console
  - Add it to the n8n Credential Manager

### 2. OpenAI Node
- Add your OpenAI API key
  - GPT-4o or GPT-4o mini supported
  - Customize the model or instructions if needed

### 3. AI Agent Node
- Modify the prompt to reflect your app name, tone, and feature set
  - E.g. polite, witty, casual, support-friendly, etc.
  - You can add reply conditions or logic for different types of reviews

### 4. Slack Node (Optional)
- Configure Slack Webhook or OAuth credentials if you want reply logs
- Otherwise, delete the node to simplify the workflow

## ⚡ Requirements
- Google Play Developer Console access
- Google Cloud Project with a service account
- OpenAI account (GPT-4o or mini)
- (Optional) Slack workspace & app for logging

## 🚀 Don't want to set this up yourself?
I'll do it for you. Just drop me an email: imarunavadas@gmail.com

Let's automate the boring stuff so you can focus on growth. 🚀
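For reference, the reply step (step 5 above) boils down to a call to the Android Publisher `reviews.reply` endpoint. A minimal sketch, where the package name, review ID, and access token are placeholders obtained via the service account credential:

```javascript
// Sketch of replying to a Play Store review via the Android Publisher API.
// All three values below are placeholders; the access token comes from the
// Google Service Account credential configured in n8n.
const packageName = 'com.example.app';
const reviewId = 'gp:EXAMPLE_REVIEW_ID';
const accessToken = '<service-account-access-token>';

const res = await fetch(
  `https://androidpublisher.googleapis.com/androidpublisher/v3/applications/${packageName}/reviews/${reviewId}:reply`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      replyText: 'Thanks for the feedback! A fix is on the way in the next release.',
    }),
  }
);
```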