by Ranjan Dailata
## Who this is for
This workflow is designed for:
- Recruiters, talent intelligence teams, and HR tech builders automating resume ingestion.
- Developers and data engineers building ATS (Applicant Tracking System) or CRM data pipelines.
- AI and automation enthusiasts who want to extract structured JSON data from unstructured resume sources (PDFs, DOCs, HTML, or LinkedIn-like URLs).

## What problem this workflow solves
Resumes often arrive in different formats (PDF, DOCX, web profile, etc.) that are difficult to process automatically. Manually extracting fields like candidate name, contact info, skills, and experience wastes time and is prone to human error. This workflow:
- Converts any unstructured resume into the structured JSON Resume format.
- Ensures the output aligns with the JSON Resume Schema.
- Saves the structured result to Google Sheets and local disk for easy tracking and integration with other tools.

## What this workflow does
The workflow automates the entire resume-parsing pipeline:
1. **Trigger** – Starts manually with an Execute Workflow button.
2. **Input Setup** – A Set node defines the `resume_url` (e.g., a hosted resume link).
3. **Resume Content Extraction** – Sends the URL to the Thordata Universal API, which retrieves the web content, cleans HTML/CSS, and extracts structured text and metadata.
4. **Convert HTML → Markdown** – Converts the HTML content into Markdown to prepare it for AI model parsing.
5. **JSON Resume Builder (AI Extraction)** – Sends the Markdown to OpenAI GPT-4.1-mini, which extracts:
   - `basics`: name, email, phone, location
   - `work`: companies, roles, achievements
   - `education`: institutions, degrees, dates
   - `skills`, `projects`, `certifications`, `languages`, and more
   The output adheres to the JSON Resume Schema.
6. **Output Handling** – Saves the final structured resume locally to disk and appends it to a Google Sheet for analytics or visualization.

## Setup

### Prerequisites
- n8n instance (self-hosted or cloud)
- Credentials for:
  - Thordata Universal API (HTTP Bearer token)
  - OpenAI API key
  - Google Sheets OAuth2 integration

First-time Thordata users can sign up for an account to obtain a token.

### Steps
1. Import the provided workflow JSON into n8n.
2. Configure your Thordata Universal API token under Credentials → HTTP Bearer Auth.
3. Connect your OpenAI account under Credentials → OpenAI API.
4. Link your Google Sheets account (used in the "Append or update row in sheet" node).
5. Replace the `resume_url` in the Set node with your own resume file or hosted link.
6. Execute the workflow.

## How to customize this workflow
**Input sources** – Replace the Manual Trigger with:
- A Webhook Trigger to accept resumes uploaded from your website.
- A Google Drive / Dropbox Trigger to process uploaded files automatically.

**Output destinations** – Send results to:
- Notion, Airtable, or Supabase via API nodes.
- Slack / email for recruiter notifications.

**Language model options** – Upgrade from gpt-4.1-mini to gpt-4.1, or use a custom fine-tuned model, for improved accuracy.

## Summary
Unstructured Resume Parser with Thordata Universal API + OpenAI GPT-4.1-mini automates the process of converting messy, unstructured resumes into clean, structured JSON data. It leverages Thordata's Universal API for document ingestion and preprocessing, then uses OpenAI GPT-4.1-mini to extract key fields such as name, contact details, skills, experience, education, and achievements with high accuracy.
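To make the target shape concrete, here is a hypothetical example of the structured output the GPT-4.1-mini step is expected to produce, following the JSON Resume Schema, plus a minimal sanity check you could run in a Code node before appending to Google Sheets. All field values are illustrative, and the validation rules are an assumption, not part of the template.

```javascript
// Illustrative JSON Resume object (values are made up for the example).
const parsedResume = {
  basics: {
    name: "Jane Doe",
    email: "jane@example.com",
    phone: "+1-555-0100",
    location: { city: "Austin", region: "TX" },
  },
  work: [
    {
      name: "Acme Corp",
      position: "Data Engineer",
      startDate: "2021-03",
      highlights: ["Built an ETL pipeline processing 2M records/day"],
    },
  ],
  education: [
    { institution: "State University", studyType: "BSc", area: "Computer Science" },
  ],
  skills: [{ name: "Python" }, { name: "SQL" }],
};

// Minimal sanity check before writing to Sheets: the required top-level
// sections must exist and `basics.email` must at least look like an email.
function isValidResume(r) {
  return Boolean(
    r && r.basics && /.+@.+\..+/.test(r.basics.email || "") &&
    Array.isArray(r.work) && Array.isArray(r.education)
  );
}

console.log(isValidResume(parsedResume)); // true
```

A check like this is useful because LLM output occasionally drops a section; rejecting invalid objects early keeps the Google Sheet clean.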
by Ayis Saliaris Fasseas
## How It Works
1. **Gmail Trigger** – Continuously monitors your Gmail inbox for new messages, captures each email's subject, body, and metadata, and sends the extracted content to the Email Content Classifier.
2. **Email Content Classification** – The Email Content Classifier analyzes the email content using natural language processing and compares the message against predefined Gmail labels:
   - Ads
   - Work
   - Personal
   - Financial
   - Other (fallback label)
   You can add or rename categories to match your specific needs. The classifier uses context, tone, and keywords to determine the most accurate label.
3. **Applying Gmail Labels** – The classification result is sent to the corresponding Gmail label node, which automatically applies the matching label in your inbox. If the classifier cannot confidently match the message, the Other label is used as a fallback.

## Setup Steps
1. **Connect Gmail accounts** – Connect your Gmail account in the Gmail Trigger and in each Gmail label node.
2. **Configure the Email Content Classifier** – Map the incoming Gmail message body to `inputText` and ensure the classifier node has access to a language-model credential (Anthropic or other).
3. **Test the workflow** – Send a few sample emails to yourself to confirm that labels are applied correctly.
4. **Tweak categories if needed** – Adjust category names in the classifier node to match your Gmail labels exactly.

## Customization
- Add or rename categories in the classifier to reflect your specific email types.
- Create a corresponding Gmail label node for each new category.
- Expand or modify categories as your workflow evolves to improve organization and efficiency.

## Use Cases
- Automatic inbox organization and sorting.
- Separation of work, personal, financial, and promotional emails.
- Improved productivity by making important emails easier to locate.
- Custom categorization for specialized workflows.

## Troubleshooting Tips
- Emails not being labeled → check API permissions and message ID references.
- Wrong label assigned → update classifier examples or refine category descriptions.
- Classifier not returning a category → confirm the fallback category "Other" is configured.
- Workflow not triggering → reconnect Gmail Trigger authentication and ensure the workflow is active.
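The fallback behavior described above can be sketched as a small mapping step: translate the classifier's category into a Gmail label ID, defaulting to "Other" whenever the category is missing or unrecognized. The label IDs below are placeholders (every Gmail account has its own), and the function name is illustrative.

```javascript
// Placeholder label IDs — substitute the IDs from your own Gmail account.
const LABEL_IDS = {
  Ads: "Label_1",
  Work: "Label_2",
  Personal: "Label_3",
  Financial: "Label_4",
  Other: "Label_5",
};

// Resolve a classifier category to a label ID, falling back to "Other"
// for unknown or empty categories.
function resolveLabel(category) {
  return LABEL_IDS[category] || LABEL_IDS.Other;
}

console.log(resolveLabel("Work"));    // "Label_2"
console.log(resolveLabel("Spam???")); // "Label_5" (fallback)
```

Centralizing the fallback in one place means a misbehaving model can never leave an email unlabeled.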
by Davide
This workflow automates the process of transforming user-submitted photos (even poor-quality selfies) into professional CV and LinkedIn headshots using the Nano Banana Pro AI model.

*(The original listing includes a before/after image pair: from selfie to CV/LinkedIn headshot.)*

## Key Advantages
1. ✅ **Fully Automated Professional Image Enhancement** – From receiving a photo to delivering a polished LinkedIn-style headshot, the workflow requires zero manual intervention.
2. ✅ **Seamless Telegram Integration** – Users simply send a picture via Telegram; no need to log into dashboards or upload images manually.
3. ✅ **Secure Access Control** – Only the authorized Telegram user can trigger the workflow, preventing unauthorized usage.
4. ✅ **Reliable API Handling with Auto-Polling** – A robust status-checking mechanism waits for the Fal.ai model to finish, automatically retries until the result is ready, and minimizes the chance of failures or partial results.
5. ✅ **Flexible Input Options** – Run the workflow via Telegram, or manually by setting the image URL if no FTP space is available. This makes it usable in multiple environments.
6. ✅ **Dual Storage Output (Google Drive + FTP)** – Processed images are automatically stored in Google Drive (organized and timestamped) and on FTP (ideal for websites, CDN delivery, or automated systems).
7. ✅ **Clean and Professional Output** – Thanks to detailed prompt engineering, the workflow consistently produces realistic headshots with studio-style lighting, clean backgrounds, and professional attire adjustments: perfect for LinkedIn, CVs, or corporate profiles.
8. ✅ **Modular and Easy to Customize** – Each step is isolated and can be modified: change the prompt, replace the storage destination, add extra validation, or adjust resolution and output formats.

## How It Works
The workflow supports two input methods:
- **Telegram Trigger path:** Users send photos via Telegram, which are uploaded via FTP and transformed into professional headshots.
- **Manual Trigger path:** Users trigger the workflow with an image URL, bypassing the Telegram/FTP steps for direct processing.

The core process:
1. Receive an input image (from Telegram or a manual URL).
2. Send the image to Fal.ai's Nano Banana Pro API with specific prompts for professional headshot transformation.
3. Poll the API for completion status, using a conditional check to ensure processing is complete before downloading results.
4. Download the generated image and upload it to both Google Drive and FTP storage.

## Set Up Steps
**Authorization setup:**
- In the "Sanitaze" node, replace the placeholder with your actual Telegram user ID.
- Configure your Fal.ai API key in the "Create Image" node (Header Auth: `Authorization: Key YOURAPIKEY`).
- Set up Google Drive and FTP credentials in their respective nodes.

**Storage configuration:**
- In the "Set FTP params" node, configure:
  - `ftp_path`: your server directory path (e.g., `/public_html/images/`)
  - `base_url`: the corresponding base URL (e.g., `https://website.com/images/`)
- Configure the Google Drive folder ID in the "Upload Image" node.

**Input method selection:**
- For Telegram usage: ensure the Telegram bot is properly configured.
- For manual usage: set the image URL in the "Fix Image Url" node or use the manual trigger.

**API endpoints:** Ensure all Fal.ai API endpoints are correctly configured in the HTTP Request nodes for creating images, checking status, and retrieving results.

**File naming:** Generated files use timestamp-based naming: `yyyyLLddHHmmss-filename.ext`. Output format is PNG at 1K resolution.

The workflow handles the complete pipeline from image submission through AI processing to storage distribution, with proper error handling and status checking throughout.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
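The `yyyyLLddHHmmss` naming pattern uses Luxon tokens (`LL` is the zero-padded month), which is what n8n expressions expose. A plain-JavaScript equivalent, shown here only as an illustration of the format, looks like this:

```javascript
// Build a "yyyyLLddHHmmss-filename.ext" name from a Date.
// Plain-JS sketch of the Luxon format used by the workflow.
function timestampedName(originalName, date = new Date()) {
  const pad = (n) => String(n).padStart(2, "0");
  const stamp =
    date.getFullYear() +
    pad(date.getMonth() + 1) + // months are 0-based in JS
    pad(date.getDate()) +
    pad(date.getHours()) +
    pad(date.getMinutes()) +
    pad(date.getSeconds());
  return `${stamp}-${originalName}`;
}

console.log(timestampedName("selfie.png", new Date(2024, 0, 5, 9, 7, 3)));
// "20240105090703-selfie.png"
```

Timestamp-first names sort chronologically in both Google Drive and FTP listings, which is why the workflow leads with the stamp rather than the original filename.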
by Nitin Garg
Turn discovery-call forms into polished, personalized proposals in seconds. This workflow captures prospect information via Typeform, uses GPT-4 to write compelling proposal content, and automatically creates professional PandaDoc documents with pricing tables.

## Who is this for?
Freelancers, consultants, agencies, and service businesses who want to:
- Stop spending hours writing proposals manually
- Respond to prospects faster with professional documents
- Maintain consistent proposal quality at scale

## What problem does it solve?
Writing proposals is time-consuming and inconsistent. This workflow automates the entire process, from form submission to ready-to-send document, while keeping your proposals personalized and professional.

## How it works
1. A prospect fills out your Typeform discovery questionnaire.
2. The workflow validates required fields (email, company name).
3. AI automatically selects the right template based on budget and project complexity.
4. GPT-4 writes personalized proposal content tailored to the prospect's challenges.
5. GPT-4 generates realistic project milestones.
6. PandaDoc creates a professional document with a pricing table.
7. You receive a Slack notification with a direct link to review.

## Template selection logic
The workflow intelligently routes to the appropriate template:
- **Quick Quote** → budget under $2,500 AND a simple project
- **Standard Proposal** → budget $2,500+ OR a complex project

## What you get
- Personalized proposal content written by GPT-4
- A challenge summary that shows you understand their pain points
- Specific benefits with metrics (time savings, cost savings)
- A 4-phase project timeline with realistic milestones
- A professional PandaDoc document with a pricing table
- Slack notifications for successes and errors

Setup time: 15-20 minutes

## Setup steps
1. Add your OpenAI API credential (select it in the 3 GPT nodes).
2. Add your Typeform API credential.
3. Add your PandaDoc API credential (HTTP Header Auth with your API key).
4. Create a Typeform with discovery questions (see below).
5. Create 2 PandaDoc templates
   (Quick Quote + Standard Proposal).
6. Update the Config node with your company info and template IDs.
7. Add your Slack webhook URL to the Config node.

## Typeform questions needed
Your discovery form should include these fields:
- What's your company name?
- What industry are you in?
- What's your biggest operational challenge?
- What would the ideal solution look like?
- What tools/platforms do you currently use?
- How complex is this project? (Simple / Moderate / Complex)
- What's your estimated budget? (Under $1,500 / $1,500-$2,500 / $2,500-$5,000 / $5,000-$10,000 / $10,000+)
- First name
- Last name
- Email address

## PandaDoc template tokens
Your PandaDoc templates should include these placeholder tokens:
- Client.Company, Client.FirstName, Client.LastName
- Client.ChallengeSummary
- Project.Total, Project.Deposit
- Project.TimeSavings, Project.CostSavings
- Timeline.Phase1-4, Timeline.Phase1Date-4Date
- Impact.Bullet1-3
- Proposal.ExpiryDate, Document.Date

## Requirements
- OpenAI API account (~$0.05-0.15 per proposal)
- Typeform account (free tier works)
- PandaDoc account (API access required)
- Slack workspace (for notifications)

## Good to know
- Processing time: 15-30 seconds per proposal
- All errors are caught and sent to Slack so you can follow up manually
- The Config node centralizes all settings for easy customization
- GPT prompts are editable to match your writing style
- The budget threshold is adjustable (default: $2,500)
- Quick Quote proposals expire in 7 days, Standard in 14 days
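The template-selection rule stated in the listing (Quick Quote when the budget is under $2,500 AND the project is simple, otherwise Standard Proposal) can be sketched as a small routing function. The threshold and complexity labels mirror the listing; the function and return values are illustrative, not the exact node configuration.

```javascript
// Adjustable threshold, matching the listing's default of $2,500.
const BUDGET_THRESHOLD = 2500;

// Route to a template: Quick Quote only for cheap AND simple projects.
function selectTemplate(budgetUsd, complexity) {
  const isSimple = complexity === "Simple";
  return budgetUsd < BUDGET_THRESHOLD && isSimple
    ? "quick_quote"
    : "standard_proposal";
}

console.log(selectTemplate(1500, "Simple"));  // "quick_quote"
console.log(selectTemplate(1500, "Complex")); // "standard_proposal"
console.log(selectTemplate(6000, "Simple"));  // "standard_proposal"
```

Note that the AND/OR asymmetry matters: a cheap but complex project still gets the fuller Standard Proposal.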
by Angel Menendez
## Who it's for
This workflow is ideal for YNAB users who frequently shop on Amazon and want their transaction memos to automatically show itemized purchase details. It's especially helpful for people who import bank transactions into YNAB and want to keep purchase records tidy without manual entry.

## How it works
The workflow triggers on a set schedule, via a webhook, or manually. It retrieves all unapproved transactions from your YNAB budget, filters for Amazon purchases with empty memo fields, and processes each transaction individually. Using Gmail, it searches for matching Amazon emails (within ±5 days of the transaction date) and sends the email data to an AI agent powered by OpenAI. The AI extracts product names and prices, generating a concise memo line (up to 499 characters). If no valid purchase info is found, a fallback message is added instead. A 5-second delay prevents API rate limiting.

## How to set up
1. Connect your YNAB account with valid API credentials.
2. Connect Gmail with OAuth2 authentication.
3. Add your OpenAI (or other LLM) API credentials.
4. Configure the schedule trigger, or use the manual/webhook start.
5. Run the workflow and monitor execution logs in n8n.

## Requirements
- YNAB API credentials
- Gmail OAuth2 connection
- OpenAI API key (or another compatible AI model)

## How to customize
You can change the AI model (e.g., Gemini or Claude) or add HTML-to-Markdown conversion to lower token costs. Adjust the Wait node delay to fit your API rate limits, or widen the email date range for greater accuracy.

**Security note:** Never store or share API keys or personal email data directly in the workflow. Use credential nodes to manage sensitive information securely.
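The ±5-day Gmail search can be sketched as building a Gmail search query around the transaction date (Gmail's `after:`/`before:` operators accept `YYYY/MM/DD`). The `from:amazon.com` sender filter is an assumption for illustration; the actual workflow's query may differ.

```javascript
// Build a Gmail search query spanning ±windowDays around a transaction date.
function amazonSearchQuery(txnDate, windowDays = 5) {
  const fmt = (d) =>
    `${d.getFullYear()}/${String(d.getMonth() + 1).padStart(2, "0")}/${String(
      d.getDate()
    ).padStart(2, "0")}`;
  const from = new Date(txnDate);
  from.setDate(txnDate.getDate() - windowDays);
  const to = new Date(txnDate);
  to.setDate(txnDate.getDate() + windowDays);
  return `from:amazon.com after:${fmt(from)} before:${fmt(to)}`;
}

console.log(amazonSearchQuery(new Date(2024, 5, 15)));
// "from:amazon.com after:2024/06/10 before:2024/06/20"
```

Widening `windowDays` catches delayed order confirmations at the cost of more false matches; the memo itself would then be trimmed to YNAB's limit with something like `memo.slice(0, 499)`.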
by Masaki Go
## About This Template
This workflow turns complex data or topics sent via LINE into beautiful, easy-to-understand infographics. It combines Gemini (to analyze the data and structure the visual layout) with Nano Banana Pro (accessed via the Kie.ai API) to generate high-quality, data-rich graphics (charts, timelines, processes).

## How It Works
1. **Input:** A user sends a topic or data points via LINE (e.g., "Japan's Energy Mix: 20% Solar, 10% Wind...").
2. **Data visualization logic:** Gemini acts as an information designer, deciding the best chart type (pie, bar, flow) and layout for the data.
3. **Render:** Nano Banana generates a professional 3:4 vertical infographic.
4. **Smart polling:** The workflow loops to check the API status every 5 seconds, so it waits exactly as long as needed.
5. **Delivery:** Uploads the image to S3 and sends the visual report back to LINE.

## Who It's For
- Social media managers needing quick visual content.
- Educators and presenters summarizing data.
- Consultants creating quick visual reports on the go.

## Requirements
- **n8n** (Cloud or self-hosted)
- **Kie.ai API key** (Nano Banana Pro)
- **Google Gemini API key**
- **AWS S3 bucket** (public access)
- **LINE Official Account**

## Setup Steps
1. **Credentials:** Configure Header Auth for Kie.ai and your other service credentials.
2. **Webhook:** Add the production URL to the LINE Developers console.
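The smart-polling loop can be sketched as follows. `checkStatus` stands in for the HTTP Request node that queries the Kie.ai task status, and the `status` values ("SUCCESS"/"FAILED") are illustrative, not the API's documented ones.

```javascript
// Wait `ms` milliseconds.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Poll a status function every `intervalMs` until it reports success,
// failing fast on an error status and giving up after `maxTries`.
async function pollUntilDone(checkStatus, intervalMs = 5000, maxTries = 60) {
  for (let i = 0; i < maxTries; i++) {
    const { status, resultUrl } = await checkStatus();
    if (status === "SUCCESS") return resultUrl;         // image is ready
    if (status === "FAILED") throw new Error("Generation failed");
    await sleep(intervalMs);                            // wait and retry
  }
  throw new Error("Timed out waiting for the render");
}
```

In n8n this shape is typically built from a Wait node plus an If node feeding back into the status check; the loop above is the same logic in code form.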
by Websensepro
## Overview
Stop applying manually. This workflow acts as your personal AI recruiter, automating the end-to-end process of finding high-quality jobs, tailoring your resume, and preparing personalized outreach emails to decision-makers.

## What this workflow does
- **Scrapes real-time jobs:** Uses Apify to pull live job listings from LinkedIn based on your specific keywords (e.g., "AI Automation").
- **Smart filtering:** Uses GPT-4o-mini to analyze job descriptions against your skills and automatically discards roles that aren't a good fit.
- **Hyper-personalized resume:** Uses GPT-4o to rewrite your "Master Resume" specifically for the target job description.
- **Document generation:** Creates a new Google Doc with the tailored resume and automatically sets sharing permissions.
- **Decision-maker enrichment:** Uses Anymail Finder to locate the verified email address of the company's CEO or hiring manager.
- **Cold email draft:** Generates a personalized pitch in Gmail (Drafts folder) with a link to your custom resume attached.

## Setup Requirements
To run this workflow, set up credentials in n8n for the following services. Use n8n credentials; do not hardcode API keys into the HTTP nodes:
- **Google Drive & Docs:** To read your master resume and create new application files.
- **Apify account:** To run the LinkedIn Job Scraper actor.
- **OpenAI API key:** For logic (GPT-4o-mini) and writing (GPT-4o).
- **Anymail Finder API:** To find contact email addresses.
- **Gmail:** To create the draft emails.

## How to use
1. **Upload resume:** Paste your "Master Resume" text into the first Google Docs node, or connect your existing file.
2. **Configure credentials:** Add your API keys in the n8n credentials section for all services listed above.
3. **Set search criteria:** Update the JSON body in the Apify node with your desired LinkedIn job-search URL.
4. **Run:** Execute the workflow and watch your Drafts folder fill up with ready-to-send applications.
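The smart-filtering step can be sketched as a short Code-node pass over the scraped listings. Here the per-job `verdict` object (with `goodFit` and `score` fields) is a hypothetical shape for what the GPT-4o-mini node might return; adapt the field names to your own prompt's output.

```javascript
// Scraped jobs, each annotated with a hypothetical LLM verdict.
const jobs = [
  { title: "AI Automation Engineer", verdict: { goodFit: true,  score: 0.86 } },
  { title: "Forklift Operator",      verdict: { goodFit: false, score: 0.05 } },
  { title: "n8n Consultant",         verdict: { goodFit: true,  score: 0.74 } },
];

// Discard poor fits and rank the rest so the best match is tailored first.
const shortlist = jobs
  .filter((j) => j.verdict.goodFit)
  .sort((a, b) => b.verdict.score - a.verdict.score)
  .map((j) => j.title);

console.log(shortlist); // ["AI Automation Engineer", "n8n Consultant"]
```

Filtering before the GPT-4o resume-rewriting step keeps costs down, since the expensive model only runs on roles worth applying to.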
by Matt F.
🎯 Automatically Create and Post Engaging Clips (with Audience Retention Videos) from Podcasts Using AI!

🚀 Effortlessly transform long-form podcast content into highly engaging, viral-ready clips with this end-to-end automation template. Designed for content creators already monetizing on TikTok/YouTube/Instagram/Twitter and those looking to start earning on these platforms, this workflow streamlines extracting highlights, editing clips, and posting to all your social media, maximizing reach while minimizing manual effort.

## Key Features
🔹 **AI-Powered Podcast Highlight Extraction** – Automatically identifies the best moments from any podcast video, ensuring each clip is engaging and shareable.
🔹 **Smart Video Editing & Captioning** – Combines podcast highlights with a copyright-free attention-retainer video (e.g., Minecraft parkour, GTA 5 gameplay) for increased audience retention. Auto-generated captions make clips more dynamic and accessible.
🔹 **Automated Title Generation** – A Large Language Model (LLM) analyzes the clips to generate compelling titles, optimized for each platform's algorithm.
🔹 **Hands-Free Multi-Platform Posting** – Seamlessly schedules and automatically posts clips to your social media accounts at defined intervals, keeping your audience engaged without manual uploads.
🔹 **Fully Automated Workflow** – From video download to content publishing, 100% free: this template eliminates time-consuming video editing and helps you scale your content strategy without paying for multiple subscriptions.

Simply find a podcast you like and an engaging background video (e.g., Minecraft parkour), send their YouTube URLs, and let the automation handle everything: video downloading, audio processing, highlight extraction, editing, captions, and publishing.

## How It Works (Step-by-Step Guide)
1️⃣ **Provide the YouTube URLs**
- One for the main podcast video (where highlights will be extracted).
- One for the background attention-retainer video (e.g., Minecraft parkour, GTA 5 gameplay).

2️⃣ **Automation downloads and processes the videos** – Downloads both videos and extracts audio from the podcast for analysis.

3️⃣ **AI analyzes and extracts key highlights** – Detects the most engaging moments from the podcast.

4️⃣ **Creates fully edited clips** – Merges podcast highlights with the attention-retainer video and generates captions automatically.

5️⃣ **Optimizes for social media** – Uses AI to generate a compelling title for each clip.

6️⃣ **Posts to your social media channels automatically** – Uploads clips at your preferred intervals with zero manual effort.

## Who Is This For?
✅ Content creators already making money on TikTok/YouTube/Instagram/Twitter
✅ People looking to start earning with TikTok/YouTube/Instagram/Twitter
✅ Podcasters wanting to repurpose content into bite-sized, viral clips

## Get Started Today! 🚀
This AI-driven automation is perfect for scaling your TikTok/YouTube/Instagram/Twitter content effortlessly. To use this workflow, you'll just need free accounts on Assembly, Andynocode, and Upload-Posts.

Last update: 4th of December, 2025
by Amit Kumar
## Overview
This workflow automatically generates short-form AI videos using both OpenAI Sora 2 Pro and Google Veo 3.1, enhances your idea with Google Gemini, and publishes content across multiple platforms through Blotato. It's perfect for creators, brands, UGC teams, and anyone building a high-frequency AI video pipeline. You can turn a single text idea into fully rendered videos, compare outputs from multiple AI models, and publish everywhere in one automated flow.

## Good to know
- Generating Sora or Veo videos may incur API costs depending on your provider.
- Video rendering time varies with prompt complexity.
- Sora and Veo availability depends on region and account access.
- Blotato must be connected to your social accounts before publishing.
- The workflow includes toggles so you can turn Sora, Veo, or individual platforms on and off easily.

## How it works
1. Your text idea enters through the Chat Trigger.
2. Google Gemini rewrites your idea into a detailed, high-quality video prompt.
3. The workflow splits into two branches:
   - **Sora branch:** Generates video via OpenAI Sora 2 Pro, downloads the MP4, and uploads/publishes to YouTube, TikTok, and Instagram.
   - **Veo branch:** Generates a video using Google Veo 3.1 (via Wavespeed), retrieves the output link, emails it to you, and optionally uploads it to Blotato for publishing.
4. A Config – Toggles node lets you enable or disable models and platforms.
5. Optional Google Sheets logging can store video history and metadata.

## How to use
1. Send a message to the Chat Trigger to start the workflow.
2. Adjust the toggles to choose Sora, Veo, or both.
3. Add or remove publishing platforms inside the Blotato nodes.
4. Check your email for Veo results, or monitor uploads on your social accounts.

Ideal for automation, batch content creation, and AI-powered video workflows.
## Requirements
- **Google Gemini** API key (for prompt enhancement)
- **OpenAI Sora 2** API key
- **Wavespeed (Veo 3.1)** API key
- **Blotato** account + connected YouTube/TikTok/Instagram channels
- **Gmail OAuth2** (for sending video result emails)
- **Google Sheets** (optional logging)

## Customizing this workflow
- Add a title/description generator for YouTube Shorts.
- Insert a thumbnail generator (image AI model).
- Extend logging with Sheets or a database.
- Add additional platforms supported by Blotato.
- Use different prompt strategies for cinematic, viral, or niche content styles.
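The Config – Toggles step might return something like the object below, which downstream If nodes can check before each branch runs. All field names here are hypothetical; mirror whatever keys your own Config node defines.

```javascript
// Hypothetical toggle configuration for models and platforms.
function getConfig() {
  return {
    enableSora: true,
    enableVeo: true,
    platforms: { youtube: true, tiktok: true, instagram: false },
  };
}

// Branch gating: run a model branch only when its toggle is on.
const cfg = getConfig();
const branches = [];
if (cfg.enableSora) branches.push("sora");
if (cfg.enableVeo) branches.push("veo");

// Publishing targets: keep only platforms switched on.
const targets = Object.entries(cfg.platforms)
  .filter(([, on]) => on)
  .map(([name]) => name);

console.log(branches); // ["sora", "veo"]
console.log(targets);  // ["youtube", "tiktok"]
```

Keeping every toggle in one node means turning a platform off is a single edit rather than disconnecting wires across the canvas.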
by SOLOVIEVA ANNA
## Overview
This workflow automatically reads school-related emails from Gmail, uses AI to understand what each email is about, and then organizes everything into Google Drive and Google Calendar. It classifies messages into schedules, "what to bring" lists, general notices, and contact information; creates calendar events when needed; saves text files in Drive; and sends you a daily reminder email about tomorrow's important events.

**Email Auto-Triage and Organization Hub**

## Who this is for
- Parents or caregivers who get lots of school emails and want everything organized automatically
- Busy families who often forget dates, deadlines, or 持ち物 (things to bring)
- Anyone who wants school communication stored in a structured, searchable way in Drive and Calendar

## How it works
**Trigger: Gmail watch for new emails**
A Gmail Trigger node watches your inbox and starts the workflow whenever a new email arrives. It then loads the full message content (subject, body, metadata).

**AI classification and extraction**
The email text is sent to an AI model, which returns a structured JSON object with:
- `category`: "Schedule", "What to Bring", "Notice", or "Contacts"
- `eventTitle`, `eventDescription`, `eventDate` (ISO format)
- `itemsToBring`, `contacts`, `subject`, `id`, and `hasAttachments`

This turns messy school emails into clean, structured data.

**Routing by category**
A Switch node routes each email based on its category and whether it has attachments:
- Schedule / What to Bring → create a calendar event and also save a notice file
- Notice → save a notice file only
- Any email with attachments → send to the attachment branch for optional photo storage

**Save notices to Google Drive**
For all categorized emails, the workflow creates a text file in Google Drive containing the title, date, category, items to bring, and a short description of the event or notice.
**Create calendar events**
For "Schedule" and "What to Bring" emails, the workflow builds a summary and description (including 持ち物) and creates a Google Calendar event. If no end time is given, it defaults to one hour after the start.

**Save photo attachments (optional)**
If the email has image attachments, the workflow:
- Downloads the attachments from Gmail
- Filters to only image files
- Saves the photos in a specified Google Drive folder, using the original file names

**Extract and archive contact information**
The workflow also pulls out the sender's contact info (From), links it to the email subject and timestamp, and saves it as a separate contact text file in Google Drive for easy reference.

**Daily reminder for tomorrow's events**
Every morning at a set time, a Schedule Trigger runs. It fetches all of tomorrow's events from Google Calendar, filters down to events whose description includes "持ち物", and sends you an email summarizing tomorrow's events and what you need to bring, so you can prepare in advance.

## How to set up
1. Connect your Gmail, Google Calendar, and Google Drive credentials in the respective nodes.
2. In the Workflow Configuration node, set:
   - `photosFolderId` – Drive folder for saved photos
   - `noticesFolderId` – Drive folder for notice text files
   - `contactsFolderId` – Drive folder for contact text files
   - `reminderEmail` – email address that will receive the daily reminder
3. Make sure the Gmail Trigger points to the correct mailbox and polls as often as you like.
4. Confirm that the Google Calendar node uses the calendar where you want school events to appear.
5. Turn the workflow on and test it with a few real school emails (schedules, what to bring, general notices).
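The one-hour default end time, together with a category fallback for the AI's JSON output, can be sketched as a small normalization step. The `category` values and `eventDate` field follow the structure described above; the `eventEndDate` field and the "Notice" fallback are assumptions for illustration.

```javascript
const VALID = ["Schedule", "What to Bring", "Notice", "Contacts"];

// Normalize the AI's JSON: coerce unknown categories to a safe default
// and fill in a missing end time as start + 1 hour.
function normalize(aiOutput) {
  const category = VALID.includes(aiOutput.category) ? aiOutput.category : "Notice";
  const start = new Date(aiOutput.eventDate);
  const end = aiOutput.eventEndDate
    ? new Date(aiOutput.eventEndDate)
    : new Date(start.getTime() + 60 * 60 * 1000); // default: one hour later
  return {
    ...aiOutput,
    category,
    eventDate: start.toISOString(),
    eventEndDate: end.toISOString(),
  };
}

const out = normalize({ category: "Schedule", eventDate: "2024-09-01T09:00:00Z" });
console.log(out.eventEndDate); // "2024-09-01T10:00:00.000Z"
```

Normalizing before the Switch node keeps every downstream branch simple, since they can trust the shape of the data they receive.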
## Customization ideas
- Adjust the AI prompt in Extract Email Info to better match your school's typical email style, or add more categories.
- Change the logic for calendar events (all-day events, different default times, or additional fields such as location).
- Modify file-naming patterns or folder structure in Google Drive (e.g., separate folders per child, per school year, or per class).
- Add logging to Google Sheets for a timeline view of all school communication.
- Forward or mirror important events/notices to other tools such as Slack, Notion, or a family LINE group.
by Avkash Kakdiya
## How it works
This workflow automates the complete employee leave-approval process, from submission to final resolution. Employees submit leave requests through a form; each request is summarized professionally using AI and sent for approval via email. The workflow waits for the approver's response and then either sends an approval confirmation or schedules a clarification discussion automatically. All communication is handled consistently, with no manual follow-ups required.

## Step-by-step
**Step 1: Capture leave request, generate summary, and request approval**
- **On form submission** – Captures employee details, leave dates, reason, and task-handover information.
- **AI Agent** – Generates a professional, manager-ready summary of the leave request.
- **OpenAI Chat Model** – Provides the language model used to generate the summary.
- **Structured Output Parser** – Extracts the email subject and HTML body from the AI response.
- **Send message and wait for response** – Emails the summary to the approver and pauses the workflow until approval or rejection.
- **If** – Routes the workflow based on the approval decision.

**Step 2: Notify employee or schedule discussion automatically**

*Approved path*
- **Send a message** – Sends an official leave-approval email to the employee.

*Clarification or rejection path*
- **Booking Agent** – Determines the next business day and finds the first available 10-minute slot.
- **OpenAI** – Applies scheduling logic to select the earliest valid slot.
- **Get Events** – Fetches existing calendar events to avoid conflicts.
- **Check Availability** – Confirms free time within working hours.
- **Output Parser** – Extracts the final meeting start time.
- **Send a message1** – Emails the employee with the scheduled discussion details.

## Why use this?
- Eliminate manual approval follow-ups and email back-and-forth
- Ensure consistent, professional communication for every leave request
- Automatically handle both approvals and clarification scenarios
- Reduce manager effort with AI-generated summaries
- Schedule discussions without manual calendar coordination
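The Booking Agent's scheduling logic (next business day, first free 10-minute slot, no conflicts, within working hours) can be sketched deterministically. In the workflow an AI agent applies this logic against Google Calendar data; the code below is an equivalent plain-JS sketch, with 9:00-17:00 working hours as an assumption.

```javascript
// Advance to the next weekday (skips Saturday/Sunday).
function nextBusinessDay(from) {
  const d = new Date(from);
  do { d.setDate(d.getDate() + 1); } while (d.getDay() === 0 || d.getDay() === 6);
  return d;
}

// Scan working hours in slot-sized steps; return the first start time
// that does not overlap any existing event, or null if fully booked.
function firstFreeSlot(day, events, slotMin = 10, startHour = 9, endHour = 17) {
  const slotMs = slotMin * 60 * 1000;
  const dayStart = new Date(day); dayStart.setHours(startHour, 0, 0, 0);
  const dayEnd = new Date(day); dayEnd.setHours(endHour, 0, 0, 0);
  for (let t = dayStart.getTime(); t + slotMs <= dayEnd.getTime(); t += slotMs) {
    const clash = events.some(
      (e) => t < e.end.getTime() && t + slotMs > e.start.getTime()
    );
    if (!clash) return new Date(t);
  }
  return null;
}
```

The overlap test (`slotStart < eventEnd && slotEnd > eventStart`) is the standard interval-intersection check, so back-to-back meetings that merely touch a slot boundary do not count as conflicts.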
by vinci-king-01
# Breaking News Aggregator with Telegram and Redis

⚠️ **COMMUNITY TEMPLATE DISCLAIMER:** This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure the ScrapeGraphAI community node is installed in your n8n instance before using this template.

This workflow monitors selected government websites, regulatory bodies, and legal-news portals for new or amended regulations relevant to specific industries. It scrapes the latest headlines, compares them against previously recorded items in Redis, and pushes real-time compliance alerts to a Telegram channel or chat.

## Pre-conditions / Requirements

### Prerequisites
- n8n instance (self-hosted or cloud)
- ScrapeGraphAI community node installed
- Redis server accessible from n8n
- Telegram bot created via BotFather
- (Optional) Cron node if you want fully automated scheduling instead of a manual trigger

### Required Credentials
- **ScrapeGraphAI API key** – enables ScrapeGraphAI scraping functionality
- **Telegram bot token** – allows n8n to send messages via your bot
- **Redis credentials** – host, port, and (if set) password for your Redis instance

### Redis Setup Requirements

| Key Name | Description | Example |
|----------|-------------|---------|
| latestRegIds | Redis Set used to store hashes/IDs of the most recent regulatory articles processed | latestRegIds |

> Hint: Use a dedicated Redis DB (e.g., DB 1) to keep workflow data isolated from other applications.

## How it works

### Key Steps
1. **Manual Trigger / Cron** – Starts the workflow manually or on a set schedule (e.g., daily at 06:00 UTC).
2. **Code (Define Sources)** – Returns an array of URL objects pointing to the regulatory pages to monitor.
3. **SplitInBatches** – Iterates through each source URL in manageable chunks.
4. **ScrapeGraphAI** – Extracts article titles, publication dates, and article URLs from each page.
5. **Merge (Combine Results)** – Consolidates scraped items into a single stream.
6. **If (Deduplication Check)** – Verifies whether each article ID already exists in Redis.
7. **Set (Format Message)** – Creates a human-readable Telegram message string.
8. **Telegram** – Sends the formatted compliance alert to your chosen chat/channel.
9. **Redis (Add New IDs)** – Stores the article ID so it is not sent again in the future.
10. **Sticky Note** – Provides inline documentation on the workflow canvas.

## Set up steps
Setup time: 10-15 minutes

1. **Install community nodes:** In n8n, go to Settings → Community Nodes and install `n8n-nodes-scrapegraphai`.
2. **Create credentials:**
   a. Telegram → Credentials → Telegram API → paste your bot token.
   b. Redis → Credentials → Redis → fill in host, port, password, and DB.
   c. ScrapeGraphAI → Credentials → ScrapeGraphAI API → enter your key.
3. **Configure the "Define Sources" Code node:** Replace the placeholder URLs with the regulatory pages you need to monitor.
4. **Update the Telegram chat ID:** Open any chat with your bot and call `https://api.telegram.org/bot<token>/getUpdates` to find the `chat.id`. Insert this value in the Telegram node.
5. **Adjust frequency:** Replace the Manual Trigger with a Cron node (e.g., daily at 06:00 UTC).
6. **Test the workflow:** Execute once manually; confirm messages appear in Telegram and that Redis keys are created.
7. **Activate:** Enable the workflow so it runs automatically on your schedule.

## Node Descriptions

### Core Workflow Nodes
- **Manual Trigger** – Allows on-demand execution during development and testing.
- **Code (Define Sources)** – Returns an array of page URLs and meta info to the workflow.
- **SplitInBatches** – Prevents overloading websites by scraping in controlled groups.
- **ScrapeGraphAI** – Performs the actual web scraping using an AI-assisted parser.
- **Merge** – Merges data streams from multiple batches into one.
- **If (Check Redis)** – Filters out already-processed articles using Redis SET membership.
- **Set** – Shapes output into a user-friendly Telegram message.
- **Telegram** – Delivers compliance alerts to stakeholders in real time.
- **Redis** – Persists article IDs to avoid duplicate notifications.
- **Sticky Note** – Contains usage tips directly on the canvas.

### Data Flow
Manual Trigger → Code (Define Sources) → SplitInBatches → ScrapeGraphAI
ScrapeGraphAI → Merge → If (Check Redis)
If (true) → Set → Telegram → Redis

## Customization Examples

### Change industries or keywords

```javascript
// Code node snippet
return [
  {
    url: "https://regulator.gov/energy-updates",
    industry: "Energy",
    keywords: ["renewable", "grid", "tariff"]
  },
  {
    url: "https://financewatch.gov/financial-rules",
    industry: "Finance",
    keywords: ["AML", "KYC", "cryptocurrency"]
  }
];
```

### Modify Telegram message formatting

```javascript
// Set node "Parameters → Value"
items[0].json.message = `🛡️ ${$json.industry} Regulation Update\n\n${$json.title}\n${$json.date}\n${$json.url}`;
return items;
```

## Data Output Format
The workflow outputs structured JSON data:

```json
{
  "title": "EU Proposes New ESG Disclosure Rules",
  "date": "2024-04-18",
  "url": "https://europa.eu/legal/eu-proposes-esg-disclosure",
  "industry": "Finance"
}
```

## Troubleshooting

### Common Issues
- **Empty scraped data** – Verify the extraction prompt/selectors in the ScrapeGraphAI node; the website structure may have changed.
- **Duplicate alerts** – Ensure the Redis credentials point to the same DB across nodes; otherwise IDs are not shared.

### Performance Tips
- Limit SplitInBatches to 2-3 URLs at a time if sites implement rate limiting.
- Use environment variables for credentials to simplify migration between stages.

### Pro Tips
- Combine this workflow with n8n's Error Trigger to log failures to Slack or email.
- Maintain a CSV of source URLs in Google Sheets and fetch it dynamically via the Google Sheets node.
- Pair it with the Webhook node to let team members add new sources on the fly.