by Rahul Joshi
📊 Description Automate your HR onboarding process by transforming complex policy PDFs into friendly, structured onboarding videos using AI and HeyGen. 📄🎬 This workflow receives HR policy documents via webhook, extracts and simplifies the content with GPT-based AI, generates a natural script for a HeyGen avatar, renders the onboarding video, checks its status until completion, and finally uploads the finished video to Google Drive. Perfect for HR teams who want scalable, consistent, and engaging onboarding experiences without manual video production. ✨👥 🔁 What This Template Does 1️⃣ Receives an HR policy PDF through a webhook for processing. 🌐 2️⃣ Downloads the PDF and extracts readable text from it. 📄 3️⃣ Uses AI to simplify policy language into structured onboarding guidance. 🤖 4️⃣ Converts structured guidance into a friendly onboarding video script. 🗣️ 5️⃣ Sends the script to HeyGen to generate a video with avatar narration. 🎥 6️⃣ Repeatedly checks the HeyGen API until the video is complete. ⏳ 7️⃣ Downloads the completed video automatically. 📥 8️⃣ Uploads the final onboarding MP4 file into Google Drive. ☁️ 9️⃣ Returns the video file via webhook for further automation or client-side display. 🔁 ⭐ Key Benefits ✅ Converts dense HR documents into engaging onboarding videos ✅ Ensures consistency across all onboarding materials ✅ Reduces manual video scripting and editing workload ✅ Provides warm, friendly, employee-ready onboarding guidance ✅ Fully automated pipeline from PDF → AI script → HeyGen video → Drive ✅ Ideal for remote, hybrid, or fast-scaling HR teams 🧩 Features PDF ingestion via secure webhook Text extraction for accurate AI processing Two-stage AI workflow: policy simplification + script creation Structured JSON enforcement for reliable outputs HeyGen video generation with avatar narration Automated status polling loop Google Drive upload with dynamic file naming End-to-end error handling Webhook response with video delivery 🔐 Requirements Google Drive OAuth2 credentials HeyGen API key OpenAI API key (GPT-4.1-mini or GPT-4o required) Webhook endpoint for PDF uploads Valid avatar ID + voice ID for HeyGen 🎯 Target Audience HR teams onboarding new employees L&D (Learning & Development) teams Companies that want to modernize policy training Fast-growing startups needing scalable onboarding content Agencies creating onboarding videos for clients
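For readers who want to picture step 6️⃣, the status-polling loop boils down to a simple wait-and-retry pattern. Below is a minimal JavaScript sketch of that idea, assuming HeyGen's v1 `video_status.get` endpoint, an `X-Api-Key` header, and a `data.status` / `data.video_url` response shape (verify these against the current HeyGen API docs); in the workflow itself the same logic is built from HTTP Request, Wait, and If nodes rather than code.

```javascript
// Wait-and-retry sketch of the HeyGen status check (assumed endpoint and fields).
async function waitForHeygenVideo(videoId, apiKey, intervalMs = 15000, maxTries = 40) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const res = await fetch(
      `https://api.heygen.com/v1/video_status.get?video_id=${videoId}`,
      { headers: { "X-Api-Key": apiKey } }
    );
    const body = await res.json();
    const status = body?.data?.status;              // e.g. "processing", "completed", "failed"
    if (status === "completed") return body.data.video_url; // URL of the finished MP4
    if (status === "failed") throw new Error("HeyGen rendering failed");
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // pause before retrying
  }
  throw new Error("Timed out waiting for the HeyGen video");
}
```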
by David Ashby
🛠️ Pipedrive Tool MCP Server Complete MCP server exposing all Pipedrive Tool operations to AI agents. Zero configuration needed - all 45 operations pre-built. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community Import this workflow into your n8n instance Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works • MCP Trigger: Serves as your server endpoint for AI agent requests • Tool Nodes: Pre-configured for every Pipedrive Tool operation • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Uses the official n8n Pipedrive Tool node with full error handling 📋 Available Operations (45 total) Every possible Pipedrive Tool operation is included: 🔧 Activity (5 operations) • Create an activity • Delete an activity • Get an activity • Get many activities • Update an activity 💰 Deal (7 operations) • Create a deal • Delete a deal • Duplicate a deal • Get a deal • Get many deals • Search a deal • Update a deal 🔧 Deal Activity (1 operation) • Get many deal activities 🔧 Deal Product (4 operations) • Add a deal product • Get many deal products • Remove a deal product • Update a deal product 📄 File (5 operations) • Create a file • Delete a file • Download a file • Get a file • Update details of a file 🔧 Lead (5 operations) • Create a lead • Delete a lead • Get a lead • Get many leads • Update a lead 🔧 Note (5 operations) • Create a note • Delete a note • Get a note • Get many notes • Update a note 🏢 Organization (6 operations) • Create an organization • Delete an organization • Get an organization • Get many organizations • Search an organization • Update an organization 👥 Person (6 operations) • Create a person • Delete a person • Get a person • Get many people • Search a person • Update a person 🔧 Product (1 operation) • Get many products 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Resource IDs and identifiers • Search queries and filters • Content and data payloads • Configuration options Response Format: Native Pipedrive API responses with full data structure Error Handling: Built-in n8n error management and retry logic 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • Other n8n Workflows: Call MCP tools from any workflow • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Complete Coverage: Every Pipedrive Tool operation available • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n error handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
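To illustrate the $fromAI() placeholders mentioned above, here is how a couple of parameters might look inside one of the pre-built tool nodes; the key names and descriptions below are illustrative examples, not the template's literal values.

```javascript
// Example parameter expressions for a "Get a deal" / "Search a deal" tool node.
// $fromAI(key, description, type) lets the connected AI agent supply the value at runtime.
{
  dealId: "={{ $fromAI('deal_id', 'ID of the Pipedrive deal to fetch', 'number') }}",
  term:   "={{ $fromAI('search_term', 'Text to search deals for', 'string') }}"
}
```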
by David Ashby
Need help? Want access to this workflow + many more paid workflows + live Q&A sessions with a top verified n8n creator? Join the community Complete MCP server exposing 23 AWS Budgets API operations to AI agents. ⚡ Quick Setup Import this workflow into your n8n instance Credentials Add AWS Budgets credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the AWS Budgets API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://budgets.amazonaws.com • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (23 total) 🔧 CreateBudget (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.CreateBudget: Creates a budget and, if included, notifications and subscribers. Important: Only one of BudgetLimit or PlannedBudgetLimits can be present in the syntax at one time. Use the syntax that matches your case. The Request Syntax section shows the BudgetLimit syntax. For PlannedBudgetLimits, see the Examples section. 🔧 CreateBudgetAction (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.CreateBudgetAction: Creates a budget action. 🔧 CreateNotification (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.CreateNotification: Creates a notification. You must create the budget before you create the associated notification. 🔧 CreateSubscriber (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.CreateSubscriber: Creates a subscriber. You must create the associated budget and notification before you create the subscriber. 🔧 DeleteBudget (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DeleteBudget: Deletes a budget. You can delete your budget at any time. Important: Deleting a budget also deletes the notifications and subscribers that are associated with that budget. 🔧 DeleteBudgetAction (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DeleteBudgetAction: Deletes a budget action. 🔧 DeleteNotification (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DeleteNotification: Deletes a notification. Important: Deleting a notification also deletes the subscribers that are associated with the notification. 🔧 DeleteSubscriber (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DeleteSubscriber: Deletes a subscriber. Important: Deleting the last subscriber to a notification also deletes the notification. 🔧 DescribeBudget (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudget: Describes a budget. Important: The Request Syntax section shows the BudgetLimit syntax. For PlannedBudgetLimits, see the Examples section. 🔧 DescribeBudgetAction (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetAction: Describes a budget action detail.
🔧 DescribeBudgetActionHistories (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetActionHistories: Describes a budget action history detail. 🔧 DescribeBudgetActionsForAccount (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetActionsForAccount: Describes all of the budget actions for an account. 🔧 DescribeBudgetActionsForBudget (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetActionsForBudget: Describes all of the budget actions for a budget. 🔧 DescribeBudgetNotificationsForAccount (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetNotificationsForAccount: Lists the budget names and notifications that are associated with an account. 🔧 DescribeBudgetPerformanceHistory (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetPerformanceHistory: Describes the history for DAILY, MONTHLY, and QUARTERLY budgets. Budget history isn't available for ANNUAL budgets. 🔧 DescribeBudgets (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgets: Lists the budgets that are associated with an account. Important: The Request Syntax section shows the BudgetLimit syntax. For PlannedBudgetLimits, see the Examples section. 🔧 DescribeNotificationsForBudget (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeNotificationsForBudget: Lists the notifications that are associated with a budget. 🔧 DescribeSubscribersForNotification (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeSubscribersForNotification: Lists the subscribers that are associated with a notification. 🔧 ExecuteBudgetAction (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.ExecuteBudgetAction: Executes a budget action. 🔧 UpdateBudget (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.UpdateBudget: Updates a budget. You can change every part of a budget except for the budgetName and the calculatedSpend. When you modify a budget, the calculatedSpend drops to zero until Amazon Web Services has new usage data to use for forecasting. Important: Only one of BudgetLimit or PlannedBudgetLimits can be present in the syntax at one time. Use the syntax that matches your case. The Request Syntax section shows the BudgetLimit syntax. For PlannedBudgetLimits, see the Examples section. 🔧 UpdateBudgetAction (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.UpdateBudgetAction: Updates a budget action. 🔧 UpdateNotification (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.UpdateNotification: Updates a notification. 🔧 UpdateSubscriber (1 endpoint) • POST /#X-Amz-Target=AWSBudgetServiceGateway.UpdateSubscriber: Updates a subscriber.
🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native AWS Budgets API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
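The `#X-Amz-Target=...` operation names reflect how the AWS Budgets API works: every operation is a POST to the same endpoint, with the operation selected by the `X-Amz-Target` header. A rough sketch of what one of the HTTP Request nodes effectively sends for DescribeBudgets (request signing is handled by the AWS credentials in n8n; the account ID below is a placeholder):

```javascript
// Illustrative DescribeBudgets call; SigV4 auth headers are added by the credential layer.
const response = await fetch("https://budgets.amazonaws.com/", {
  method: "POST",
  headers: {
    "Content-Type": "application/x-amz-json-1.1",
    "X-Amz-Target": "AWSBudgetServiceGateway.DescribeBudgets",
  },
  body: JSON.stringify({ AccountId: "123456789012", MaxResults: 20 }),
});
const { Budgets } = await response.json(); // array of budget objects
console.log(Budgets?.length, "budgets found");
```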
by David Ashby
Complete MCP server exposing 21 api.clarify.io API operations to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community Import this workflow into your n8n instance Credentials Add api.clarify.io credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the api.clarify.io API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://api.clarify.io/ • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (21 total) 🔧 V1 (21 endpoints) • GET /v1/bundles: Add Media to Track • POST /v1/bundles: Create a bundle • DELETE /v1/bundles/{bundle_id}: Delete a bundle • GET /v1/bundles/{bundle_id}: Get a bundle • PUT /v1/bundles/{bundle_id}: Update a bundle • GET /v1/bundles/{bundle_id}/insights: Get bundle insights • POST /v1/bundles/{bundle_id}/insights: Request an insight to be run • GET /v1/bundles/{bundle_id}/insights/{insight_id}: Get bundle insight • DELETE /v1/bundles/{bundle_id}/metadata: Delete bundle metadata • GET /v1/bundles/{bundle_id}/metadata: Get bundle metadata • PUT /v1/bundles/{bundle_id}/metadata: Update bundle metadata • DELETE /v1/bundles/{bundle_id}/tracks: Delete bundle tracks • GET /v1/bundles/{bundle_id}/tracks: Get bundle tracks • POST /v1/bundles/{bundle_id}/tracks: Add a track for a bundle • PUT /v1/bundles/{bundle_id}/tracks: Update tracks for a bundle • DELETE /v1/bundles/{bundle_id}/tracks/{track_id}: Delete a bundle track • GET /v1/bundles/{bundle_id}/tracks/{track_id}: Get bundle track • PUT /v1/bundles/{bundle_id}/tracks/{track_id}: Add media to a track • GET /v1/reports/scores: Generate Group Report • GET /v1/reports/trends: Generate Trends Report • GET /v1/search: Search Bundles 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native api.clarify.io API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
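To give a sense of what the underlying HTTP Request nodes do, here is a small sketch of the "Get bundle insights" call; the Bearer auth scheme and response handling are assumptions to check against the api.clarify.io documentation and your credential setup.

```javascript
// Hypothetical helper showing the GET /v1/bundles/{bundle_id}/insights call.
async function getBundleInsights(bundleId, apiToken) {
  const res = await fetch(`https://api.clarify.io/v1/bundles/${bundleId}/insights`, {
    headers: { Authorization: `Bearer ${apiToken}` }, // auth scheme is an assumption
  });
  if (!res.ok) throw new Error(`Clarify API error: ${res.status}`);
  return res.json(); // insight results for the bundle
}
```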
by David Ashby
Complete MCP server exposing 18 Bufferapp API operations to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community Import this workflow into your n8n instance Credentials Add Bufferapp credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the Bufferapp API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://api.bufferapp.com/1/ • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (18 total) 🔧 Info (1 endpoint) • GET /info/configuration{mediaTypeExtension}: Get Configuration 🔧 Links (1 endpoint) • GET /links/shares{mediaTypeExtension}: Get Link Shares 🔧 Profiles (7 endpoints) • POST /profiles/{id}/schedules/update{mediaTypeExtension}: Update Profile Schedules • GET /profiles/{id}/schedules{mediaTypeExtension}: Get Profile Schedules • GET /profiles/{id}/updates/pending{mediaTypeExtension}: Get Pending Updates • POST /profiles/{id}/updates/reorder{mediaTypeExtension}: Reorder Profile Updates • GET /profiles/{id}/updates/sent{mediaTypeExtension}: Get Sent Updates • POST /profiles/{id}/updates/shuffle{mediaTypeExtension}: Shuffle Profile Updates • GET /profiles/{id}{mediaTypeExtension}: Get Profile Details 🔧 Profiles{Mediatypeextension} (1 endpoint) • GET /profiles{mediaTypeExtension}: List Profiles 🔧 Updates (7 endpoints) • POST /updates/create{mediaTypeExtension}: Create Status Update • POST /updates/{id}/destroy{mediaTypeExtension}: Delete Status Update • GET /updates/{id}/interactions{mediaTypeExtension}: Get Update Interactions • POST /updates/{id}/move_to_top{mediaTypeExtension}: Move Update to Top • POST /updates/{id}/share{mediaTypeExtension}: Share Update Now • POST /updates/{id}/update{mediaTypeExtension}: Edit Status Update • GET /updates/{id}{mediaTypeExtension}: Get Update Details 🔧 User{Mediatypeextension} (1 endpoint) • GET /user{mediaTypeExtension}: Get User Details 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native Bufferapp API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
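The `{mediaTypeExtension}` segment in the paths above selects the response format, which for the Buffer API is normally `.json`. A small, illustrative helper showing how a few of the listed paths resolve (IDs and the extension are example values):

```javascript
// Illustrative URL construction only; no request is made here.
const base = "https://api.bufferapp.com/1";
const ext = ".json";
const urls = {
  listProfiles: `${base}/profiles${ext}`,                                 // GET
  pendingUpdates: (id) => `${base}/profiles/${id}/updates/pending${ext}`, // GET
  createUpdate: `${base}/updates/create${ext}`,                           // POST
};
console.log(urls.pendingUpdates("PROFILE_ID"));
```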
by Yashraj singh sisodiya
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. ATS Resume Maker Workflow Explanation Aim The aim of the ATS Resume Maker according to JD workflow is to automate the creation of an ATS-friendly resume by tailoring a candidate’s resume to a specific job description (JD). It streamlines the process of aligning resume content with JD requirements, producing a professional, scannable PDF resume that can be stored in Google Drive. Goal The goal is to: Allow users to input their resume (text or PDF) and a JD (PDF) via a web form. Extract and merge the text from both inputs. Use AI to customize the resume, prioritizing JD keywords while maintaining the candidate’s truthful information. Generate a clean, ATS-optimized HTML resume and convert it to a downloadable PDF. Upload the final PDF to Google Drive for easy access. This ensures the resume is optimized for Applicant Tracking Systems (ATS), which are used by employers to screen resumes, by incorporating relevant keywords and maintaining a simple, scannable format. Requirements The workflow relies on specific components and configurations: n8n Platform: The automation tool hosting the workflow. Node Requirements: Form Trigger: A web form to collect user inputs (resume text/PDF, JD PDF). Process one binary file1: JavaScript to rename and organize PDF inputs. Extracting resume1: Extracts text from PDF files. Merge Resume + JD1: Combines resume and JD text into a single string. Customize resume1: Uses Perplexity AI to generate an ATS-friendly HTML resume. HTML format1: Cleans the HTML output by removing newlines. HTML3: Processes HTML for potential display or validation. HTML to PDF: Converts the HTML resume to a PDF file. Upload file: Saves the PDF to a specified Google Drive folder. Credentials: CustomJS account for the HTML-to-PDF conversion API. Google Drive account for file uploads. Perplexity account for AI-driven resume customization. Input Requirements: Resume (plain text or PDF). Job description (PDF). Output: A tailored, ATS-friendly resume in PDF format, uploaded to Google Drive. API Usage The workflow integrates multiple APIs to achieve its functionality: Perplexity API: Used in the Customize resume1 node to leverage the sonar-reasoning model for generating an ATS-optimized HTML resume. The API processes the merged resume and JD text, aligning content with JD keywords while adhering to strict HTML and CSS guidelines (e.g., Arial font, no colors, single-column layout). [Ref: Workflow JSON] CustomJS API: Used in the HTML to PDF node to convert the cleaned HTML resume into a PDF file. This API ensures the resume is transformed into a downloadable format suitable for ATS systems. [Ref: Workflow JSON] Google Drive API: Used in the Upload file node to store the final PDF in a designated Google Drive folder (Resume folder in My Drive). This API handles secure file uploads using OAuth2 authentication. [Ref: Workflow JSON] These APIs are critical for AI-driven customization, PDF generation, and cloud storage, ensuring a seamless end-to-end process. HTML to PDF Conversion The HTML-to-PDF conversion is a key step in the workflow, handled by the HTML to PDF node: Process: The node takes the cleaned HTML resume ($json.cleanedResponse) from the HTML format1 node and uses the @custom-js/n8n-nodes-pdf-toolkit.html2Pdf node to convert it into a PDF.
API: Relies on the CustomJS API for high-fidelity conversion, ensuring the PDF retains the ATS-friendly structure (e.g., no graphics, clear text hierarchy). Output: A binary PDF file passed to the Upload file node. Relevance: This step ensures the resume is in a widely accessible format, suitable for downloading or sharing with employers. The use of a dedicated API aligns with industry practices for HTML-to-PDF conversion, as seen in services like PDFmyURL or PDFCrowd, which offer similar REST API capabilities for converting HTML to PDF with customizable layouts. (Ref: https://pdfmyurl.com/) Download from Community Link The workflow does not explicitly include a community link for downloading the final PDF, but the Upload file node stores the PDF in Google Drive, making it accessible via a shared folder or link. To enable direct downloads, share the uploaded file or its folder publicly in Google Drive and distribute the resulting link. Workflow Summary The ATS Resume Maker according to JD workflow automates the creation of a tailored, ATS-friendly resume by: Collecting user inputs via a web form (Form Trigger). Processing and extracting text from PDFs (Process one binary file1, Extracting resume1). Merging and customizing the content using Perplexity AI (Merge Resume + JD1, Customize resume1). Formatting and converting the resume to PDF (HTML format1, HTML3, HTML to PDF). Uploading the PDF to Google Drive (Upload file). The workflow leverages APIs for AI processing, PDF conversion, and cloud storage, ensuring a professional output optimized for ATS systems. Community sharing can be enabled via Google Drive links or external platforms, as discussed in related web resources. (Ref: https://pdfmyurl.com/)
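To make the "HTML format1" cleanup step more concrete, here is a sketch of what it does conceptually in an n8n Code node: strip any markdown code fences and newlines from the AI's HTML so the PDF converter receives one clean string. The input field name (`output`) is an assumption; `cleanedResponse` matches the field referenced by the HTML to PDF node.

```javascript
// Clean the AI-generated HTML before PDF conversion (field names may differ in your setup).
const raw = $json.output ?? "";          // HTML string returned by the AI node (assumed field)
const cleanedResponse = raw
  .replace(/`{3}(?:html)?/g, "")         // drop stray markdown code fences
  .replace(/\r?\n+/g, " ")               // collapse newlines into single spaces
  .trim();
return [{ json: { cleanedResponse } }];
```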
by Madame AI
Scrape Detailed GitHub Profiles to Google Sheets Using BrowserAct This template is a sophisticated data enrichment and reporting tool that scrapes detailed GitHub user profiles and organizes the information into dedicated, structured reports within a Google Sheet. This workflow is essential for technical recruiters, talent acquisition teams, and business intelligence analysts who need to dive deep into a pre-qualified list of developers to understand their recent activity, repositories, and technical footprint. Self-Hosted Only This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only. How it works The workflow is triggered manually but can be started by a Schedule Trigger or by integrating directly with a candidate sourcing workflow (like the "Source Top GitHub Contributors" template). A Google Sheets node reads a list of target GitHub user profile URLs from a master candidate sheet. The Loop Over Items node processes each user one by one. A Slack notification is sent at the beginning of the loop to announce that the scraping process has started for the user. A BrowserAct node visits the user's GitHub profile URL and scrapes all available data, including profile info, repositories, and social links. A custom Code node (labeled "Code in JavaScript") performs a critical task: it cleans, fixes, and consolidates the complex, raw scraped data into a single, clean JSON object. The workflow then dynamically manages your output. It creates a new sheet dedicated to the user (named after them) and clears it to ensure a fresh report every time. The consolidated data is separated into three paths: main profile data, links, and repositories. Three final Google Sheets nodes then append the structured data to the user's dedicated sheet, creating a clear, multi-section report (User Data, User Links, User Repositories). Requirements BrowserAct API account for web scraping BrowserAct "Scraping GitHub Users Activity & Data" Template BrowserAct "Source Top GitHub Contributors by Language & Location" Template Output BrowserAct n8n Community Node (n8n Nodes BrowserAct) Google Sheets credentials for input (candidate list) and structured output (individual user sheets) Slack credentials for sending notifications Need Help? How to Find Your BrowserAct API Key & Workflow ID How to Connect n8n to BrowserAct How to Use & Customize BrowserAct Templates How to Use the BrowserAct N8N Community Node Workflow Guidance and Showcase GitHub Data Mining: Extracting User Profiles & Repositories with N8N
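The consolidation step ("Code in JavaScript") is the heart of this template; a simplified sketch of the idea is below. All field names are assumptions, so map them to the keys your BrowserAct task actually returns.

```javascript
// Reshape one user's raw scraped payload into profile / links / repositories sections.
const raw = $json;
const profile = {
  username: raw.username ?? "",
  name: raw.name ?? "",
  bio: raw.bio ?? "",
  followers: Number(raw.followers) || 0,
};
const links = (raw.social_links ?? []).map((url) => ({ url }));
const repositories = (raw.repositories ?? []).map((repo) => ({
  name: repo.name ?? "",
  language: repo.language ?? "",
  stars: Number(repo.stars) || 0,
}));
return [{ json: { profile, links, repositories } }];
```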
by Ranjan Dailata
This workflow automatically scrapes Amazon price-drop data via Decodo, extracts structured product details with OpenAI, generates summaries and sentiment insights for each item, and saves everything to Google Sheets — creating a fully automated price-intelligence pipeline. Disclaimer Please note - This workflow is only available on n8n self-hosted as it makes use of the Decodo community node for web scraping. Who this is for This workflow is designed for e-commerce analysts, product researchers, price-tracking teams, and affiliate marketers who want to: Monitor daily Amazon product price drops automatically. Extract key information such as product name, price, discount, and links. Generate AI-driven summaries and sentiment insights on the latest deals. Store all structured data directly in Google Sheets for trend analysis and reporting. What problem this workflow solves This workflow solves the following: Eliminates the need for manual data scraping or tracking. Turns unstructured web data into structured datasets. Adds AI-generated summaries and sentiment analysis for smarter decision-making. Enables automated, daily price intelligence tracking across multiple product categories. What this workflow does This automation combines Decodo’s web scraping, OpenAI GPT-4.1-mini, and Google Sheets to deliver an end-to-end price intelligence system. Trigger & Setup Manually start the workflow. Input your price-drop URL (default: CamelCamelCamel Daily Drops). Web Scraping via Decodo Decodo scrapes the Amazon price-drop listings and extracts product details (title, price, savings, product link). LLM-Powered Data Structuring The extracted content is sent to OpenAI GPT-4.1-mini to format and clean the output into structured JSON fields. Loop & Deep Analysis Each product URL is revisited by Decodo for content enrichment. The AI performs two analyses per product: Summarization: Generates a comprehensive summary of the product. Sentiment Analysis: Detects tone (positive/neutral/negative), sentiment score, and key topics. Aggregation & Storage All enriched results are merged and aggregated. Structured data is automatically appended to a connected Google Sheet. End Result: A ready-to-use dataset showing each price-dropped product, its summary, sentiment polarity, and key highlights, updated in real time. Setup Pre-requisite Please make sure to install the n8n custom node for Decodo. Import and Connect Credentials Import the workflow into your n8n self-hosted instance. Connect: OpenAI API (GPT-4.1-mini) → for summarization and sentiment analysis Decodo API → for real-time price-drop scraping Google Sheets OAuth2 → to save structured results Configure Input Fields In the “Set input fields” node: Update the price_drop_url to your target URL (e.g., https://camelcamelcamel.com/top_drops?t=weekly). Run the Workflow Click “Execute Workflow” or schedule it to run daily to automatically fetch and analyze new price-drop listings. Check Output The aggregated data is saved to a Google Sheet (Pricedrop Info). Each record contains: Product name Current price and savings Product link AI-generated summary Sentiment classification and score How to customize this workflow Change Source Replace the price_drop_url with another CamelCamelCamel or Amazon Deals URL. Add multiple URLs and loop through them for category-based price tracking. Modify Extraction Schema In the Structured Output Parser, modify the JSON schema to include fields like: category, brand, rating, or availability.
Tune AI Prompts Edit the Summarize Content and Sentiment Analysis nodes to: Add tone analysis (e.g., promotional vs. factual). Include competitive product comparison. Integrate More Destinations Replace Google Sheets with: Airtable → for no-code dashboards. PostgreSQL/MySQL → for large-scale storage. Notion or Slack → for instant price-drop alerts. Automate Scheduling Add a Cron Trigger node to run this workflow daily or hourly. Summary This workflow creates a fully automated price intelligence system that: Scrapes Amazon product price drops via Decodo. Extracts structured data with OpenAI GPT-4.1-mini. Generates AI-powered summaries and sentiment insights. Updates a connected Google Sheet with each run.
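Picking up the "Modify Extraction Schema" tip above, here is a sketch (written as a plain JavaScript object) of what an extended schema for the Structured Output Parser could look like; the baseline fields mirror what this workflow already extracts, and the optional ones are the suggested additions.

```javascript
// Illustrative schema only; adjust field names and the required list to your needs.
const productSchema = {
  type: "object",
  properties: {
    product_name: { type: "string" },
    current_price: { type: "string" },
    savings: { type: "string" },
    product_link: { type: "string" },
    // optional enrichment fields:
    category: { type: "string" },
    brand: { type: "string" },
    rating: { type: "number" },
    availability: { type: "string" },
  },
  required: ["product_name", "current_price", "product_link"],
};
```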
by Jose Castillo
This workflow scrapes Google Maps business listings (e.g., carpenters in Tarragona) to extract websites and email addresses — perfect for lead generation, local business prospecting, or agency outreach. 🔧 How it works Manual Trigger – start manually using the “Test Workflow” button. Scrape Google Maps – fetches the HTML from a Google Maps search URL. Extract URLs – parses all business links from the page. Filter Google URLs – removes unwanted Google/tracking links. Remove Duplicates + Limit – keeps unique websites (default: 100). Scrape Site – fetches each website’s HTML. Extract Emails – detects valid email addresses. Filter Out Empties & Split Out – isolates each valid email per site. (Optional) Add to Google Sheet – appends results to your Sheet. 💼 Use cases Local business leads: find emails of carpenters, dentists, gyms, etc., in your city. Agency outreach: collect websites and contact emails to pitch marketing services. B2B prospecting: identify businesses by niche and region for targeted campaigns. 🧩 Requirements n8n instance with HTTP Request and Code nodes enabled. (Optional) Google Sheets OAuth2 credentials. Tip: Add a “Google Sheets → Append Row” node and connect it to your account. 🔒 Security No personal or sensitive data included — only credential references. If sharing this workflow, anonymize the “credentials” field before publishing.
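As an illustration of the "Extract Emails" step, a minimal n8n Code node sketch is shown below: scan each fetched page for email-looking strings and de-duplicate them. The input field name (`data`) and the `website` field are assumptions; adapt them to your HTTP Request output.

```javascript
// Pull unique, lower-cased email addresses out of the scraped HTML.
const html = $json.data ?? "";
const matches = html.match(/[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}/gi) ?? [];
const emails = [...new Set(matches.map((email) => email.toLowerCase()))]
  .filter((email) => !/\.(png|jpg|svg|webp)$/.test(email)); // drop asset-name false positives
return [{ json: { website: $json.website, emails } }];
```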
by Sergey Skorobogatov
📈 AI Stock Analytics & BCS "Profit" Social Network Publishing Workflow This workflow automatically generates stock market insights for selected tickers (e.g. GAZP, SBER, LKOH) using historical data, technical indicators, and an AI model. The results are then sent to Telegram for quick moderation and publishing. 🔑 What this workflow does Runs twice a day on a schedule with a predefined list of tickers. Fetches historical market data from a broker API. Calculates key technical indicators (RSI, EMA/SMA, MACD, Bollinger Bands, ADX). Generates an investment post (title + summary) using an LLM. Stores results in a PostgreSQL database. Sends a draft post to Telegram with inline buttons “Publish” and “Retry”. Handles Telegram actions: publishes the post to the final channel or re-runs the generation process. 📌 Key features Multi-ticker support in a single run. Automatic error handling (e.g. missing data or invalid AI JSON output). Human-in-the-loop moderation through Telegram before publishing. PostgreSQL integration for history and analytics storage. Flexible structure: easy to extend with new tickers, indicators, or publishing channels. 🛠️ Nodes used Trigger: Schedule (twice daily) + Telegram Trigger (button callbacks). Data: HTTP Request (broker API), Function (technical analysis calculations). AI: OpenAI / OpenRouter with structured JSON output. Storage: PostgreSQL (analytics history). Messaging: Telegram (drafts and publishing). 🚀 Who is this for Fintech startups looking to automate market content. Investment bloggers posting daily stock analysis. Analysts experimenting with trading strategies on real market data.
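To give a feel for the Function node that computes the indicators, here is a simplified sketch for two of them (EMA and a basic 14-period RSI) from an array of closing prices; the input field name `candles` is an assumption, and the real workflow derives MACD, Bollinger Bands, and ADX along the same lines.

```javascript
// Simplified EMA and RSI from closing prices (educational sketch, not trading advice).
function ema(values, period) {
  const k = 2 / (period + 1);
  return values.slice(1).reduce((prev, price) => price * k + prev * (1 - k), values[0]);
}
function rsi(closes, period = 14) {
  let gains = 0, losses = 0;
  for (let i = closes.length - period; i < closes.length; i++) {
    const change = closes[i] - closes[i - 1];
    if (change >= 0) gains += change; else losses -= change;
  }
  if (losses === 0) return 100;
  const rs = gains / period / (losses / period);
  return 100 - 100 / (1 + rs);
}
const closes = $json.candles.map((candle) => candle.close); // field name is an assumption
return [{ json: { ema20: ema(closes.slice(-20), 20), rsi14: rsi(closes) } }];
```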
by vinci-king-01
Public Transport Schedule & Delay Tracker with Microsoft Teams and Dropbox ⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template. This workflow automatically scrapes public transport websites or apps for real-time schedules and service alerts, then pushes concise delay notifications to Microsoft Teams while archiving full-detail JSON snapshots in Dropbox. Ideal for commuters and travel coordinators, it keeps riders informed and maintains a historical log of disruptions. Pre-conditions/Requirements Prerequisites n8n instance (self-hosted or n8n.cloud) ScrapeGraphAI community node installed Microsoft Teams incoming webhook configured Dropbox account with an app token created Public transit data source (website or API) that is legally scrapable or offers open data Required Credentials ScrapeGraphAI API Key – enables web scraping Microsoft Teams Webhook URL – posts messages into a channel Dropbox Access Token – saves JSON files to Dropbox Specific Setup Requirements
| Item | Example | Notes |
|------|---------|-------|
| Transit URL(s) | https://mycitytransit.com/line/42 | Must return the schedule or service alert data you need |
| Polling Interval | 5 min | Adjust via Cron node or external trigger |
| Teams Channel | #commuter-updates | Create an incoming webhook in channel settings |
How it works Key Steps: Webhook Trigger: Starts the workflow (can be replaced with Cron for polling). Set Node: Stores target route IDs, URLs, or API endpoints. SplitInBatches: Processes multiple routes one after another to avoid rate limits. ScrapeGraphAI: Scrapes each route page/API and returns structured schedule & alert data. Code Node (Normalize): Cleans & normalizes scraped fields (e.g., converts times to ISO). If Node (Delay Detected?): Compares live data vs. expected timetable to detect delays. Merge Node: Combines route metadata with delay information. Microsoft Teams Node: Sends alert message and rich card to the selected Teams channel. Dropbox Node: Saves the full JSON snapshot to a dated folder for historical reference. StickyNote: Documents the mapping between scraped fields and final JSON structure. Set up steps Setup Time: 15-25 minutes Clone or Import the JSON workflow into your n8n instance. Install the ScrapeGraphAI community node if you haven’t already (Settings → Community Nodes). Open the Set node and enter your target routes or API endpoints (array of URLs/IDs). Configure ScrapeGraphAI: Add your API key in the node’s credentials section. Define CSS selectors or API fields inside the node parameters. Add Microsoft Teams credentials: Paste your channel’s incoming webhook URL into the Microsoft Teams node. Customize the message template (e.g., include route name, delay minutes, reason). Add Dropbox credentials: Provide the access token and designate a folder path (e.g., /TransitLogs/). Customize the If node logic to match your delay threshold (e.g., ≥5 min). Activate the workflow and trigger via the webhook URL, or add a Cron node (every 5 min).
Node Descriptions Core Workflow Nodes: Webhook – External trigger for on-demand checks or recurring scheduler. Set – Defines static or dynamic variables such as route list and thresholds. SplitInBatches – Iterates through each route to control request volume. ScrapeGraphAI – Extracts live schedule and alert data from transit websites/APIs. Code (Normalize) – Formats scraped data, merges dates, and calculates delay minutes. If (Delay Detected?) – Branches the flow based on presence of delays. Merge – Re-assembles metadata with computed delay results. Microsoft Teams – Sends formatted notifications to Teams channels. Dropbox – Archives complete JSON payloads for auditing and analytics. StickyNote – Provides inline documentation for maintainers. Data Flow:
Webhook → Set → SplitInBatches → ScrapeGraphAI → Code (Normalize) → If (Delay Detected?)
├─ true → Merge → Microsoft Teams → Dropbox
└─ false → Dropbox
Customization Examples Change to Slack instead of Teams:
// Replace the Microsoft Teams node with a Slack node
{ "text": "🚊 ${$json.route} is delayed by ${$json.delay} minutes.", "channel": "#commuter-updates" }
Filter only major delays (>10 min):
// In the If node, use:
return $json.delay >= 10;
Data Output Format The workflow outputs structured JSON data:
{
  "route": "Line 42",
  "expected_departure": "2024-04-22T14:05:00Z",
  "actual_departure": "2024-04-22T14:17:00Z",
  "delay": 12,
  "status": "delayed",
  "reason": "Signal failure at Main Station",
  "scraped_at": "2024-04-22T13:58:22Z",
  "source_url": "https://mycitytransit.com/line/42"
}
Troubleshooting Common Issues ScrapeGraphAI returns empty data – Verify CSS selectors/API fields match the current website markup; update selectors after site redesigns. Teams messages not arriving – Ensure the Teams webhook URL is correct and the incoming webhook is still enabled. Dropbox writes fail – Check folder path, token scopes (files.content.write), and available storage quota. Performance Tips Limit SplitInBatches to 5-10 routes per run to avoid IP blocking. Cache unchanged schedules locally and fetch only alert pages for faster runs. Pro Tips: Use environment variables for API keys & webhook URLs to keep credentials secure. Attach a Cron node set to off-peak hours (e.g., 4 AM) for daily full-schedule backups. Add a Grafana dashboard that reads the Dropbox archive for long-term delay analytics.
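Building on the node descriptions above, a sketch of the "Code (Normalize)" step could look like this: convert the scraped times to ISO, compute the delay in minutes, and emit the structure shown under Data Output Format. Input field names are assumptions, so map them to your ScrapeGraphAI output.

```javascript
// Normalize one route's scraped record and compute delay minutes.
const expected = new Date($json.expected_departure);
const actual = new Date($json.actual_departure);
const delay = Math.max(0, Math.round((actual - expected) / 60000));
return [{
  json: {
    route: $json.route,
    expected_departure: expected.toISOString(),
    actual_departure: actual.toISOString(),
    delay,
    status: delay > 0 ? "delayed" : "on-time",
    reason: $json.reason ?? "",
    scraped_at: new Date().toISOString(),
    source_url: $json.source_url,
  },
}];
```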
by Cheng Siong Chin
How It Works This workflow automates document authenticity verification by combining AI-based content analysis with immutable blockchain records. It is built for compliance teams, legal departments, supply chain managers, and regulators who need tamper-proof validation and auditable proof. The solution addresses the challenge of detecting forged or altered documents while producing verifiable evidence that meets legal and regulatory standards. Documents are submitted via webhook and processed through PDF content extraction. Anthropic’s Claude analyzes the content for authenticity signals such as inconsistencies, anomalies, and formatting issues, returning structured authenticity scores. Verified documents trigger blockchain record creation and publication to a distributed ledger, with cryptographic proofs shared automatically with carriers and regulators through HTTP APIs. Setup Steps Configure webhook endpoint URL for document submission Add Anthropic API key to Chat Model node for AI Set up blockchain network credentials in HTTP nodes for record preparation Connect Gmail account and specify compliance team email addresses Customize authenticity thresholds Prerequisites Anthropic API key, blockchain network access and credentials Use Cases Supply chain documentation verification for import/export compliance Customization Adjust AI prompts for industry-specific authenticity criteria Benefits Eliminates manual document review time while improving fraud detection accuracy
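One way to picture the blockchain record preparation is as a document fingerprint: hash the PDF bytes and anchor the digest on the ledger alongside the AI authenticity score. A minimal n8n Code node sketch is below, assuming built-in Node.js modules are allowed in the Code node, a binary property named `data`, and an `authenticity_score` field from the Claude analysis (all assumptions to adjust to your setup).

```javascript
// Fingerprint the submitted PDF with SHA-256 for the on-chain record (assumed field names).
const crypto = require("crypto");
const pdfBuffer = await this.helpers.getBinaryDataBuffer(0, "data");
const documentHash = crypto.createHash("sha256").update(pdfBuffer).digest("hex");
return [{
  json: {
    documentHash,                                // immutable fingerprint to publish to the ledger
    authenticityScore: $json.authenticity_score, // from the AI analysis step
    verifiedAt: new Date().toISOString(),
  },
}];
```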