by David Ashby
🛠️ Pipedrive Tool MCP Server

Complete MCP server exposing all Pipedrive Tool operations to AI agents. Zero configuration needed - all 45 operations are pre-built.

Need help? Want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

⚡ Quick Setup

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Pipedrive Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Pipedrive tool with full error handling

📋 Available Operations (45 total)

Every possible Pipedrive Tool operation is included:

🔧 Activity (5 operations)
• Create an activity
• Delete an activity
• Get an activity
• Get many activities
• Update an activity

💰 Deal (7 operations)
• Create a deal
• Delete a deal
• Duplicate a deal
• Get a deal
• Get many deals
• Search a deal
• Update a deal

🔧 Deal Activity (1 operation)
• Get many deal activities

🔧 Deal Product (4 operations)
• Add a deal product
• Get many deal products
• Remove a deal product
• Update a deal product

📄 File (5 operations)
• Create a file
• Delete a file
• Download a file
• Get a file
• Update details of a file

🔧 Lead (5 operations)
• Create a lead
• Delete a lead
• Get a lead
• Get many leads
• Update a lead

🔧 Note (5 operations)
• Create a note
• Delete a note
• Get a note
• Get many notes
• Update a note

🏢 Organization (6 operations)
• Create an organization
• Delete an organization
• Get an organization
• Get many organizations
• Search an organization
• Update an organization

👥 Person (6 operations)
• Create a person
• Delete a person
• Get a person
• Get many people
• Search a person
• Update a person

🔧 Product (1 operation)
• Get many products

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Pipedrive API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every Pipedrive Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
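As an illustration of how those $fromAI() placeholders behave, here is a simplified JavaScript sketch of the substitution: the AI agent's tool-call arguments are spliced into the node's parameter template. The parameter names (`dealId`) are hypothetical examples, and the real resolution happens inside n8n's expression engine, not in user code.

```javascript
// Simplified stand-in for $fromAI() placeholder resolution.
// Not n8n's actual implementation - illustrative only.
function resolveFromAI(paramTemplate, aiArgs) {
  // Replace each $fromAI('name') placeholder with the value the
  // AI agent supplied in its tool call.
  return paramTemplate.replace(/\$fromAI\('([^']+)'\)/g, (_, name) => {
    if (!(name in aiArgs)) throw new Error(`Missing AI argument: ${name}`);
    return String(aiArgs[name]);
  });
}

// An AI agent calling a hypothetical "Update a deal" tool might supply:
const aiArgs = { dealId: 42 };
console.log(resolveFromAI("deals/$fromAI('dealId')", aiArgs)); // "deals/42"
```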
by David Ashby
Need help? Want access to this workflow plus many more paid workflows and live Q&A sessions with a top verified n8n creator? Join the community.

Complete MCP server exposing 23 AWS Budgets API operations to AI agents.

⚡ Quick Setup

1. Import this workflow into your n8n instance
2. Add AWS Budgets credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the AWS Budgets API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://budgets.amazonaws.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (23 total)

All operations are POST requests to https://budgets.amazonaws.com, with the target operation selected via the X-Amz-Target header (AWSBudgetServiceGateway.OperationName):

• CreateBudget: Creates a budget and, if included, notifications and subscribers. Important: only one of BudgetLimit or PlannedBudgetLimits can be present in the request at one time; use the syntax that matches your case. The Request Syntax section shows the BudgetLimit syntax; for PlannedBudgetLimits, see the Examples section.
• CreateBudgetAction: Creates a budget action.
• CreateNotification: Creates a notification. You must create the budget before you create the associated notification.
• CreateSubscriber: Creates a subscriber. You must create the associated budget and notification before you create the subscriber.
• DeleteBudget: Deletes a budget. You can delete your budget at any time. Important: deleting a budget also deletes the notifications and subscribers associated with that budget.
• DeleteBudgetAction: Deletes a budget action.
• DeleteNotification: Deletes a notification. Important: deleting a notification also deletes the subscribers associated with it.
• DeleteSubscriber: Deletes a subscriber. Important: deleting the last subscriber to a notification also deletes the notification.
• DescribeBudget: Describes a budget. The Request Syntax section shows the BudgetLimit syntax; for PlannedBudgetLimits, see the Examples section.
• DescribeBudgetAction: Describes a budget action detail.
• DescribeBudgetActionHistories: Describes a budget action history detail.
• DescribeBudgetActionsForAccount: Describes all of the budget actions for an account.
• DescribeBudgetActionsForBudget: Describes all of the budget actions for a budget.
• DescribeBudgetNotificationsForAccount: Lists the budget names and notifications that are associated with an account.
• DescribeBudgetPerformanceHistory: Describes the history for DAILY, MONTHLY, and QUARTERLY budgets. Budget history isn't available for ANNUAL budgets.
• DescribeBudgets: Lists the budgets that are associated with an account. The Request Syntax section shows the BudgetLimit syntax; for PlannedBudgetLimits, see the Examples section.
• DescribeNotificationsForBudget: Lists the notifications that are associated with a budget.
• DescribeSubscribersForNotification: Lists the subscribers that are associated with a notification.
• ExecuteBudgetAction: Executes a budget action.
• UpdateBudget: Updates a budget. You can change every part of a budget except for the budgetName and the calculatedSpend.
When you modify a budget, the calculatedSpend drops to zero until Amazon Web Services has new usage data to use for forecasting. Important: only one of BudgetLimit or PlannedBudgetLimits can be present in the request at one time; use the syntax that matches your case.
• UpdateBudgetAction: Updates a budget action.
• UpdateNotification: Updates a notification.
• UpdateSubscriber: Updates a subscriber.

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native AWS Budgets API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
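For context on the operation naming above: AWS Budgets is a single-endpoint JSON 1.1 API, so every HTTP Request node posts to the same URL and selects the operation with the X-Amz-Target header. A hedged sketch of that request shape (the account ID and budget name below are illustrative; real calls are also SigV4-signed, which the n8n AWS credential handles):

```javascript
// Build the request shape an HTTP Request node sends to AWS Budgets.
// The operation is selected via the X-Amz-Target header.
function buildBudgetsRequest(operation, payload) {
  return {
    method: "POST",
    url: "https://budgets.amazonaws.com/",
    headers: {
      "Content-Type": "application/x-amz-json-1.1",
      "X-Amz-Target": `AWSBudgetServiceGateway.${operation}`,
    },
    body: JSON.stringify(payload),
  };
}

const req = buildBudgetsRequest("DescribeBudget", {
  AccountId: "123456789012", // hypothetical account ID
  BudgetName: "monthly-cost-budget", // hypothetical budget name
});
console.log(req.headers["X-Amz-Target"]); // "AWSBudgetServiceGateway.DescribeBudget"
```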
by David Ashby
Complete MCP server exposing 21 api.clarify.io API operations to AI agents.

Need help? Want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

⚡ Quick Setup

1. Import this workflow into your n8n instance
2. Add api.clarify.io credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the api.clarify.io API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.clarify.io/
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (21 total)

🔧 V1 (21 endpoints)
• GET /v1/bundles: Add Media to Track
• POST /v1/bundles: Create a bundle
• DELETE /v1/bundles/{bundle_id}: Delete a bundle
• GET /v1/bundles/{bundle_id}: Get a bundle
• PUT /v1/bundles/{bundle_id}: Update a bundle
• GET /v1/bundles/{bundle_id}/insights: Get bundle insights
• POST /v1/bundles/{bundle_id}/insights: Request an insight to be run
• GET /v1/bundles/{bundle_id}/insights/{insight_id}: Get a bundle insight
• DELETE /v1/bundles/{bundle_id}/metadata: Delete bundle metadata
• GET /v1/bundles/{bundle_id}/metadata: Get bundle metadata
• PUT /v1/bundles/{bundle_id}/metadata: Update bundle metadata
• DELETE /v1/bundles/{bundle_id}/tracks: Delete bundle tracks
• GET /v1/bundles/{bundle_id}/tracks: Get bundle tracks
• POST /v1/bundles/{bundle_id}/tracks: Add a track to a bundle
• PUT /v1/bundles/{bundle_id}/tracks: Update tracks for a bundle
• DELETE /v1/bundles/{bundle_id}/tracks/{track_id}: Delete a bundle track
• GET /v1/bundles/{bundle_id}/tracks/{track_id}: Get a bundle track
• PUT /v1/bundles/{bundle_id}/tracks/{track_id}: Add media to a track
• GET /v1/reports/scores: Generate Group Report
• GET /v1/reports/trends: Generate Trends Report
• GET /v1/search: Search Bundles

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native api.clarify.io API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
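A quick sketch of how path parameters such as {bundle_id} and {track_id} are filled in before the HTTP call. In the workflow the values arrive via $fromAI() expressions; this standalone helper is illustrative only:

```javascript
// Substitute {name} path parameters into an endpoint template.
function fillPath(template, params) {
  return template.replace(/\{(\w+)\}/g, (_, key) => {
    if (params[key] === undefined) throw new Error(`Missing path parameter: ${key}`);
    return encodeURIComponent(params[key]);
  });
}

const path = fillPath("/v1/bundles/{bundle_id}/tracks/{track_id}", {
  bundle_id: "b123", // hypothetical IDs
  track_id: "t456",
});
console.log("https://api.clarify.io" + path);
// "https://api.clarify.io/v1/bundles/b123/tracks/t456"
```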
by David Ashby
Complete MCP server exposing 18 Bufferapp API operations to AI agents.

Need help? Want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

⚡ Quick Setup

1. Import this workflow into your n8n instance
2. Add Bufferapp credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Bufferapp API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.bufferapp.com/1/
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (18 total)

🔧 Info (1 endpoint)
• GET /info/configuration{mediaTypeExtension}: Get Configuration

🔧 Links (1 endpoint)
• GET /links/shares{mediaTypeExtension}: Get Link Shares

🔧 Profiles (7 endpoints)
• POST /profiles/{id}/schedules/update{mediaTypeExtension}: Update Profile Schedules
• GET /profiles/{id}/schedules{mediaTypeExtension}: Get Profile Schedules
• GET /profiles/{id}/updates/pending{mediaTypeExtension}: Get Pending Updates
• POST /profiles/{id}/updates/reorder{mediaTypeExtension}: Reorder Profile Updates
• GET /profiles/{id}/updates/sent{mediaTypeExtension}: Get Sent Updates
• POST /profiles/{id}/updates/shuffle{mediaTypeExtension}: Shuffle Profile Updates
• GET /profiles/{id}{mediaTypeExtension}: Get Profile Details

🔧 Profiles{mediaTypeExtension} (1 endpoint)
• GET /profiles{mediaTypeExtension}: List Profiles

🔧 Updates (7 endpoints)
• POST /updates/create{mediaTypeExtension}: Create Status Update
• POST /updates/{id}/destroy{mediaTypeExtension}: Delete Status Update
• GET /updates/{id}/interactions{mediaTypeExtension}: Get Update Interactions
• POST /updates/{id}/move_to_top{mediaTypeExtension}: Move Update to Top
• POST /updates/{id}/share{mediaTypeExtension}: Share Update Now
• POST /updates/{id}/update{mediaTypeExtension}: Edit Status Update
• GET /updates/{id}{mediaTypeExtension}: Get Update Details

🔧 User{mediaTypeExtension} (1 endpoint)
• GET /user{mediaTypeExtension}: Get User Details

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Bufferapp API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
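Most Bufferapp paths above end in an optional {mediaTypeExtension} segment (for example ".json"). A small illustrative helper showing how such a URL is composed against the base URL used by the HTTP Request nodes; the profile ID is a made-up example:

```javascript
// Compose a Bufferapp API URL from a path template, an id, and the
// optional media-type extension.
function bufferUrl(pathTemplate, id, mediaTypeExtension = ".json") {
  return (
    "https://api.bufferapp.com/1" +
    pathTemplate
      .replace("{id}", encodeURIComponent(id))
      .replace("{mediaTypeExtension}", mediaTypeExtension)
  );
}

console.log(bufferUrl("/profiles/{id}/updates/pending{mediaTypeExtension}", "abc123"));
// "https://api.bufferapp.com/1/profiles/abc123/updates/pending.json"
```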
by Yashraj singh sisodiya
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

ATS Resume Maker Workflow Explanation

Aim

The aim of the "ATS Resume Maker according to JD" workflow is to automate the creation of an ATS-friendly resume by tailoring a candidate's resume to a specific job description (JD). It streamlines the process of aligning resume content with JD requirements, producing a professional, scannable PDF resume that can be stored in Google Drive.

Goal

The goal is to:
- Allow users to input their resume (text or PDF) and a JD (PDF) via a web form.
- Extract and merge the text from both inputs.
- Use AI to customize the resume, prioritizing JD keywords while maintaining the candidate's truthful information.
- Generate a clean, ATS-optimized HTML resume and convert it to a downloadable PDF.
- Upload the final PDF to Google Drive for easy access.

This ensures the resume is optimized for Applicant Tracking Systems (ATS), which employers use to screen resumes, by incorporating relevant keywords and maintaining a simple, scannable format.

Requirements

The workflow relies on specific components and configurations:

**n8n Platform**: The automation tool hosting the workflow.

**Node Requirements**:
- Form Trigger: A web form to collect user inputs (resume text/PDF, JD PDF).
- Process one binary file1: JavaScript to rename and organize PDF inputs.
- Extracting resume1: Extracts text from PDF files.
- Merge Resume + JD1: Combines resume and JD text into a single string.
- Customize resume1: Uses Perplexity AI to generate an ATS-friendly HTML resume.
- HTML format1: Cleans the HTML output by removing newlines.
- HTML3: Processes HTML for potential display or validation.
- HTML to PDF: Converts the HTML resume to a PDF file.
- Upload file: Saves the PDF to a specified Google Drive folder.

**Credentials**:
- CustomJS account for the HTML-to-PDF conversion API.
- Google Drive account for file uploads.
- Perplexity account for AI-driven resume customization.

**Input Requirements**:
- Resume (plain text or PDF).
- Job description (PDF).

**Output**: A tailored, ATS-friendly resume in PDF format, uploaded to Google Drive.

API Usage

The workflow integrates multiple APIs to achieve its functionality:

**Perplexity API**: Used in the *Customize resume1* node to leverage the sonar-reasoning model for generating an ATS-optimized HTML resume. The API processes the merged resume and JD text, aligning content with JD keywords while adhering to strict HTML and CSS guidelines (e.g., Arial font, no colors, single-column layout). [Ref: Workflow JSON]

**CustomJS API**: Used in the *HTML to PDF* node to convert the cleaned HTML resume into a PDF file. This API ensures the resume is transformed into a downloadable format suitable for ATS systems. [Ref: Workflow JSON]

**Google Drive API**: Used in the *Upload file* node to store the final PDF in a designated Google Drive folder (the Resume folder in My Drive). This API handles secure file uploads using OAuth2 authentication. [Ref: Workflow JSON]

These APIs are critical for AI-driven customization, PDF generation, and cloud storage, ensuring a seamless end-to-end process.

HTML to PDF Conversion

The HTML-to-PDF conversion is a key step in the workflow, handled by the *HTML to PDF* node:

- **Process**: The node takes the cleaned HTML resume ($json.cleanedResponse) from the *HTML format1* node and uses the @custom-js/n8n-nodes-pdf-toolkit.html2Pdf node to convert it into a PDF.
- **API**: Relies on the CustomJS API for high-fidelity conversion, ensuring the PDF retains the ATS-friendly structure (e.g., no graphics, clear text hierarchy).
- **Output**: A binary PDF file passed to the *Upload file* node.
- **Relevance**: This step ensures the resume is in a widely accessible format, suitable for downloading or sharing with employers.

The use of a dedicated API aligns with industry practices for HTML-to-PDF conversion, as seen in services like PDFmyURL or PDFCrowd, which offer similar REST API capabilities for converting HTML to PDF with customizable layouts. Ref: https://pdfmyurl.com/

Download from Community Link

The workflow does not include a direct download link for the final PDF, but the *Upload file* node stores the PDF in Google Drive, making it accessible via a shared folder or link.

Workflow Summary

The "ATS Resume Maker according to JD" workflow automates the creation of a tailored, ATS-friendly resume by:
1. Collecting user inputs via a web form (Form Trigger).
2. Processing and extracting text from PDFs (Process one binary file1, Extracting resume1).
3. Merging and customizing the content using Perplexity AI (Merge Resume + JD1, Customize resume1).
4. Formatting and converting the resume to PDF (HTML format1, HTML3, HTML to PDF).
5. Uploading the PDF to Google Drive (Upload file).

The workflow leverages APIs for AI processing, PDF conversion, and cloud storage, ensuring a professional output optimized for ATS systems. Community sharing can be enabled via Google Drive links or external platforms. Ref: https://pdfmyurl.com/

Timestamp: 02:54 PM IST, Wednesday, August 20, 2025
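To make the cleaning step concrete, here is a minimal sketch of the kind of transformation the *HTML format1* node performs before $json.cleanedResponse reaches the *HTML to PDF* node. The exact node code may differ; stripping markdown code fences from the LLM output is an assumed safeguard, not a confirmed detail of the workflow.

```javascript
// Illustrative HTML cleanup: strip markdown fences an LLM may wrap
// around its output, then collapse newlines and repeated whitespace.
const FENCE = "`".repeat(3); // three backticks, built up to avoid a literal fence

function cleanHtml(llmOutput) {
  return llmOutput
    .split(FENCE + "html").join("") // drop an opening html fence (assumed safeguard)
    .split(FENCE).join("")          // drop any remaining fences
    .replace(/\r?\n/g, " ")         // collapse newlines to spaces
    .replace(/\s{2,}/g, " ")        // squeeze repeated whitespace
    .trim();
}

const raw = FENCE + "html\n<html>\n  <body><h1>Jane Doe</h1></body>\n</html>\n" + FENCE;
const cleanedResponse = cleanHtml(raw);
console.log(cleanedResponse); // "<html> <body><h1>Jane Doe</h1></body> </html>"
```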
by WeblineIndia
Gold vs Equity Performance Comparison Tracker with Visual Insights

This automated n8n workflow evaluates the historical performance of gold against equity markets. It extracts daily price data from Google Sheets, calculates comparative returns and uses an AI agent to generate actionable investment insights. Finally, it creates a visual performance chart and emails a smartly formatted HTML report, triggering a high-priority alert if the performance gap exceeds a defined threshold.

Quick Implementation Steps

1. Import the Workflow: Upload the downloaded JSON file into your n8n workspace.
2. Connect Credentials: Authenticate your Google Sheets, Gmail and Groq API accounts in their respective nodes.
3. Map Your Data: Select your specific Google Sheet documents for both the Gold and Equity data-fetching nodes.
4. Set Your Parameters: Open the Set Analysis Parameters node to define your target date range and performance gap threshold.
5. Execute: Click "Test Workflow" to generate and receive your first automated financial comparison report.

What It Does

This workflow acts as an automated financial analyst. It begins by pulling day-by-day pricing for two distinct assets, Gold and Equity, from standard Google Sheets. A custom script then merges this data, ensuring dates match up perfectly while filtering out any information outside of your target date window. Once the data is aligned, the workflow calculates the percentage returns for both assets and determines the exact performance difference.

Instead of just presenting raw numbers, the workflow passes these calculated metrics to an advanced AI agent powered by Llama-3. The AI is prompted to step into the role of an investment advisor, evaluating the numbers to declare a "winner", providing realistic market context and suggesting a strategic portfolio allocation (e.g., 60% Equity / 40% Gold) based strictly on the provided data.

To wrap it all up, the system generates a dynamic line chart URL using QuickChart.io.
It packages the chart, the raw numbers and the AI's written insights into a clean HTML email. If one asset drastically outperforms the other (based on a threshold you set), the system routes the email as a special "ALERT". Finally, it logs a summary of the report back into a fresh Google Sheet for long-term record keeping.

Who's It For

This workflow is perfect for financial analysts, portfolio managers, wealth advisors and self-directed investors who want to automate their market tracking. It is highly beneficial for teams that need consistent, data-backed comparative reporting without the manual labor of crunching spreadsheet numbers and drafting summaries every week.

Requirements to Use This Workflow

- An active n8n instance (compatible with self-hosted version 2.1.5 or newer).
- A Google Workspace account to authenticate both the Google Sheets and Gmail nodes.
- A Groq API account to power the Llama-3 language model for AI insights.
- A Google Sheet populated with daily historical prices for Gold and Equity.

How It Works & Set Up

1. Define Your Analysis Scope
Start at the Set Analysis Parameters node. Here, you will define the startDate, endDate and the threshold percentage. This threshold is the performance gap required to trigger an urgent alert rather than a standard report.

2. Ingest the Market Data
The workflow branches into two Google Sheets nodes (Fetch Gold Prices and Fetch Equity Prices). You will need to select your Google account credentials and point these nodes to the specific worksheets containing your date and price columns.

3. Merge and Calculate
The Merge Market Data node uses JavaScript to combine both data streams into a single timeline. The subsequent Calculate Performance Metrics node does the math, calculating the total percentage return for both assets over your chosen timeframe.

4. Generate AI Insights
The Generate AI Investment Insights Langchain agent takes the calculated returns and sends them to the Groq language model.
Make sure your Groq credentials are active in the attached Insights model node. The AI outputs a structured JSON response containing the market summary and allocation advice.

5. Charting and Delivery
While the AI processes text, the Generate Chart node transforms the price arrays into a QuickChart visual. Everything is combined in the Generate Final Report node, which builds the HTML structure. Finally, the Check Performance Gap node decides whether to trigger the Send Report Email or the Send Alert Email.

How To Customize Nodes

- **Set Analysis Parameters**: Update this node before every manual run to target different weeks, months or quarters.
- **Generate AI Investment Insights**: Open the system prompt options in this node to change the AI's "personality". You can ask it to be more conservative, more aggressive or to focus strictly on macroeconomic trends.
- **Generate Chart**: Open the JavaScript code in this node to customize the aesthetics. You can change line colors, adjust the line tension or switch the chart type from "line" to "bar".
- **Email Nodes**: Customize the HTML body or change the target email addresses. You can add CCs or BCCs for broader team distribution.

Add-ons

- **Slack / Discord Integration**: Swap the Gmail nodes for messaging-app nodes to drop these reports directly into a company finance channel.
- **Live Data APIs**: Replace the Google Sheets fetch nodes with direct HTTP requests to Yahoo Finance or Alpha Vantage to pull real-time market data on the fly.
- **PDF Generation**: Add a tool to convert the generated HTML payload into a polished PDF document, making it easier to attach to client emails.

Use Case Examples

- Weekly Wealth Management Reporting: Automatically send weekly asset comparison summaries to high-net-worth clients to keep them informed on their portfolio balances.
- Automated Wealth Plan Generator: Feed the AI's allocation advice from this workflow directly into a broader wealth-planning system to calculate user eligibility and adjust debt-to-equity ratios.
- Market Volatility Alerts: Run this workflow daily on a schedule. If safe-haven assets (Gold) suddenly spike in comparison to risk assets (Equity), your team receives an immediate warning to adjust trading strategies.
- Crypto vs. Traditional Markets: Repurpose the workflow by simply changing the input sheets to compare Bitcoin performance against traditional S&P 500 index funds.
- Real Estate vs. Stocks: Adjust the data sources to compare local housing-market indices against stock-market growth over a multi-year period.

Troubleshooting Guide

| Issue | Possible Cause | Solution |
| :--- | :--- | :--- |
| Workflow fails at "Fetch Prices" nodes | Google Sheets credentials expired or the Sheet ID is incorrect. | Re-authenticate your Google OAuth2 credentials and ensure you have selected the correct document and sheet tab from the node dropdowns. |
| "Invalid JSON from AI" error | The Groq LLM returned conversational text (like "Here is your data:") instead of raw JSON. | Open the Generate AI Investment Insights node and ensure the system prompt strictly demands "Output ONLY valid JSON." You may also need to adjust the temperature setting on the Llama model. |
| Chart image is broken in email | The data arrays are empty or the QuickChart URL exceeded character limits. | Verify that the Merge Market Data node successfully matched dates for both assets. If comparing years of data, consider calculating weekly averages instead of daily values to shorten the URL string. |
| No emails are being received | Gmail node misconfigured or blocked by Google security. | Check the Gmail credential connection. Ensure the recipient email address is valid and check your spam folder. |
| Google Sheets history not updating | The Store Report History node is mapping to the wrong column headers. | Ensure your destination Google Sheet has exact column headers for "Date", "Winner", "Summary" and "Report" as defined in the node's schema. |

Need Help?

Running into hurdles getting this workflow perfectly tuned for your specific financial datasets? Whether you need help configuring the Groq AI prompts, adjusting the custom JavaScript parsing logic or building out more advanced add-ons like dynamic API integrations, our n8n automation team at WeblineIndia is here to assist. Feel free to reach out and contact WeblineIndia for expert n8n consultation. We can help you troubleshoot, customize and build the perfect automation architecture tailored to your exact business needs.
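To make the core math concrete, here is a minimal JavaScript sketch of the calculation performed by the Calculate Performance Metrics and Check Performance Gap nodes. The field names and the 5% example threshold are illustrative assumptions, not the workflow's exact schema.

```javascript
// Compute percentage returns for two price series, the gap between
// them, and whether the gap crosses the alert threshold.
function comparePerformance(goldPrices, equityPrices, thresholdPct) {
  const pctReturn = (series) =>
    ((series[series.length - 1] - series[0]) / series[0]) * 100;

  const goldReturn = pctReturn(goldPrices);
  const equityReturn = pctReturn(equityPrices);
  const gap = Math.abs(goldReturn - equityReturn);

  return {
    goldReturn: +goldReturn.toFixed(2),
    equityReturn: +equityReturn.toFixed(2),
    gap: +gap.toFixed(2),
    alert: gap > thresholdPct, // would route to the "ALERT" email branch
    winner: goldReturn >= equityReturn ? "Gold" : "Equity",
  };
}

// Example: gold up 4%, equity up 12%, with a 5% alert threshold.
const result = comparePerformance([2000, 2080], [100, 112], 5);
console.log(result);
// { goldReturn: 4, equityReturn: 12, gap: 8, alert: true, winner: "Equity" }
```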
by Madame AI
Scrape Detailed GitHub Profiles to Google Sheets Using BrowserAct

This template is a sophisticated data enrichment and reporting tool that scrapes detailed GitHub user profiles and organizes the information into dedicated, structured reports within a Google Sheet. This workflow is essential for technical recruiters, talent acquisition teams and business intelligence analysts who need to dive deep into a pre-qualified list of developers to understand their recent activity, repositories and technical footprint.

Self-Hosted Only

This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works

1. The workflow is triggered manually but can be started by a Schedule Trigger or by integrating directly with a candidate-sourcing workflow (like the "Source Top GitHub Contributors" template).
2. A Google Sheets node reads a list of target GitHub user profile URLs from a master candidate sheet.
3. The Loop Over Items node processes each user one by one.
4. A Slack notification is sent at the beginning of the loop to announce that the scraping process has started for the user.
5. A BrowserAct node visits the user's GitHub profile URL and scrapes all available data, including profile info, repositories and social links.
6. A custom Code node (labeled "Code in JavaScript") performs a critical task: it cleans, fixes and consolidates the complex, raw scraped data into a single, clean JSON object.
7. The workflow then dynamically manages your output. It creates a new sheet dedicated to the user (named after them) and clears it to ensure a fresh report every time.
8. The consolidated data is separated into three paths: main profile data, links and repositories.
9. Three final Google Sheets nodes append the structured data to the user's dedicated sheet, creating a clear, multi-section report (User Data, User Links, User Repositories).
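A hedged sketch of the consolidation performed in step 6, merging raw scraped fields into one clean object with the three sections that the Google Sheets nodes write (profile, links, repositories). The input field names below are assumptions for illustration, not BrowserAct's actual output schema.

```javascript
// Consolidate raw scraped GitHub profile data into one clean object.
// Field names (username, social_links, repos, ...) are illustrative.
function consolidateProfile(raw) {
  return {
    profile: {
      username: raw.username ?? "",
      name: raw.name ?? "",
      followers: Number(raw.followers) || 0, // scraped values often arrive as strings
    },
    links: (raw.social_links ?? []).filter(Boolean), // drop null/empty entries
    repositories: (raw.repos ?? []).map((r) => ({
      name: r.name,
      stars: Number(r.stars) || 0,
    })),
  };
}

const clean = consolidateProfile({
  username: "octocat",
  name: "The Octocat",
  followers: "4200",
  social_links: ["https://octocat.dev", null],
  repos: [{ name: "hello-world", stars: "80" }],
});
console.log(clean.links.length, clean.repositories[0].stars); // 1 80
```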
Requirements
- BrowserAct API account for web scraping
- BrowserAct "Scraping GitHub Users Activity & Data" template
- BrowserAct "Source Top GitHub Contributors by Language & Location" template output
- BrowserAct n8n community node (n8n Nodes BrowserAct)
- Google Sheets credentials for input (candidate list) and structured output (individual user sheets)
- Slack credentials for sending notifications

Need Help?
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase
- GitHub Data Mining: Extracting User Profiles & Repositories with n8n
by Ranjan Dailata
This workflow automatically scrapes Amazon price-drop data via Decodo, extracts structured product details with OpenAI, generates summaries and sentiment insights for each item, and saves everything to Google Sheets, creating a fully automated price-intelligence pipeline.

Disclaimer
Please note: this workflow is only available on n8n self-hosted, as it makes use of the community node for Decodo web scraping.

Who this is for
This workflow is designed for e-commerce analysts, product researchers, price-tracking teams, and affiliate marketers who want to:
- Monitor daily Amazon product price drops automatically.
- Extract key information such as product name, price, discount, and links.
- Generate AI-driven summaries and sentiment insights on the latest deals.
- Store all structured data directly in Google Sheets for trend analysis and reporting.

What problem this workflow solves
- Eliminates the need for manual data scraping or tracking.
- Turns unstructured web data into structured datasets.
- Adds AI-generated summaries and sentiment analysis for smarter decision-making.
- Enables automated, daily price-intelligence tracking across multiple product categories.

What this workflow does
This automation combines Decodo's web scraping, OpenAI GPT-4.1-mini, and Google Sheets to deliver an end-to-end price-intelligence system.
1. Trigger & Setup: Manually start the workflow and input your price-drop URL (default: CamelCamelCamel Daily Drops).
2. Web Scraping via Decodo: Decodo scrapes the Amazon price-drop listings and extracts product details (title, price, savings, product link).
3. LLM-Powered Data Structuring: The extracted content is sent to OpenAI GPT-4.1-mini to format and clean the output into structured JSON fields.
4. Loop & Deep Analysis: Each product URL is revisited by Decodo for content enrichment, and the AI performs two analyses per product:
   - Summarization: generates a comprehensive summary of the product.
   - Sentiment Analysis: detects tone (positive/neutral/negative), a sentiment score, and key topics.
5. Aggregation & Storage: All enriched results are merged and aggregated, and the structured data is automatically appended to a connected Google Sheet.

End result: a ready-to-use dataset showing each price-dropped product, its summary, sentiment polarity, and key highlights, updated in real time.

Setup
Prerequisite: install the n8n community node for Decodo.
1. Import and Connect Credentials: Import the workflow into your n8n self-hosted instance, then connect:
   - OpenAI API (GPT-4.1-mini) for summarization and sentiment analysis
   - Decodo API for real-time price-drop scraping
   - Google Sheets OAuth2 to save structured results
2. Configure Input Fields: In the "Set input fields" node, update price_drop_url to your target URL (e.g., https://camelcamelcamel.com/top_drops?t=weekly).
3. Run the Workflow: Click "Execute Workflow", or schedule it to run daily to automatically fetch and analyze new price-drop listings.
4. Check Output: The aggregated data is saved to a Google Sheet (Pricedrop Info). Each record contains:
   - Product name
   - Current price and savings
   - Product link
   - AI-generated summary
   - Sentiment classification and score

How to customize this workflow
- Change Source: Replace price_drop_url with another CamelCamelCamel or Amazon Deals URL, or add multiple URLs and loop through them for category-based price tracking.
- Modify Extraction Schema: In the Structured Output Parser, extend the JSON schema with fields like category, brand, rating, or availability.
- Tune AI Prompts: Edit the Summarize Content and Sentiment Analysis nodes to add tone analysis (e.g., promotional vs. factual) or competitive product comparison.
- Integrate More Destinations: Replace Google Sheets with Airtable (no-code dashboards), PostgreSQL/MySQL (large-scale storage), or Notion/Slack (instant price-drop alerts).
- Automate Scheduling: Add a Cron Trigger node to run this workflow daily or hourly.
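To make the extraction schema concrete, here is a hedged sketch of what one structured price-drop record might look like, with a small guard you could adapt in a Code node before appending to Google Sheets. The field names are illustrative assumptions, not necessarily the template's exact schema:

```javascript
// Illustrative shape of one structured record from the LLM. The exact
// field names in the template's Structured Output Parser may differ.
const exampleItem = {
  product_name: 'Example Widget',
  current_price: 19.99,
  savings: '25%',
  product_link: 'https://www.amazon.com/dp/EXAMPLE',
};

// Minimal validation: reject items the LLM returned with missing or
// mistyped fields before they reach the spreadsheet.
function isValidItem(item) {
  return (
    typeof item.product_name === 'string' &&
    typeof item.current_price === 'number' &&
    typeof item.savings === 'string' &&
    typeof item.product_link === 'string'
  );
}

console.log(isValidItem(exampleItem)); // true
```

A guard like this pairs well with the workflow's "invalid AI JSON output" error handling: bad items can be filtered out instead of corrupting the sheet.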
Summary
This workflow creates a fully automated price-intelligence system that:
- Scrapes Amazon product price drops via Decodo.
- Extracts structured data with OpenAI GPT-4.1-mini.
- Generates AI-powered summaries and sentiment insights.
- Updates a connected Google Sheet with each run.
by Jose Castillo
This workflow scrapes Google Maps business listings (e.g., carpenters in Tarragona) to extract websites and email addresses, perfect for lead generation, local business prospecting, or agency outreach.

🔧 How it works
1. Manual Trigger: start manually using the "Test Workflow" button.
2. Scrape Google Maps: fetches the HTML from a Google Maps search URL.
3. Extract URLs: parses all business links from the page.
4. Filter Google URLs: removes unwanted Google/tracking links.
5. Remove Duplicates + Limit: keeps unique websites (default: 100).
6. Scrape Site: fetches each website's HTML.
7. Extract Emails: detects valid email addresses.
8. Filter Out Empties & Split Out: isolates each valid email per site.
9. (Optional) Add to Google Sheet: appends results to your Sheet.

💼 Use cases
- Local business leads: find emails of carpenters, dentists, gyms, etc., in your city.
- Agency outreach: collect websites and contact emails to pitch marketing services.
- B2B prospecting: identify businesses by niche and region for targeted campaigns.

🧩 Requirements
- n8n instance with HTTP Request and Code nodes enabled.
- (Optional) Google Sheets OAuth2 credentials. Tip: add a "Google Sheets → Append Row" node and connect it to your account.

🔒 Security
No personal or sensitive data is included, only credential references. If sharing this workflow, anonymize the "credentials" field before publishing.
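The listing does not include the Code node used for the "Extract Emails" step, but a minimal sketch along these lines would work in an n8n Code node. The `html` variable here is a hard-coded stand-in for the fetched page content:

```javascript
// Illustrative email extraction from raw HTML. In n8n, `html` would be
// read from the preceding HTTP Request node's output instead.
const html =
  'Contact: <a href="mailto:info@example.com">info@example.com</a> ' +
  'or sales@example.org. Repeated: info@example.com';

// A pragmatic (not RFC-complete) email pattern.
const emailPattern = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;

// De-duplicate with a Set while preserving first-seen order.
const emails = [...new Set(html.match(emailPattern) || [])];

console.log(emails); // ['info@example.com', 'sales@example.org']
```

The `|| []` guard matters: `String.prototype.match` returns `null` when there are no matches, which would otherwise crash the spread.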
by Jitesh Dugar
Talent Sovereign: AI-Powered Resume Screener & Recruitment Hub

🎯 Description
This is an enterprise-grade solution for Talent Acquisition and HR Ops teams. It automates the high-volume task of resume screening by transforming unstructured PDF applications into structured candidate profiles. Leveraging an advanced PDF-to-JSON parsing engine and a multi-factor scoring algorithm, it ensures only the highest-quality candidates reach your CRM while maintaining a professional feedback loop for all applicants.

✨ The Sovereign Lifecycle
1. Intelligent Intake & Validation: Monitors Gmail for new submissions. A pre-validation layer ensures only healthy PDF binaries under 10 MB enter the stream, filtering out noise and irrelevant attachments.
2. Atomic Data Extraction: Uses the HTML to PDF (Parse PDF to JSON) node to decompose resumes into structured text data.
3. Advanced AI Resume Parser: A Code node acts as a virtual recruiter. It extracts contact info and LinkedIn URLs, maps 45+ specific skills across 7 categories (Programming, Cloud, Data, etc.), and even estimates total years of experience by analyzing date ranges within the text.
4. Multi-Factor Scoring & Tiering: Candidates are automatically ranked on a 100-point scale:
   - A+ Tier (90+): exceptional talent; priority alerts.
   - Qualified (70+): standard qualified candidates.
   - Below Threshold: automatically prepared for the rejection track.
5. Smart Routing Matrix:
   - Green track: qualified leads are created in HubSpot CRM, archived in a "Qualified" Google Drive folder, and announced via Slack.
   - Red track: unqualified candidates receive a personalized Gmail rejection including constructive feedback on skills they could improve.
6. Closed-Loop Analytics: Logs every data point to PostgreSQL, calculating funnel metrics such as skill-match percentages and processing latency for continuous hiring-strategy optimization.
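The scoring thresholds described above can be sketched as a tiny routing function. This is a hedged illustration of the tiering logic, not the template's actual Code node:

```javascript
// Map a 0-100 screening score to the tier used by the routing matrix.
// Thresholds follow the listing's description; the function name is
// illustrative.
function tierCandidate(score) {
  if (score >= 90) return 'A+ Tier';     // exceptional talent, priority alerts
  if (score >= 70) return 'Qualified';   // standard green track
  return 'Below Threshold';              // personalized rejection track
}

console.log(tierCandidate(95)); // 'A+ Tier'
console.log(tierCandidate(72)); // 'Qualified'
console.log(tierCandidate(40)); // 'Below Threshold'
```

In the workflow, a function like this would feed an IF/Switch node that routes items to the green (HubSpot/Drive/Slack) or red (Gmail rejection) branch.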
💡 Key Technical Features
- Heuristic Skill Detection: uses NLP pattern matching to identify technical competencies even when they are phrased differently.
- Personalized Rejection Engine: automatically suggests specific skill areas (e.g., Cloud or Certifications) for candidates to work on, based on what was missing from their resume.
- Forensic Archival: maintains a clean, searchable archive of all applicants in a hierarchical cloud structure.

🚀 Benefits
✅ 90% Faster Screening: moves from manual reading to high-level candidate oversight instantly.
✅ Professional Employer Brand: ensures every applicant receives a timely, personalized response.
✅ Data-Driven Hiring: track exactly which sources and skill sets perform best in your funnel.

Tags: #recruitment #hr-tech #resume-parser #ai #hubspot #automation #pdf-to-json
Category: Human Resources | Difficulty: Advanced
by Sergey Skorobogatov
📈 AI Stock Analytics & BCS "Profit" Social Network Publishing Workflow

This workflow automatically generates stock-market insights for selected tickers (e.g., GAZP, SBER, LKOH) using historical data, technical indicators, and an AI model. The results are then sent to Telegram for quick moderation and publishing.

🔑 What this workflow does
- Runs twice a day on a schedule with a predefined list of tickers.
- Fetches historical market data from a broker API.
- Calculates key technical indicators (RSI, EMA/SMA, MACD, Bollinger Bands, ADX).
- Generates an investment post (title + summary) using an LLM.
- Stores results in a PostgreSQL database.
- Sends a draft post to Telegram with inline "Publish" and "Retry" buttons.
- Handles Telegram actions: publishes the post to the final channel or re-runs the generation process.

📌 Key features
- Multi-ticker support in a single run.
- Automatic error handling (e.g., missing data or invalid AI JSON output).
- Human-in-the-loop moderation through Telegram before publishing.
- PostgreSQL integration for history and analytics storage.
- Flexible structure: easy to extend with new tickers, indicators, or publishing channels.

🛠️ Nodes used
- Triggers: Schedule (twice daily) + Telegram Trigger (button callbacks).
- Data: HTTP Request (broker API), Function (technical-analysis calculations).
- AI: OpenAI / OpenRouter with structured JSON output.
- Storage: PostgreSQL (analytics history).
- Messaging: Telegram (drafts and publishing).

🚀 Who is this for
- Fintech startups looking to automate market content.
- Investment bloggers posting daily stock analysis.
- Analysts experimenting with trading strategies on real market data.
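The Function node's indicator math is not included in this listing, but moving averages, the building blocks behind several of the indicators named above, can be sketched like this. A minimal illustration over a closing-price array, not the workflow's actual code:

```javascript
// Simple moving average over a closing-price series: one output value
// per full window of `period` prices.
function sma(values, period) {
  const out = [];
  for (let i = period - 1; i < values.length; i++) {
    const window = values.slice(i - period + 1, i + 1);
    out.push(window.reduce((a, b) => a + b, 0) / period);
  }
  return out;
}

// Exponential moving average, seeded with the SMA of the first `period`
// values, then smoothed with factor k = 2 / (period + 1).
function ema(values, period) {
  const k = 2 / (period + 1);
  let prev = values.slice(0, period).reduce((a, b) => a + b, 0) / period;
  const out = [prev];
  for (let i = period; i < values.length; i++) {
    prev = values[i] * k + prev * (1 - k);
    out.push(prev);
  }
  return out;
}

console.log(sma([1, 2, 3, 4, 5], 3)); // [2, 3, 4]
console.log(ema([1, 2, 3, 4, 5], 3)); // [2, 3, 4]
```

MACD, for instance, builds directly on this: it is the fast EMA minus the slow EMA (commonly periods 12 and 26), with an EMA of that difference (commonly period 9) as the signal line.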
by System Admin
Define the URLs in an array, then call the Firecrawl scrape endpoint for each one:

```shell
curl -X POST https://api.firecrawl.dev/v1/scrape \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
    "url": "https://docs.firecrawl.dev",
    ...
```