by David Ashby
Complete MCP server exposing 15 BulkSMS JSON REST API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add BulkSMS JSON REST API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the BulkSMS JSON REST API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.bulksms.com/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (a hedged example follows this description)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (15 total)

🔧 Blocked-Numbers (2 endpoints)
• GET /blocked-numbers: Block Phone Number
• POST /blocked-numbers: Create a blocked number

🔧 Credit (1 endpoint)
• POST /credit/transfer: Transfer Account Credits

🔧 Messages (5 endpoints)
• GET /messages: List Related Messages
• POST /messages: Send Messages
• GET /messages/send: Send message by simple GET or POST
• GET /messages/{id}: Show Message
• GET /messages/{id}/relatedReceivedMessages: List Related Messages

🔧 Profile (1 endpoint)
• GET /profile: Retrieve User Profile

🔧 Rmm (1 endpoint)
• POST /rmm/pre-sign-attachment: Generate Attachment Upload URL

🔧 Webhooks (5 endpoints)
• GET /webhooks: Update Webhook Settings
• POST /webhooks: Create a webhook
• DELETE /webhooks/{id}: Delete a webhook
• GET /webhooks/{id}: Read a webhook
• POST /webhooks/{id}: Update a webhook

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native BulkSMS JSON REST API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
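To illustrate the AI Expressions point above, the HTTP Request node behind POST /messages could expose its body to the agent roughly as sketched below. The `to`/`body` field names are an assumption about the BulkSMS v1 message schema, not a confirmed copy of the node's configuration.

```javascript
// Hedged sketch: a POST /messages body using $fromAI() placeholders in the HTTP Request node.
// The "to"/"body" field names are an assumption about the BulkSMS v1 schema — verify against the node.
// In the node UI, each $fromAI() call sits inside an n8n expression, i.e. {{ ... }}.
const messageBody = `[
  {
    "to":   "{{ $fromAI('to', 'Recipient phone number in international format', 'string') }}",
    "body": "{{ $fromAI('body', 'Text of the SMS message to send', 'string') }}"
  }
]`;
```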
by David Ashby
Complete MCP server exposing 9 NPR Listening Service API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add NPR Listening Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the NPR Listening Service API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://listening.api.npr.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (9 total)

🔧 V2 (9 endpoints)
• GET /v2/aggregation/{aggId}/recommendations: Get a set of recommendations for an aggregation independent of the user's lis...
• GET /v2/channels: List Available Channels
• GET /v2/history: Get User Ratings History
• GET /v2/organizations/{orgId}/categories/{category}/recommendations: Get a list of recommendations from a category of content from an organization
• GET /v2/organizations/{orgId}/recommendations: Get a variety of details about an organization including various lists of rec...
• GET /v2/promo/recommendations: Get Recent Promo Audio
• POST /v2/ratings: Submit Media Ratings
• GET /v2/recommendations: Get User Recommendations
• GET /v2/search/recommendations: Get Search Recommendations

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native NPR Listening Service API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint (a hedged connection sketch follows this description)
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
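For the Custom AI Apps case, connecting to the MCP URL from your own code could look roughly like the sketch below, assuming the @modelcontextprotocol/sdk JavaScript client with an SSE transport; the URL is a placeholder for the webhook URL you copy from the MCP trigger node.

```javascript
// Hedged sketch: connect a custom app to this workflow's MCP endpoint and list the exposed NPR tools.
// Assumes the @modelcontextprotocol/sdk client; the URL below is a placeholder for your MCP trigger URL.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const transport = new SSEClientTransport(new URL("https://your-n8n-host/mcp/your-path")); // placeholder
const client = new Client({ name: "npr-listening-demo", version: "1.0.0" });

await client.connect(transport);
const { tools } = await client.listTools(); // one tool per exposed Listening Service operation
console.log(tools.map((t) => t.name));
```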
by Matthew
Automated Cold Email Personalization

This workflow automates the creation of highly personalized cold outreach emails by extracting lead data, scraping company websites, and leveraging AI to craft unique email components. This is ideal for sales teams, marketers, and business development professionals looking to scale their outreach efforts while maintaining a high degree of personalization.

**How It Works**
1. Generate Batches: The workflow starts by generating a sequence of numbers, defining how many leads to process in batches.
2. Scrape Lead Data: It uses an external API (Apify) to pull comprehensive lead information, including contact details, company data, and social media links.
3. Fetch Client Data: The workflow then retrieves relevant client details from your Google Sheet based on the scraped data.
4. Scrape Company Website: The lead's company website is automatically scraped to gather content for personalization.
5. Summarize Prospect Data: An OpenAI model analyzes both the scraped website content and the individual's profile data to create concise summaries and identify unique angles for outreach.
6. Craft Personalized Email: A more advanced OpenAI model uses these summaries and specific instructions to generate the "icebreaker," "intro," and "value proposition" components of a personalized cold email.
7. Update Google Sheet: Finally, these generated email components are saved back into your Google Sheet, enriching your lead records for future outreach.

**Google Sheet Structure**
Your Google Sheet must have the following exact column headers to ensure proper data flow:
- **Email** (unique identifier for each lead)
- **Full Name**
- **Headline**
- **LinkdIn**
- **cityName**
- **stateName**
- **company/cityName**
- **Country**
- **Company Name**
- **Website**
- **company/businessIndustry**
- **Keywords**
- **icebreaker** (will be populated by the workflow)
- **intro** (will be populated by the workflow)
- **value_prop** (will be populated by the workflow)

**Setup Instructions**
1. Add Credentials: In n8n, add your OpenAI API key via the Credentials menu. Connect your Google account via the Credentials menu for Google Sheets access. You will also need an Apify API key for the Scraper node.
2. Configure Google Sheets Nodes: Select the Client data and Add email data to sheet nodes. For each, choose your Google Sheets credential, select your spreadsheet, and the specific sheet name. Ensure all column mappings are correct according to the "Google Sheet Structure" section above.
3. Configure Apify Scraper Node: Select the Scraper node. Update the Authorization header with your Apify API token (Bearer KEY). In the JSON Body, set the searchUrl to your Apollo link (or equivalent source URL for lead data).
4. Configure OpenAI Nodes: Select both Summarising prospect data and Creating detailed email nodes. Choose your OpenAI credential from the dropdown. In the Creating detailed email node's prompt, replace PUT YOUR COMPANY INFO HERE with your company's context and verify the target sector for the email generation.
5. Verify Update Node: On the final Add email data to sheet node, ensure the Operation is set to Append Or Update and the Matching Columns field is set to Email.

**Customization Options 💡**
- **Trigger**: Change the When clicking 'Execute workflow' node to an automatic trigger, such as a Cron node for daily runs, or a Google Sheets trigger when new rows are added.
- **Lead Generation**: Modify the Code node to change the number of leads processed per run (currently set to 50); a hedged sketch of this node follows below this list.
- **Scraping Logic**: Adjust the Scraper node's parameters (e.g., count) or replace the Apify integration with another data source if needed.
- **AI Prompting**: Experiment with the prompts in the Summarising prospect data and Creating detailed email OpenAI nodes to refine the tone, style, length, or content focus of the generated summaries and emails.
- **AI Models**: Test different OpenAI models (e.g., gpt-3.5-turbo, gpt-4o) in the OpenAI nodes to find the optimal balance between cost, speed, and output quality.
- **Data Source/CRM**: Replace the Google Sheets nodes with integrations for your preferred CRM (e.g., HubSpot, Salesforce) or a database (e.g., PostgreSQL, Airtable) to manage your leads.
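For orientation, the batch-generation step could be as simple as the Code-node sketch below. This is a guess at the "Generate Batches" logic based on the description (a sequence of numbers, 50 leads per run), not the template's exact node contents.

```javascript
// Hedged sketch of the "Generate Batches" Code node: emit one item per lead to process.
// The description says the count is currently 50 — change LEADS_PER_RUN to adjust it.
const LEADS_PER_RUN = 50;
return Array.from({ length: LEADS_PER_RUN }, (_, i) => ({ json: { index: i + 1 } }));
```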
by Incrementors
Financial Insight Automation: Market Cap to Telegram via Bright Data

📊 Description
An automated n8n workflow that scrapes financial data from Yahoo Finance using Bright Data, processes market cap information, generates visual charts, and sends comprehensive financial insights directly to Telegram for instant notifications.

🚀 How It Works
This workflow operates through a simple three-zone process:
1. Data Input & Trigger – The user submits a keyword (e.g., "AI", "Crypto", "MSFT") through a form trigger that initiates the financial data collection process.
2. Data Scraping & Processing – The Bright Data API discovers and scrapes comprehensive financial data from Yahoo Finance, including market cap, stock prices, company profiles, and financial metrics.
3. Visualization & Delivery – The system generates interactive market cap charts, saves data to Google Sheets for record-keeping, and sends visual insights to Telegram as PNG images.

⚡ Setup Steps
> ⏱️ Estimated Setup Time: 15-20 minutes

Prerequisites
- Active n8n instance (self-hosted or cloud)
- Bright Data account with Yahoo Finance dataset access
- Google account for Sheets integration
- Telegram bot token and chat ID

Step 1: Import the Workflow
- Copy the provided JSON workflow code
- In n8n: go to Workflows → + Add workflow → Import from JSON
- Paste the JSON content and click Import

Step 2: Configure Bright Data Integration
- In n8n: navigate to Credentials → + Add credential → HTTP Header Auth
- Add an Authorization header with the value: Bearer BRIGHT_DATA_API_KEY
- Replace BRIGHT_DATA_API_KEY with your actual API key
- Test the connection to ensure it works properly
> Note: The workflow uses dataset ID gd_lmrpz3vxmz972ghd7 for Yahoo Finance data. Ensure you have access to this dataset in your Bright Data dashboard. (A hedged sketch of the trigger request appears at the end of this description.)

Step 3: Set up Google Sheets Integration
- Create a Google Sheet: go to Google Sheets, create a new spreadsheet, name it "Financial Data Tracker" or similar, and copy the Sheet ID from the URL
- Configure Google Sheets credentials: in n8n, Credentials → + Add credential → Google Sheets OAuth2 API, complete the OAuth setup, and test the connection
- Update the workflow: open the "📊 Filtered Output & Save to Sheet" node, replace YOUR_SHEET_ID with your actual Sheet ID, and select your Google Sheets credential

Step 4: Configure Telegram Bot
- Create a Telegram bot using @BotFather and get your bot token and chat ID
- In n8n: Credentials → + Add credential → Telegram API, then enter your bot token
- Update the "📤 Send Chart on Telegram" node with your chat ID
- Replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID

Step 5: Test and Activate
- Test the workflow: use the form trigger with a test keyword (e.g., "AAPL"), monitor the execution in n8n, verify data appears in Google Sheets, and check for chart delivery on Telegram
- Activate the workflow: turn it on using the toggle switch; the form trigger will be accessible via the provided webhook URL

📋 Key Features
- 🔍 Keyword-Based Discovery: Search companies by keyword, ticker, or industry
- 💰 Comprehensive Financial Data: Market cap, stock prices, earnings, and company profiles
- 📊 Visual Charts: Automatic generation of market cap comparison charts
- 📱 Telegram Integration: Instant delivery of insights to your mobile device
- 💾 Data Storage: Automatic backup to Google Sheets for historical tracking
- ⚡ Real-time Processing: Fast data retrieval and processing with Bright Data

📊 Output Data Points
| Field | Description | Example |
|-------|-------------|---------|
| Company Name | Full company name | "Apple Inc." |
| Stock Ticker | Trading symbol | "AAPL" |
| Market Cap | Total market capitalization | "$2.89T" |
| Current Price | Latest stock price | "$189.25" |
| Exchange | Stock exchange | "NASDAQ" |
| Sector | Business sector | "Technology" |
| PE Ratio | Price to earnings ratio | "28.45" |
| 52 Week Range | Annual high and low prices | "$164.08 - $199.62" |

🔧 Troubleshooting
Common Issues
- Bright Data Connection Failed: Verify your API key is correct and active, check dataset permissions in the Bright Data dashboard, and ensure you have sufficient credits
- Google Sheets Permission Denied: Re-authenticate Google Sheets OAuth, verify sheet sharing settings, and check that the Sheet ID is correct
- Telegram Not Receiving Messages: Verify the bot token and chat ID, check that the bot is added to the chat, and test the Telegram credentials manually

Performance Tips
- Use specific keywords for better data accuracy
- Monitor Bright Data usage to control costs
- Set up error handling for failed requests
- Consider rate limiting for high-volume usage

🎯 Use Cases
- Investment Research: Quick financial analysis of companies and sectors
- Market Monitoring: Track market cap changes and stock performance
- Competitive Analysis: Compare financial metrics across companies
- Portfolio Management: Monitor holdings and potential investments
- Financial Reporting: Generate automated financial insights for teams

🔗 Additional Resources
- n8n Documentation
- Bright Data Datasets
- Google Sheets API
- Telegram Bot API

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
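For reference, triggering the scrape outside n8n would look roughly like the sketch below. Both the Datasets v3 trigger endpoint and the keyword-style input body are assumptions on my part; confirm them against the workflow's HTTP Request node and your Bright Data dashboard.

```javascript
// Rough sketch (assumptions: Bright Data's Datasets v3 trigger endpoint and a keyword-style
// discovery input — verify both against the workflow's HTTP Request node and your dashboard).
const DATASET_ID = "gd_lmrpz3vxmz972ghd7"; // dataset ID used by this workflow
const API_KEY = process.env.BRIGHT_DATA_API_KEY;

const res = await fetch(
  `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}&include_errors=true`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify([{ keyword: "AAPL" }]), // hypothetical input shape for keyword discovery
  }
);
console.log(await res.json()); // typically a snapshot reference you poll for the scraped results
```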
by Le Nguyen
This template implements a recursive web crawler inside n8n. Starting from a given URL, it crawls linked pages up to a maximum depth (default: 3), extracts text and links, and returns the collected content via webhook.

🚀 How It Works
1. Webhook Trigger – Accepts a JSON body with a url field. Example payload: { "url": "https://example.com" }
2. Initialization – Sets crawl parameters: url, domain, maxDepth = 3, and depth = 0. Initializes global static data (pending, visited, queued, pages).
3. Recursive Crawling – Fetches each page (HTTP Request), extracts body text and links (HTML node), then cleans and deduplicates links. Filters out: external domains (only same-site is followed), anchors (#), mailto/tel/javascript links, and non-HTML files (.pdf, .docx, .xlsx, .pptx).
4. Depth Control & Queue – Tracks visited URLs, stops at maxDepth to prevent infinite loops, and uses SplitInBatches to loop the queue.
5. Data Collection – Saves each crawled page (url, depth, content) into pages[]. When pending = 0, combines the results.
6. Output – Responds via the Webhook node with combinedContent (all pages concatenated) and pages[] (array of individual results). Large results are chunked when exceeding ~12,000 characters.

🛠️ Setup Instructions
1. Import Template – Load from n8n Community Templates.
2. Configure Webhook – Open the Webhook node and copy the Test URL (development) or Production URL (after deploy). You'll POST crawl requests to this endpoint.
3. Run a Test – Send a POST with JSON:
```bash
curl -X POST https://<your-n8n>/webhook/<id> \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```
4. View Response – The crawler returns a JSON object containing combinedContent and pages[].

⚙️ Configuration
- **maxDepth** – Default: 3. Adjust in the Init Crawl Params (Set) node.
- **Timeouts** – HTTP Request node timeout is 5 seconds per request; increase if needed.
- **Filtering Rules** – Only same-domain links are followed (apex and www treated as same-site); skips anchors, mailto:, tel:, javascript:; skips document links (.pdf, .docx, .xlsx, .pptx). You can tweak the regex and logic in the Queue & Dedup Links (Code) node (a hedged sketch follows below).

📌 Limitations
- No JavaScript rendering (static HTML only)
- No authentication/cookies/session handling
- Large sites can be slow or hit timeouts; chunking mitigates response size

✅ Example Use Cases
- Extract text across your site for AI ingestion / embeddings
- SEO/content audit and internal link checks
- Build a lightweight page corpus for downstream processing in n8n

⏱️ Estimated Setup Time
~10 minutes (import → set webhook → test request)
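As a reference for what the Queue & Dedup Links node does, here is a minimal sketch of the same-site and file-type filtering described above; it is illustrative and not the template's exact code.

```javascript
// Minimal sketch of the link-filtering rules described above (illustrative, not the template's exact code).
function shouldQueue(href, baseDomain, visited) {
  let url;
  try { url = new URL(href); } catch { return false; }             // drop malformed links
  if (!/^https?:$/.test(url.protocol)) return false;               // skips mailto:, tel:, javascript:
  const host = url.hostname.replace(/^www\./, "");
  if (host !== baseDomain.replace(/^www\./, "")) return false;     // same-site only (apex == www)
  if (/\.(pdf|docx|xlsx|pptx)$/i.test(url.pathname)) return false; // skip non-HTML documents
  url.hash = "";                                                   // ignore #anchors when deduplicating
  return !visited.has(url.toString());
}

// Example: shouldQueue("https://www.example.com/about#team", "example.com", new Set()) → true
```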
by Wan Dinie
AI Website Analyzer to Product Ideas with FireCrawl and GPT-4.1

This n8n template demonstrates how to use AI to analyze any website and generate product ideas or summaries based on the website's content and purpose.

Use cases are many: try analyzing competitor websites, discovering product opportunities, understanding business models, or generating insights from landing pages!

Good to know
- At the time of writing, Firecrawl offers up to 500 free API calls. See Firecrawl Pricing for updated info.
- OpenAI API costs vary by model. GPT-3.5 is cheaper, while GPT-4 and above offer deeper analysis but cost more per request.

How it works
1. We collect a website URL via a manual form trigger.
2. The URL is sent to the Firecrawl API, which deeply crawls and analyzes the website content (a hedged sketch of this request follows at the end of this description).
3. Firecrawl returns the scraped data, including page structure, content, and metadata.
4. The scraped data is then sent to OpenAI's API with a custom prompt.
5. OpenAI generates an AI-powered summary analyzing what the website is doing, its purpose, and potential product ideas.
6. The final output is displayed or can be stored for further use.

How to use
- The manual trigger node is used as an example, but feel free to replace this with other triggers such as a webhook or even a form.
- You can analyze multiple URLs by looping through a list, but of course, the processing will take longer and cost more.

Requirements
- Firecrawl API key (get 500 free calls at https://firecrawl.dev)
- OpenAI API key for AI analysis
- Valid website URLs to analyze

Customizing this workflow
- Change the output format from HTML to JSON, Markdown, or plain text by editing the Firecrawl parameters.
- Modify the AI prompt to focus on specific aspects like pricing strategy, target audience, or UX analysis.
- Upgrade to GPT-4.1, GPT-5.1, or GPT-5.2 for more advanced and detailed analysis.
- Add a webhook trigger to analyze websites automatically from other apps or services.
- Store results in a database like Supabase or Google Sheets for tracking competitor analysis over time.
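For context, the Firecrawl call made by the HTTP Request node is roughly equivalent to the sketch below, assuming Firecrawl's v1 scrape endpoint; verify the exact parameters against the node and the Firecrawl docs.

```javascript
// Rough equivalent of the Firecrawl request (assumes the v1 /scrape endpoint — verify against the workflow).
const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com", // the URL collected by the form trigger
    formats: ["markdown"],      // change this to adjust the output format
  }),
});

const { data } = await res.json();
console.log(data?.markdown);    // scraped content that is then passed to the OpenAI prompt
```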
by vinci-king-01
Certification Requirement Tracker with Rocket.Chat and GitLab

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes certification-issuing bodies once a year, detects any changes in certification or renewal requirements, creates a GitLab issue for the responsible team, and notifies the relevant channel in Rocket.Chat. It helps professionals and compliance teams stay ahead of changing industry requirements and never miss a renewal.

Pre-conditions/Requirements
Prerequisites
- An n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed and activated
- Rocket.Chat workspace with Incoming Webhook or user credentials
- GitLab account with at least one repository and a Personal Access Token (PAT)
- Access URLs for all certification bodies or industry associations you want to monitor

Required Credentials
- **ScrapeGraphAI API Key** – Enables web scraping services
- **Rocket.Chat Credentials** – Either a Webhook URL, or Username & Password / Personal Access Token
- **GitLab Personal Access Token** – To create issues and comments via API

Specific Setup Requirements
| Service | Requirement | Example/Notes |
| ------------- | ---------------------------------------------- | ---------------------------------------------------- |
| Rocket.Chat | Incoming Webhook URL OR user credentials | https://chat.example.com/hooks/abc123… |
| GitLab | Personal Access Token with api scope | Generate at Settings → Access Tokens |
| ScrapeGraphAI | Domain whitelist (if running behind firewall) | Allow outbound HTTPS traffic to target sites |
| Cron Schedule | Annual (default) or custom interval | 0 0 1 1 * for 1-Jan every year |

How it works
This workflow automatically scrapes certification-issuing bodies once a year, detects any changes in certification or renewal requirements, creates a GitLab issue for the responsible team, and notifies the relevant channel in Rocket.Chat. It helps professionals and compliance teams stay ahead of changing industry requirements and never miss a renewal.

Key Steps:
- **Scheduled Trigger**: Fires annually (or any chosen interval) to start the check.
- **Set Node – URL List**: Stores an array of certification-body URLs to scrape.
- **Split in Batches**: Iterates over each URL for parallel scraping.
- **ScrapeGraphAI**: Extracts requirement text, effective dates, and renewal info.
- **Code Node – Diff Checker**: Compares the newly scraped data with last year's GitLab issue (if any) to detect changes (a hedged sketch of this node follows at the end of this description).
- **IF Node – Requirements Changed?**: Routes the flow based on change detection.
- **GitLab – Create/Update Issue**: Opens a new issue or comments on an existing one with details of the change.
- **Rocket.Chat – Notify Channel**: Sends a message summarizing any changes and linking to the GitLab issue.
- **Merge Node**: Collects all branch results for a final summary report.

Set up steps
Setup Time: 15-25 minutes
1. Install Community Node: In n8n, navigate to Settings → Community Nodes and install "ScrapeGraphAI".
2. Add Credentials: a. In Credentials, create "ScrapeGraphAI API". b. Add your Rocket.Chat Webhook or PAT. c. Add your GitLab PAT with api scope.
3. Import Workflow: Copy the JSON template into n8n (Workflows → Import).
4. Configure URL List: Open the Set – URL List node and replace the sample array with real certification URLs.
5. Adjust Cron Expression: Double-click the Schedule Trigger node and set your desired frequency.
6. Customize Rocket.Chat Channel: In the Rocket.Chat – Notify node, set the channel or use an incoming webhook.
7. Run Once for Testing: Execute the workflow manually to ensure issues and notifications are created as expected.
8. Activate Workflow: Toggle Activate so the schedule starts running automatically.

Node Descriptions
Core Workflow Nodes:
- **stickyNote – Workflow Notes**: Contains a high-level diagram and documentation inside the editor.
- **Schedule Trigger** – Initiates the yearly check.
- **Set (URL List)** – Holds certification body URLs and meta info.
- **SplitInBatches** – Iterates through each URL in manageable chunks.
- **ScrapeGraphAI** – Scrapes each certification page and returns structured JSON.
- **Code (Diff Checker)** – Compares the current scrape with historical data.
- **If – Requirements Changed?** – Switches path based on diff result.
- **GitLab** – Creates or updates issues, attaches the JSON diff, sets labels (certification, renewal).
- **Rocket.Chat** – Posts a summary message with links to the GitLab issue(s).
- **Merge** – Consolidates batch results for final logging.
- **Set (Success)** – Formats a concise success payload.

Data Flow:
Schedule Trigger → Set (URL List) → SplitInBatches → ScrapeGraphAI → Code (Diff Checker) → If → GitLab / Rocket.Chat → Merge

Customization Examples
Add Additional Metadata to GitLab Issue
```javascript
// Inside the GitLab "Create Issue" node ↗️
{
  "title": `Certification Update: ${$json.domain}`,
  "description": `What's Changed?\n${$json.diff}\n\n_Last checked: {{$now}}_`,
  "labels": "certification,compliance," + $json.industry
}
```
Customize Rocket.Chat Message Formatting
```javascript
// Rocket.Chat node → JSON parameters
{
  "text": `:bell: Certification Update Detected\n>${$json.domain}\n>See the GitLab issue: ${$json.issueUrl}`
}
```

Data Output Format
The workflow outputs structured JSON data:
```json
{
  "domain": "example-cert-body.org",
  "scrapeDate": "2024-01-01T00:00:00Z",
  "oldRequirements": "Original text …",
  "newRequirements": "Updated text …",
  "diff": "- Continuous education hours increased from 20 to 24\n- Fee changed to $200",
  "issueUrl": "https://gitlab.com/org/compliance/-/issues/42",
  "notification": "sent"
}
```

Troubleshooting
Common Issues
- No data returned from ScrapeGraphAI – Confirm the target site is publicly accessible and not blocking bots. Whitelist the domain or add proper headers via ScrapeGraphAI options.
- GitLab issue not created – Check that the PAT has api scope and the project ID is correct in the GitLab node.
- Rocket.Chat message fails – Verify the webhook URL or credentials and ensure the channel exists.

Performance Tips
- Limit the batch size in SplitInBatches to avoid API rate limits.
- Schedule the workflow during off-peak hours to minimize load.

Pro Tips:
- Store last-year scrapes in a dedicated GitLab repository to create a complete change-log history.
- Use n8n's built-in Execution History Pruning to keep the database slim.
- Add an Error Trigger workflow to notify you if any step fails.
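For reference, a minimal sketch of what the Diff Checker Code node could contain, assuming the field names from the output format above (oldRequirements / newRequirements); the template's actual node may instead compare against the previous GitLab issue body.

```javascript
// Minimal sketch of a Diff Checker Code node ("Run Once for All Items").
// Assumes each item already carries oldRequirements / newRequirements, as in the output format above.
return $input.all().map(({ json }) => {
  const previous = (json.oldRequirements || "").trim();
  const current = (json.newRequirements || "").trim();
  const changed = previous !== current;
  return {
    json: {
      ...json,
      changed, // consumed by the "Requirements Changed?" IF node
      diff: changed ? `- ${previous}\n+ ${current}` : "",
    },
  };
});
```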
by vinci-king-01
How it works
This workflow automatically scrapes commercial real estate listings from LoopNet and sends opportunity alerts to Telegram while logging data to Google Sheets.

Key Steps
- Scheduled Trigger – Runs every 24 hours to collect fresh CRE market data
- AI-Powered Scraping – Uses ScrapeGraphAI to extract property information from LoopNet
- Market Analysis – Analyzes listings for opportunities and generates market insights
- Smart Notifications – Sends Telegram alerts only when investment opportunities are found (a hedged sketch of such a filter follows below)
- Data Logging – Stores daily market metrics in Google Sheets for trend analysis

Set up steps
Setup time: 10-15 minutes
1. Configure ScrapeGraphAI credentials – Add your ScrapeGraphAI API key for web scraping
2. Set up Telegram connection – Connect your Telegram bot and specify the target channel
3. Configure Google Sheets – Set up Google Sheets integration for data logging
4. Customize the LoopNet URL – Update the URL to target specific CRE markets or property types
5. Adjust schedule – Modify the trigger timing based on your market monitoring needs

Keep detailed configuration notes in sticky notes inside your workflow.
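As an illustration only, an opportunity filter ahead of the Telegram alert might look like the sketch below; the field names and threshold are hypothetical and depend entirely on what your ScrapeGraphAI prompt extracts.

```javascript
// Hypothetical opportunity filter (Code node, "Run Once for All Items").
// Field names (capRate, askingPrice) and the 7% threshold are illustrative — adapt them to your extraction schema.
const MIN_CAP_RATE = 7;

const opportunities = $input.all().filter(({ json }) => {
  const capRate = Number(json.capRate);
  return Number.isFinite(capRate) && capRate >= MIN_CAP_RATE && json.askingPrice;
});

// Downstream: send a Telegram alert only if this node returns at least one item.
return opportunities;
```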
by David Ashby
Complete MCP server exposing 14 doqs.dev | PDF filling API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add doqs.dev | PDF filling API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the doqs.dev | PDF filling API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.doqs.dev/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (a hedged example follows this description)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (14 total)

🔧 Designer (7 endpoints)
• GET /designer/templates/: List Templates
• POST /designer/templates/: Create Template
• POST /designer/templates/preview: Preview
• DELETE /designer/templates/{id}: Delete
• GET /designer/templates/{id}: List Templates
• PUT /designer/templates/{id}: Update Template
• POST /designer/templates/{id}/generate: Generate Pdf

🔧 Templates (7 endpoints)
• GET /templates: List
• POST /templates: Create
• DELETE /templates/{id}: Delete
• GET /templates/{id}: Get Template
• PUT /templates/{id}: Update
• GET /templates/{id}/file: Get File
• POST /templates/{id}/fill: Fill

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native doqs.dev | PDF filling API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
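As with the other parameters, the path and body of POST /templates/{id}/fill can be delegated to the agent via $fromAI(); the sketch below is illustrative, and the body shape (a data object of field values) is an assumption rather than the node's confirmed configuration.

```javascript
// Hedged sketch: $fromAI() placeholders for POST /templates/{id}/fill in the HTTP Request node.
// In the node UI, each expression sits inside {{ ... }}; the "data" wrapper is an assumed body shape.
const fillUrl =
  "https://api.doqs.dev/v1/templates/{{ $fromAI('templateId', 'ID of the PDF template to fill', 'string') }}/fill";

const fillBody = `{
  "data": {{ $fromAI('fields', 'JSON object mapping PDF field names to values', 'json') }}
}`;
```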
by Corentin Ribeyre
This template can be used for real-time listening and processing of search results with Icypeas. Be sure to have an active account to use this template.

How it works
This workflow can be divided into two steps:
1. A Webhook node to link your Icypeas account with your n8n workflow.
2. A Set node to retrieve the relevant information.

Set up steps
You will need a working Icypeas account to run the workflow, and you will have to paste the production URL provided by the n8n Webhook node.
by Le Nguyen
PDF Invoice Extractor (AI)

End-to-end pipeline: Watch Drive ➜ Download PDF ➜ OCR text ➜ AI normalize to JSON ➜ Upsert Buyer (Account) ➜ Create Opportunity ➜ Map Products ➜ Create OLI via Composite API ➜ Archive to OneDrive.

Node by node (what it does & key setup)

1) Google Drive Trigger
- **Purpose**: Fire when a new file appears in a specific Google Drive folder.
- **Key settings**: Event: fileCreated; Folder ID: google drive folder id; Polling: everyMinute; Creds: googleDriveOAuth2Api
- **Output**: Metadata { id, name, ... } for the new file.

2) Download File From Google
- **Purpose**: Get the file binary for processing and archiving.
- **Key settings**: Operation: download; File ID: ={{ $json.id }}; Creds: googleDriveOAuth2Api
- **Output**: Binary (default key: data) and original metadata.

3) Extract from File
- **Purpose**: Extract text from the PDF (OCR as needed) for AI parsing.
- **Key settings**: Operation: pdf; OCR: enable for scanned PDFs (in options)
- **Output**: JSON with OCR text at {{ $json.text }}.

4) Message a model (AI JSON Extractor)
- **Purpose**: Convert OCR text into a **strict normalized JSON array** (invoice schema).
- **Key settings**: Node: @n8n/n8n-nodes-langchain.openAi; Model: gpt-4.1 (or gpt-4.1-mini); Message role: system (the strict prompt; references {{ $json.text }}); jsonOutput: true; Creds: openAiApi
- **Output (per item)**: $.message.content → the parsed JSON (ensure it's an array).

5) Create or update an account (Salesforce)
- **Purpose**: Upsert the **Buyer** as an Account using an external ID.
- **Key settings**: Resource: account; Operation: upsert; External Id Field: tax_id__c; External Id Value: ={{ $json.message.content.buyer.tax_id }}; Name: ={{ $json.message.content.buyer.name }}; Creds: salesforceOAuth2Api
- **Output**: Account record (captures Id) for the downstream Opportunity.

6) Create an opportunity (Salesforce)
- **Purpose**: Create an Opportunity linked to the Buyer (Account).
- **Key settings**: Resource: opportunity; Name: ={{ $('Message a model').item.json.message.content.invoice.code }}; Close Date: ={{ $('Message a model').item.json.message.content.invoice.issue_date }}; Stage: Closed Won; Amount: ={{ $('Message a model').item.json.message.content.summary.grand_total }}; AccountId: ={{ $json.id }} (from the Upsert Account output); Creds: salesforceOAuth2Api
- **Output**: Opportunity Id for OLI creation.

7) Build SOQL (Code / JS)
- **Purpose**: Collect unique product **codes** from the AI JSON and build a SOQL query for PricebookEntry by Pricebook2Id (a hedged sketch of this node follows at the end of this description).
- **Key settings**: pricebook2Id (hardcoded in script): e.g., 01sxxxxxxxxxxxxxxx; Source lines: $('Message a model').first().json.message.content.products
- **Output**: { soql, codes }

8) Query PricebookEntries (Salesforce)
- **Purpose**: Fetch PricebookEntry.Id for each Product2.ProductCode.
- **Key settings**: Resource: search; Query: ={{ $json.soql }}; Creds: salesforceOAuth2Api
- **Output**: Items with Id, Product2.ProductCode (used for mapping).

9) Code in JavaScript (Build OLI payloads)
- **Purpose**: Join lines with the PBE results and Opportunity Id ➜ build **OpportunityLineItem** payloads.
- **Inputs**: OpportunityId: ={{ $('Create an opportunity').first().json.id }}; Lines: ={{ $('Message a model').first().json.message.content.products }}; PBE rows: from the previous node's items
- **Output**: { body: { allOrNone:false, records:[{ OpportunityLineItem... }] } }
- **Notes**: Converts discount_total ➜ per-unit if needed (currently commented out for standard pricing). Throws on missing PBE mapping or empty lines.

10) Create Opportunity Line Items (HTTP Request)
- **Purpose**: Bulk create OLIs via the Salesforce Composite API.
- **Key settings**: Method: POST; URL: https://<your-instance>.my.salesforce.com/services/data/v65.0/composite/sobjects; Auth: salesforceOAuth2Api (predefined credential); Body (JSON): ={{ $json.body }}
- **Output**: Composite API results (per-record statuses).

11) Update File to One Drive
- **Purpose**: Archive the **original PDF** in OneDrive.
- **Key settings**: Operation: upload; File Name: ={{ $json.name }}; Parent Folder ID: onedrive folder id; Binary Data: true (from the Download node); Creds: microsoftOneDriveOAuth2Api
- **Output**: Uploaded file metadata.

Data flow (wiring)
- Google Drive Trigger → Download File From Google
- Download File From Google → Extract from File → Update File to One Drive
- Extract from File → Message a model
- Message a model → Create or update an account
- Create or update an account → Create an opportunity
- Create an opportunity → Build SOQL
- Build SOQL → Query PricebookEntries
- Query PricebookEntries → Code in JavaScript
- Code in JavaScript → Create Opportunity Line Items

Quick setup checklist
- 🔐 Credentials: Connect Google Drive, OneDrive, Salesforce, OpenAI.
- 📂 IDs: Drive Folder ID (watch); OneDrive Parent Folder ID (archive); Salesforce Pricebook2Id (in the JS SOQL builder)
- 🧠 AI Prompt: Use the strict system prompt; jsonOutput = true.
- 🧾 Field mappings: Buyer tax id/name → Account upsert fields; Invoice code/date/amount → Opportunity fields; Product name must equal your Product2.ProductCode in SF.
- ✅ Test: Drop a sample PDF → verify: AI returns array JSON only; Account/Opportunity created; OLI records created; PDF archived to OneDrive.

Notes & best practices
- If PDFs are scans, enable OCR in Extract from File.
- If the AI returns non-JSON, keep "Return only a JSON array" as the last line of the prompt and keep jsonOutput enabled.
- Consider adding validation on parsing.warnings to gate Salesforce writes.
- For discounts/taxes in OLI: standard OLI fields don't support per-line discount amounts directly; model them in UnitPrice or custom fields.
- Replace the Composite API URL with your org's domain, or use the Salesforce node's Bulk Upsert for simplicity.
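To make the Build SOQL step concrete, here is a minimal sketch of what that Code node could contain, based on the inputs and output described above ($('Message a model')… products, a hard-coded Pricebook2Id, and a { soql, codes } result). The exact field used as the product code (p.name below) is an assumption drawn from the field-mapping note.

```javascript
// Minimal sketch of the "Build SOQL" Code node ("Run Once for All Items").
// Assumption: each product line's `name` equals the Salesforce Product2.ProductCode (see field mappings above).
const pricebook2Id = '01sxxxxxxxxxxxxxxx'; // replace with your org's Pricebook2 Id

const products = $('Message a model').first().json.message.content.products || [];
if (!products.length) throw new Error('No product lines found in the AI output');

const codes = [...new Set(products.map(p => String(p.name)))];
const inList = codes.map(c => `'${c.replace(/'/g, "\\'")}'`).join(', ');

const soql =
  `SELECT Id, Product2.ProductCode FROM PricebookEntry ` +
  `WHERE Pricebook2Id = '${pricebook2Id}' AND Product2.ProductCode IN (${inList})`;

return [{ json: { soql, codes } }];
```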
by Alexandra Spalato
**Who's it for**
This workflow is for community builders, marketers, consultants, coaches, and thought leaders who want to grow their presence in Skool communities through strategic, value-driven engagement. It's especially useful for professionals who want to:
- Build authority in their niche by providing helpful insights
- Scale their community engagement without spending hours manually browsing posts
- Identify high-value conversation opportunities that align with their expertise
- Maintain an authentic, helpful presence across multiple Skool communities

**What problem is this workflow solving**
Many professionals struggle to consistently engage meaningfully in online communities due to:
- **Time constraints**: Manually browsing multiple communities daily is time-consuming
- **Missed opportunities**: Important discussions happen when you're not online
- **Inconsistent engagement**: Sporadic participation reduces visibility and relationship building
- **Generic responses**: Quick replies often lack the depth needed to showcase expertise
This workflow solves these problems by automatically monitoring your target Skool communities, using AI to identify posts where your expertise could add genuine value, generating thoughtful contextual comment suggestions, and organizing opportunities for efficient manual review and engagement.

**How it works**
- Scheduled Community Monitoring: Runs daily at 7 PM to scan your configured Skool communities for new posts and discussions from the last 24 hours.
- Intelligent Configuration Management: Pulls settings from Airtable, including target communities, your domain expertise, and preferred tools. You can add several configurations; only active configurations are processed, and multiple community URLs are handled efficiently.
- Comprehensive Data Extraction: Uses the Apify Skool Scraper to collect post content and metadata, comments over 50 characters (quality filter), and direct links for easy access.
- AI-Powered Opportunity Analysis: Leverages OpenAI GPT-4.1 to analyze each post for engagement opportunities based on your expertise, identify specific trigger sentences that indicate a need you can address, generate contextual, helpful comment suggestions, and maintain an authentic tone without being promotional.
- Smart Filtering and Organization: Only surfaces genuine opportunities where you can add value, stores results in Airtable with detailed reasoning, provides suggested comments ready for review and posting, and tracks engagement history to avoid duplicate responses.
- Quality Control and Review: All opportunities are saved to Airtable, where you can review the AI reasoning and suggested responses, edit comments before posting, track which opportunities you've acted on, and monitor success patterns over time.

**How to set up**
Required credentials
- **OpenAI API key** – for GPT-4.1 powered opportunity analysis
- **Airtable Personal Access Token** – for configuration and results storage
- **Apify API token** – for Skool community scraping

Airtable base setup
Create an Airtable base with two tables:
Config Table (config):
- Name (Single line text): Your configuration name
- Skool URLs (Long text): Comma-separated list of Skool community URLs
- cookies (Long text): Your Skool session cookies for authenticated access
- Domain of Activity (Single line text): Your area of expertise (e.g., "AI automation", "Digital marketing")
- Tools Used (Single line text): Your preferred tools to recommend (e.g., "n8n", "Zapier")
- active (Checkbox): Whether this configuration is currently active
Results Table (Table 1):
- title (Single line text): Post title/author
- url (URL): Direct link to the post
- reason (Long text): AI's reasoning for the opportunity
- trigger (Long text): Specific sentence that triggered the opportunity
- suggested answer (Long text): AI-generated comment suggestion
- config (Link to another record): Reference to the config used
- date (Date): When the opportunity was found
- Select (Single select): Status tracking (not commented/commented)

Skool cookies setup
To access private Skool communities, you'll need to:
1. Install Cookie Editor: Go to the Chrome Web Store and install the "Cookie Editor" extension
2. Login to Skool: Navigate to any Skool community you want to monitor and log in
3. Open Cookie Editor: Click the Cookie Editor extension icon in your browser toolbar
4. Export cookies: Click the "Export" button in the extension and copy the exported text
5. Add to Airtable: Paste the cookie string into the cookies field in your Airtable config

Trigger configuration
- Ensure the Schedule Trigger is set to your preferred monitoring time
- Default is 7 PM daily, but adjust based on your target communities' peak activity

**Requirements**
- **Self-hosted n8n or n8n Cloud account**
- **Active Skool community memberships** – you must be a legitimate member of communities you want to monitor
- **OpenAI API credits**
- **Apify subscription** – for reliable Skool data scraping (free tier available)
- **Airtable account** – free tier sufficient for most use cases

**How to customize the workflow**
- Modify AI analysis criteria: Edit the EvaluateOpportunities And Generate Comments node to adjust the opportunity detection sensitivity, modify the comment tone and style, or add industry-specific keywords or phrases.
- Change monitoring frequency: Update the Schedule Trigger to multiple times per day for highly active communities, weekly for slower-moving professional groups, or custom intervals based on community activity patterns.
- Customize data collection: Modify the Apify scraper settings to adjust the time window (currently 24 hours), change the comment length filter (currently >50 characters), include/exclude media content, or modify the number of comments per post (a hedged sketch of this filtering follows at the end of this description).
- Add additional filters: Insert filter nodes to skip posts from specific users, focus on posts with minimum engagement levels, exclude certain post types or keywords, or prioritize posts from influential community members.
- Enhance output options: Add nodes after Record Results to send Slack/Discord notifications for high-priority opportunities, create calendar events for engagement tasks, export daily summaries to Google Sheets, or integrate with CRM systems for lead tracking.

**Example outputs**
Opportunity analysis result
```json
{
  "opportunity": true,
  "reason": "The user is struggling with manual social media management tasks that could be automated using n8n workflows.",
  "trigger_sentence": "I'm spending 3+ hours daily just scheduling posts and responding to comments across all my social accounts.",
  "suggested_comment": "That sounds exhausting! Have you considered setting up automation workflows? Tools like n8n can handle the scheduling and even help with response suggestions, potentially saving you 80% of that time. The initial setup takes a day but pays dividends long-term."
}
```
Airtable record example
- Title: "Sarah Johnson - Social Media Burnout"
- URL: https://www.skool.com/community/post/123456
- Reason: "User expressing pain point with manual social media management - perfect fit for automation solutions"
- Trigger: "I'm spending 3+ hours daily just scheduling posts..."
- Suggested Answer: "That sounds exhausting! Have you considered setting up automation workflows?..."
- Config: [Your Config Name]
- Date: 2024-12-09 19:00:00
- Status: "not commented"

**Best practices**
Authentic engagement
- Always review and personalize AI suggestions before posting
- Focus on being genuinely helpful rather than promotional
- Share experiences and ask follow-up questions
- Engage in subsequent conversation when people respond
Community guidelines
- Respect each community's rules and culture
- Avoid over-promotion of your tools or services
- Build relationships before introducing solutions
- Contribute value consistently, not just when selling
Optimization tips
- Monitor which types of opportunities convert best
- A/B test different comment styles and approaches
- Track engagement metrics on your actual comments
- Adjust AI prompts based on community feedback
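If you want to see roughly what the data-collection filtering looks like, here is a hedged sketch of a Code node applying the 24-hour window and the >50-character comment filter; the field names (createdAt, comments[].text) are illustrative and depend on the Apify Skool Scraper output.

```javascript
// Hedged sketch: pre-AI filtering of scraped Skool posts ("Run Once for All Items").
// Field names (createdAt, comments[].text) are illustrative — match them to the Apify scraper output.
const DAY_MS = 24 * 60 * 60 * 1000;
const MIN_COMMENT_LENGTH = 50;

return $input.all()
  .filter(({ json }) => new Date(json.createdAt).getTime() > Date.now() - DAY_MS) // last 24 h only
  .map(({ json }) => ({
    json: {
      ...json,
      comments: (json.comments || []).filter(c => (c.text || "").length > MIN_COMMENT_LENGTH),
    },
  }));
```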