by Bao Duy Nguyen
**Who is this for?**
Anyone who hates spam emails or doesn't want to clean them up manually.

**What problem is this workflow solving?**
It automatically deletes spam emails for you, so you don't have to, saving you a bit of time.

**What this workflow does**
You can specify when to execute the workflow: daily, weekly, or monthly.

**Setup**
- Select the preferred execution time
- Configure credentials for Gmail OAuth2

**How to customize this workflow to your needs**
There's no need. It just works!
by Matthew
**Automated Cold Email Personalization**

This workflow automates the creation of highly personalized cold outreach emails by extracting lead data, scraping company websites, and leveraging AI to craft unique email components. This is ideal for sales teams, marketers, and business development professionals looking to scale their outreach efforts while maintaining a high degree of personalization.

**How It Works**
1. Generate Batches: The workflow starts by generating a sequence of numbers, defining how many leads to process in batches.
2. Scrape Lead Data: It uses an external API (Apify) to pull comprehensive lead information, including contact details, company data, and social media links.
3. Fetch Client Data: The workflow then retrieves relevant client details from your Google Sheet based on the scraped data.
4. Scrape Company Website: The lead's company website is automatically scraped to gather content for personalization.
5. Summarize Prospect Data: An OpenAI model analyzes both the scraped website content and the individual's profile data to create concise summaries and identify unique angles for outreach.
6. Craft Personalized Email: A more advanced OpenAI model uses these summaries and specific instructions to generate the "icebreaker," "intro," and "value proposition" components of a personalized cold email.
7. Update Google Sheet: Finally, these generated email components are saved back into your Google Sheet, enriching your lead records for future outreach.

**Google Sheet Structure**
Your Google Sheet must have the following exact column headers to ensure proper data flow:
- **Email** (unique identifier for each lead)
- **Full Name**
- **Headline**
- **LinkdIn**
- **cityName**
- **stateName**
- **company/cityName**
- **Country**
- **Company Name**
- **Website**
- **company/businessIndustry**
- **Keywords**
- **icebreaker** (will be populated by the workflow)
- **intro** (will be populated by the workflow)
- **value_prop** (will be populated by the workflow)

**Setup Instructions**
1. Add Credentials: In n8n, add your OpenAI API key via the Credentials menu. Connect your Google account via the Credentials menu for Google Sheets access. You will also need an Apify API key for the Scraper node.
2. Configure Google Sheets Nodes: Select the Client data and Add email data to sheet nodes. For each, choose your Google Sheets credential, select your spreadsheet, and the specific sheet name. Ensure all column mappings are correct according to the "Google Sheet Structure" section above.
3. Configure Apify Scraper Node: Select the Scraper node. Update the Authorization header with your Apify API token (Bearer KEY). In the JSON Body, set the searchUrl to your Apollo link (or equivalent source URL for lead data).
4. Configure OpenAI Nodes: Select both Summarising prospect data and Creating detailed email nodes. Choose your OpenAI credential from the dropdown. In the Creating detailed email node's prompt, replace PUT YOUR COMPANY INFO HERE with your company's context and verify the target sector for the email generation.
5. Verify Update Node: On the final Add email data to sheet node, ensure the Operation is set to Append Or Update and the Matching Columns field is set to Email.

**Customization Options**
- 💡 **Trigger**: Change the When clicking 'Execute workflow' node to an automatic trigger, such as a Cron node for daily runs, or a Google Sheets trigger when new rows are added.
- **Lead Generation**: Modify the Code node to change the number of leads processed per run (currently set to 50); see the sketch after this list.
- **Scraping Logic**: Adjust the Scraper node's parameters (e.g., count) or replace the Apify integration with another data source if needed.
- **AI Prompting**: Experiment with the prompts in the Summarising prospect data and Creating detailed email OpenAI nodes to refine the tone, style, length, or content focus of the generated summaries and emails.
- **AI Models**: Test different OpenAI models (e.g., gpt-3.5-turbo, gpt-4o) in the OpenAI nodes to find the optimal balance between cost, speed, and output quality.
- **Data Source/CRM**: Replace the Google Sheets nodes with integrations for your preferred CRM (e.g., HubSpot, Salesforce) or a database (e.g., PostgreSQL, Airtable) to manage your leads.
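The batch-generation Code node itself is not reproduced in this description, but a minimal sketch of what such a node might contain, assuming it simply emits one numbered item per lead (the `leadsPerRun` variable name is illustrative, not the template's exact code), looks like this:

```javascript
// n8n Code node (JavaScript): illustrative sketch only.
// Emits one item per lead index so downstream nodes can process leads in batches.
const leadsPerRun = 50; // change this number to process more or fewer leads per run

const output = [];
for (let i = 1; i <= leadsPerRun; i++) {
  output.push({ json: { leadNumber: i } });
}

return output;
```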
by Incrementors
**Financial Insight Automation: Market Cap to Telegram via Bright Data**

📊 **Description**
An automated n8n workflow that scrapes financial data from Yahoo Finance using Bright Data, processes market cap information, generates visual charts, and sends comprehensive financial insights directly to Telegram for instant notifications.

🚀 **How It Works**
This workflow operates through a simple three-zone process:
1. **Data Input & Trigger**: The user submits a keyword (e.g., "AI", "Crypto", "MSFT") through a form trigger that initiates the financial data collection process.
2. **Data Scraping & Processing**: The Bright Data API discovers and scrapes comprehensive financial data from Yahoo Finance, including market cap, stock prices, company profiles, and financial metrics.
3. **Visualization & Delivery**: The system generates interactive market cap charts, saves data to Google Sheets for record-keeping, and sends visual insights to Telegram as PNG images.

⚡ **Setup Steps**
> ⏱️ Estimated Setup Time: 15-20 minutes

Prerequisites:
- Active n8n instance (self-hosted or cloud)
- Bright Data account with Yahoo Finance dataset access
- Google account for Sheets integration
- Telegram bot token and chat ID

Step 1: Import the Workflow
- Copy the provided JSON workflow code
- In n8n: Go to Workflows → + Add workflow → Import from JSON
- Paste the JSON content and click Import

Step 2: Configure Bright Data Integration
- In n8n: Navigate to Credentials → + Add credential → HTTP Header Auth
- Add an Authorization header with the value: Bearer BRIGHT_DATA_API_KEY
- Replace BRIGHT_DATA_API_KEY with your actual API key
- Test the connection to ensure it works properly

> Note: The workflow uses dataset ID gd_lmrpz3vxmz972ghd7 for Yahoo Finance data. Ensure you have access to this dataset in your Bright Data dashboard.

Step 3: Set up Google Sheets Integration
- Create a Google Sheet: go to Google Sheets and create a new spreadsheet, name it "Financial Data Tracker" or similar, and copy the Sheet ID from the URL
- Configure Google Sheets credentials: in n8n, Credentials → + Add credential → Google Sheets OAuth2 API, then complete OAuth setup and test the connection
- Update the workflow: open the "📊 Filtered Output & Save to Sheet" node, replace YOUR_SHEET_ID with your actual Sheet ID, and select your Google Sheets credential

Step 4: Configure Telegram Bot
- Create a Telegram bot using @BotFather and get your bot token and chat ID
- In n8n: Credentials → + Add credential → Telegram API, then enter your bot token
- Update the "📤 Send Chart on Telegram" node with your chat ID: replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID

Step 5: Test and Activate
- Test the workflow: use the form trigger with a test keyword (e.g., "AAPL"), monitor the execution in n8n, verify data appears in Google Sheets, and check for chart delivery on Telegram
- Activate the workflow: turn on the workflow using the toggle switch; the form trigger will be accessible via the provided webhook URL

📋 **Key Features**
- 🔍 Keyword-Based Discovery: Search companies by keyword, ticker, or industry
- 💰 Comprehensive Financial Data: Market cap, stock prices, earnings, and company profiles
- 📊 Visual Charts: Automatic generation of market cap comparison charts
- 📱 Telegram Integration: Instant delivery of insights to your mobile device
- 💾 Data Storage: Automatic backup to Google Sheets for historical tracking
- ⚡ Real-time Processing: Fast data retrieval and processing with Bright Data

📊 **Output Data Points**

| Field | Description | Example |
|-------|-------------|---------|
| Company Name | Full company name | "Apple Inc." |
| Stock Ticker | Trading symbol | "AAPL" |
| Market Cap | Total market capitalization | "$2.89T" |
| Current Price | Latest stock price | "$189.25" |
| Exchange | Stock exchange | "NASDAQ" |
| Sector | Business sector | "Technology" |
| PE Ratio | Price to earnings ratio | "28.45" |
| 52 Week Range | Annual high and low prices | "$164.08 - $199.62" |

A small sketch showing how market cap strings such as "$2.89T" can be parsed for charting appears at the end of this description.

🔧 **Troubleshooting**

Common Issues:
- Bright Data Connection Failed: Verify your API key is correct and active, check dataset permissions in the Bright Data dashboard, and ensure you have sufficient credits
- Google Sheets Permission Denied: Re-authenticate Google Sheets OAuth, verify sheet sharing settings, and check if the Sheet ID is correct
- Telegram Not Receiving Messages: Verify the bot token and chat ID, check if the bot is added to the chat, and test the Telegram credentials manually

Performance Tips:
- Use specific keywords for better data accuracy
- Monitor Bright Data usage to control costs
- Set up error handling for failed requests
- Consider rate limiting for high-volume usage

🎯 **Use Cases**
- **Investment Research:** Quick financial analysis of companies and sectors
- **Market Monitoring:** Track market cap changes and stock performance
- **Competitive Analysis:** Compare financial metrics across companies
- **Portfolio Management:** Monitor holdings and potential investments
- **Financial Reporting:** Generate automated financial insights for teams

🔗 **Additional Resources**
- n8n Documentation
- Bright Data Datasets
- Google Sheets API
- Telegram Bot API

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
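The workflow's own chart-preparation logic is part of the template; purely as an illustration of how market cap strings like those in the table above could be turned into sortable numbers before charting, a Code-node sketch might look like this (the `marketCap` field name is an assumption):

```javascript
// Illustrative n8n Code node sketch (not part of the original template):
// converts market cap strings such as "$2.89T" into numbers so companies
// can be sorted before charting.
const MULTIPLIERS = { T: 1e12, B: 1e9, M: 1e6, K: 1e3 };

function parseMarketCap(text) {
  const match = /^\$?([\d.]+)\s*([TBMK])?$/i.exec((text || '').trim());
  if (!match) return null;
  const value = parseFloat(match[1]);
  const unit = (match[2] || '').toUpperCase();
  return unit ? value * MULTIPLIERS[unit] : value;
}

return items
  .map(item => ({
    json: {
      ...item.json,
      marketCapValue: parseMarketCap(item.json.marketCap), // field name assumed
    },
  }))
  .sort((a, b) => (b.json.marketCapValue || 0) - (a.json.marketCapValue || 0));
```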
by Lucas Peyrin
**How it works**
This workflow is an interactive, hands-on tutorial designed to teach you the absolute basics of JSON (JavaScript Object Notation) and, more importantly, how to use it within n8n. It's perfect for beginners who are new to automation and data structures.

The tutorial is structured as a series of simple steps. Each node introduces a new, fundamental concept of JSON:
- **Key/Value Pairs**: The basic building block of all JSON.
- **Data Types**: It then walks you through the most common data types one by one:
  - String (text)
  - Number (integers and decimals)
  - Boolean (true or false)
  - Null (representing "nothing")
  - Array (an ordered list of items)
  - Object (a collection of key/value pairs)
- **Using JSON with Expressions**: The most important step! It shows you how to dynamically pull data from a previous node into a new one using n8n's expressions ({{ }}).
- **Final Exam**: A final node puts everything together, building a complete JSON object by referencing data from all the previous steps.

Each node has a detailed sticky note explaining the concept in simple terms. A small illustrative example of the kind of JSON the tutorial covers appears at the end of this description.

**Set up steps**
Setup time: 0 minutes! This is a tutorial workflow, so there is no setup required.
1. Simply click the "Execute Workflow" button to run it.
2. Follow the instructions in the main sticky note: click on each node in order, from top to bottom.
3. For each node, observe the output in the right-hand panel and read the sticky note next to it to understand what you're seeing.

By the end, you'll have a solid understanding of what JSON is and how to work with it in your own n8n workflows.
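For reference, here is a minimal example (not taken from the workflow itself) that combines every data type the tutorial covers in one object: a string (`name`), a number (`version`), a boolean (`isActive`), null (`deprecatedOn`), an array (`tags`), and a nested object (`author`):

```json
{
  "name": "n8n tutorial",
  "version": 1.5,
  "isActive": true,
  "deprecatedOn": null,
  "tags": ["json", "basics"],
  "author": { "firstName": "Lucas" }
}
```

In n8n, an expression such as `{{ $json.author.firstName }}` would resolve to "Lucas" when this object is a node's input item, which is exactly the kind of lookup the "Using JSON with Expressions" step demonstrates.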
by Le Nguyen
This template implements a recursive web crawler inside n8n. Starting from a given URL, it crawls linked pages up to a maximum depth (default: 3), extracts text and links, and returns the collected content via webhook.

🚀 **How It Works**
1) Webhook Trigger: Accepts a JSON body with a url field. Example payload: `{ "url": "https://example.com" }`
2) Initialization: Sets crawl parameters: url, domain, maxDepth = 3, and depth = 0. Initializes global static data (pending, visited, queued, pages).
3) Recursive Crawling: Fetches each page (HTTP Request), extracts body text and links (HTML node), cleans and deduplicates links, and filters out:
   - External domains (only same-site is followed)
   - Anchors (#), mailto/tel/javascript links
   - Non-HTML files (.pdf, .docx, .xlsx, .pptx)
4) Depth Control & Queue: Tracks visited URLs, stops at maxDepth to prevent infinite loops, and uses SplitInBatches to loop the queue.
5) Data Collection: Saves each crawled page (url, depth, content) into pages[]. When pending = 0, combines results.
6) Output: Responds via the Webhook node with combinedContent (all pages concatenated) and pages[] (array of individual results). Large results are chunked when exceeding ~12,000 characters.

🛠️ **Setup Instructions**
1) Import Template: Load from n8n Community Templates.
2) Configure Webhook: Open the Webhook node and copy the Test URL (development) or Production URL (after deploy). You’ll POST crawl requests to this endpoint.
3) Run a Test: Send a POST with JSON:
```
curl -X POST https://<your-n8n>/webhook/<id> \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```
4) View Response: The crawler returns a JSON object containing combinedContent and pages[].

⚙️ **Configuration**
- **maxDepth**: Default: 3. Adjust in the Init Crawl Params (Set) node.
- **Timeouts**: The HTTP Request node timeout is 5 seconds per request; increase if needed.
- **Filtering Rules**: Only same-domain links are followed (apex and www treated as same-site). Skips anchors, mailto:, tel:, javascript:, and document links (.pdf, .docx, .xlsx, .pptx). You can tweak the regex and logic in the Queue & Dedup Links (Code) node; a simplified sketch of this kind of filtering appears at the end of this description.

📌 **Limitations**
- No JavaScript rendering (static HTML only)
- No authentication/cookies/session handling
- Large sites can be slow or hit timeouts; chunking mitigates response size

✅ **Example Use Cases**
- Extract text across your site for AI ingestion / embeddings
- SEO/content audit and internal link checks
- Build a lightweight page corpus for downstream processing in n8n

⏱️ **Estimated Setup Time**
~10 minutes (import → set webhook → test request)
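The template ships with its own Queue & Dedup Links Code node; as a rough illustration of the filtering rules described above, a much-simplified version might look like this (the `link`, `domain`, and `depth` field names are assumptions, not the template's exact code):

```javascript
// Simplified sketch of same-site link filtering and deduplication.
// Assumes each incoming item has json.link (absolute or relative URL),
// json.domain (e.g. "example.com"), and json.depth.
const SKIP_EXTENSIONS = /\.(pdf|docx|xlsx|pptx)(\?.*)?$/i;
const SKIP_SCHEMES = /^(mailto:|tel:|javascript:|#)/i;

const seen = new Set();
const output = [];

for (const item of items) {
  const { link, domain, depth } = item.json;
  if (!link || SKIP_SCHEMES.test(link)) continue;

  let url;
  try {
    url = new URL(link, `https://${domain}`); // resolve relative links
  } catch (e) {
    continue; // malformed URL
  }

  // Treat apex and www as the same site.
  const host = url.hostname.replace(/^www\./, '');
  const site = domain.replace(/^www\./, '');
  if (host !== site) continue;                      // external domain
  if (SKIP_EXTENSIONS.test(url.pathname)) continue; // non-HTML document

  url.hash = '';                      // drop #anchors
  const normalized = url.toString();
  if (seen.has(normalized)) continue; // dedupe
  seen.add(normalized);

  output.push({ json: { url: normalized, depth: (depth ?? 0) + 1 } });
}

return output;
```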
by Václav Čikl
**Overview**
This workflow automates the entire process of creating professional subtitle (.SRT) and synced lyrics (.LRC) files from audio recordings. Upload your vocal track, let Whisper AI transcribe it with precise timestamps, and GPT-5-nano segments it into natural, singable lyric lines. With an optional quality control step, you can manually refine the output while maintaining perfect timestamp alignment.

**Key Features**
- **Whisper AI Transcription**: Word-level timestamps with multi-language support via ISO codes
- **Intelligent Segmentation**: GPT-5-nano formats transcriptions into natural lyric lines (2-8 words per line)
- **Quality Control Option**: Download, edit, and re-upload corrections with smart timestamp matching
- **Advanced Alignment**: Levenshtein distance algorithm preserves timestamps during manual edits
- **Dual Format Export**: Generate both .SRT (video subtitles) and .LRC (synced lyrics) files
- **No Storage Needed**: Files generated in-memory for instant download
- **Multi-Language**: Supports various languages through the Whisper API

**Use Cases**
- Generate synced lyrics for music video releases on YouTube
- Create .LRC files for Musixmatch, Apple Music, and Spotify
- Prepare professional subtitles for social media content
- Batch process subtitle files for catalog releases
- Maintain consistent lyric formatting across artists
- Streamline content delivery for streaming platforms
- Speed up your video editing workflow

**Perfect For**
- Musicians & Artists
- Record Labels
- Content Creators

**What You'll Need**
Required setup:
- **OpenAI API Key** for Whisper transcription and GPT-5-nano segmentation

Recommended:
- **Input Format**: MP3 audio files (max 25MB)
- **Content**: Clean vocal tracks work best (isolated vocals recommended, but whole tracks still work well)
- **Languages**: Any language supported by Whisper (specify via ISO code)

**How It Works**

Automatic Mode (No Quality Check):
1. Upload your MP3 vocal track to the workflow
2. Transcription: Whisper AI processes audio with word-level timestamps
3. Segmentation: GPT-5-nano formats text into natural lyric lines
4. Generation: The workflow creates .SRT and .LRC files
5. Download your ready-to-use subtitle files

Manual Quality Control Mode:
1. Upload your MP3 vocal track and enable quality check
2. Transcription: Whisper AI processes audio with timestamps
3. Initial Segmentation: GPT-5-nano creates a first draft
4. Download the .TXT file for review
5. Edit lyrics in any text editor (keep the line structure intact)
6. Re-upload the corrected .TXT file
7. Smart Matching: An advanced diff algorithm aligns changes with the original timestamps
8. Download the final .SRT and .LRC files with perfect timing

**Technical Details**
- **Transcription API**: OpenAI Whisper (/v1/audio/transcriptions)
- **Segmentation Model**: GPT-5-nano with a custom lyric-focused prompt
- **System Prompt**: "You are helping with preparing song lyrics for musicians. Take the following transcription and split it into lyric-like lines. Keep lines short (2–8 words), natural for singing/rap phrasing, and do not change the wording."
- **Timestamp Matching**: Levenshtein distance + alignment algorithm
- **File Size Limit**: 25MB (n8n platform default)
- **Processing**: All in-memory, no disk storage
- **Cost**: Based on Whisper API usage (varies with audio length)

**Output Formats**

.SRT (SubRip Subtitle), the standard format for:
- YouTube video subtitles
- Video editing software (Premiere, DaVinci Resolve, etc.)
- Media players (VLC, etc.)

.LRC (Lyric File), the synced lyrics format for:
- Musixmatch
- Apple Music
- Spotify
- Music streaming services
- Audio players with lyrics display

A simplified sketch of how word-level timestamps become .SRT and .LRC lines appears at the end of this description.

**Pro Tips**
💡 For best results:
- Use isolated vocal tracks when possible (remove instrumentals)
- Ensure clear recordings with minimal background noise
- For quality check edits, only modify the text content; don't change line breaks
- Test with shorter tracks first to optimize your workflow

⚙️ Customization options:
- Adjust GPT segmentation style by modifying the system prompt
- Add language detection or force specific languages in Whisper settings
- Customize output file naming conventions in the final nodes
- Extend the workflow with additional format exports if needed

**Workflow Components**
1. Audio Input: Upload interface for MP3 files
2. Whisper Transcribe: OpenAI API call with timestamp extraction
3. Post-Processing: GPT-5-nano segmentation into lyric format
4. Routing Quality Check: Decision point for manual review
5. Timestamp Matching: Diff and alignment for corrected text
6. Subtitles Preparation: JSON formatting for both output types
7. File Generation: Convert to .SRT and .LRC formats
8. Download Nodes: Export final files

**Template Author**
Questions or need help with setup?
📧 Email: xciklv@gmail.com
💼 LinkedIn: https://www.linkedin.com/in/vaclavcikl/
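The workflow's own conversion nodes handle file generation; the following is only a rough sketch, assuming Whisper word objects of the form `{ word, start, end }` (times in seconds) already grouped into a line, of how one lyric line could be rendered as an .SRT cue and an .LRC entry:

```javascript
// Illustrative sketch only: render one lyric line as SRT and LRC.
// Assumes `line` = { words: [{ word, start, end }, ...] }, times in seconds.

function toSrtTime(sec) {
  const ms = Math.round(sec * 1000);
  const h = String(Math.floor(ms / 3600000)).padStart(2, '0');
  const m = String(Math.floor((ms % 3600000) / 60000)).padStart(2, '0');
  const s = String(Math.floor((ms % 60000) / 1000)).padStart(2, '0');
  return `${h}:${m}:${s},${String(ms % 1000).padStart(3, '0')}`;
}

function toLrcTime(sec) {
  const m = String(Math.floor(sec / 60)).padStart(2, '0');
  const s = (sec % 60).toFixed(2).padStart(5, '0');
  return `[${m}:${s}]`;
}

function renderLine(line, index) {
  const text = line.words.map(w => w.word).join(' ');
  const start = line.words[0].start;
  const end = line.words[line.words.length - 1].end;
  const srt = `${index + 1}\n${toSrtTime(start)} --> ${toSrtTime(end)}\n${text}\n`;
  const lrc = `${toLrcTime(start)}${text}`;
  return { srt, lrc };
}

// Example: renderLine({ words: [{ word: 'Hello', start: 1.2, end: 1.6 }] }, 0)
// -> SRT cue "1\n00:00:01,200 --> 00:00:01,600\nHello\n" and LRC line "[00:01.20]Hello"
```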
by vinci-king-01
**Certification Requirement Tracker with Rocket.Chat and GitLab**

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes certification-issuing bodies once a year, detects any changes in certification or renewal requirements, creates a GitLab issue for the responsible team, and notifies the relevant channel in Rocket.Chat. It helps professionals and compliance teams stay ahead of changing industry requirements and never miss a renewal.

**Pre-conditions/Requirements**

Prerequisites:
- An n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed and activated
- Rocket.Chat workspace with Incoming Webhook or user credentials
- GitLab account with at least one repository and a Personal Access Token (PAT)
- Access URLs for all certification bodies or industry associations you want to monitor

Required credentials:
- **ScrapeGraphAI API Key**: Enables web scraping services
- **Rocket.Chat Credentials**: Either a Webhook URL, or Username & Password / Personal Access Token
- **GitLab Personal Access Token**: To create issues and comments via API

Specific setup requirements:

| Service | Requirement | Example/Notes |
| ------- | ----------- | ------------- |
| Rocket.Chat | Incoming Webhook URL OR user credentials | https://chat.example.com/hooks/abc123… |
| GitLab | Personal Access Token with api scope | Generate at Settings → Access Tokens |
| ScrapeGraphAI | Domain whitelist (if running behind a firewall) | Allow outbound HTTPS traffic to target sites |
| Cron Schedule | Annual (default) or custom interval | 0 0 1 1 * for 1-Jan every year |

**How it works**

Key steps:
1. **Scheduled Trigger**: Fires annually (or at any chosen interval) to start the check.
2. **Set Node (URL List)**: Stores an array of certification-body URLs to scrape.
3. **Split in Batches**: Iterates over each URL for parallel scraping.
4. **ScrapeGraphAI**: Extracts requirement text, effective dates, and renewal info.
5. **Code Node (Diff Checker)**: Compares the newly scraped data with last year's GitLab issue (if any) to detect changes. A simplified sketch of this comparison is shown at the end of this description.
6. **IF Node (Requirements Changed?)**: Routes the flow based on change detection.
7. **GitLab (Create/Update Issue)**: Opens a new issue or comments on an existing one with details of the change.
8. **Rocket.Chat (Notify Channel)**: Sends a message summarizing any changes and linking to the GitLab issue.
9. **Merge Node**: Collects all branch results for a final summary report.

**Set up steps**
Setup time: 15-25 minutes
1. Install Community Node: In n8n, navigate to Settings → Community Nodes and install "ScrapeGraphAI".
2. Add Credentials: a. In Credentials, create "ScrapeGraphAI API". b. Add your Rocket.Chat Webhook or PAT. c. Add your GitLab PAT with api scope.
3. Import Workflow: Copy the JSON template into n8n (Workflows → Import).
4. Configure URL List: Open the Set – URL List node and replace the sample array with real certification URLs.
5. Adjust Cron Expression: Double-click the Schedule Trigger node and set your desired frequency.
6. Customize Rocket.Chat Channel: In the Rocket.Chat – Notify node, set the channel or use an incoming webhook.
7. Run Once for Testing: Execute the workflow manually to ensure issues and notifications are created as expected.
8. Activate Workflow: Toggle Activate so the schedule starts running automatically.

**Node Descriptions**

Core workflow nodes:
- **stickyNote (Workflow Notes)**: Contains a high-level diagram and documentation inside the editor.
- **Schedule Trigger**: Initiates the yearly check.
- **Set (URL List)**: Holds certification body URLs and meta info.
- **SplitInBatches**: Iterates through each URL in manageable chunks.
- **ScrapeGraphAI**: Scrapes each certification page and returns structured JSON.
- **Code (Diff Checker)**: Compares the current scrape with historical data.
- **If (Requirements Changed?)**: Switches path based on the diff result.
- **GitLab**: Creates or updates issues, attaches the JSON diff, sets labels (certification, renewal).
- **Rocket.Chat**: Posts a summary message with links to the GitLab issue(s).
- **Merge**: Consolidates batch results for final logging.
- **Set (Success)**: Formats a concise success payload.

Data flow:
Schedule Trigger → Set (URL List) → SplitInBatches → ScrapeGraphAI → Code (Diff Checker) → If → GitLab / Rocket.Chat → Merge

**Customization Examples**

Add additional metadata to the GitLab issue:

```javascript
// Inside the GitLab "Create Issue" node
{
  "title": `Certification Update: ${$json.domain}`,
  "description": `What's Changed?\n${$json.diff}\n\n_Last checked: {{$now}}_`,
  "labels": "certification,compliance," + $json.industry
}
```

Customize Rocket.Chat message formatting:

```javascript
// Rocket.Chat node → JSON parameters
{
  "text": `:bell: Certification Update Detected\n>${$json.domain}\n>See the GitLab issue: ${$json.issueUrl}`
}
```

**Data Output Format**

The workflow outputs structured JSON data:

```json
{
  "domain": "example-cert-body.org",
  "scrapeDate": "2024-01-01T00:00:00Z",
  "oldRequirements": "Original text …",
  "newRequirements": "Updated text …",
  "diff": "- Continuous education hours increased from 20 to 24\n- Fee changed to $200",
  "issueUrl": "https://gitlab.com/org/compliance/-/issues/42",
  "notification": "sent"
}
```

**Troubleshooting**

Common issues:
- No data returned from ScrapeGraphAI: Confirm the target site is publicly accessible and not blocking bots. Whitelist the domain or add proper headers via ScrapeGraphAI options.
- GitLab issue not created: Check that the PAT has api scope and the project ID is correct in the GitLab node.
- Rocket.Chat message fails: Verify the webhook URL or credentials and ensure the channel exists.

Performance tips:
- Limit the batch size in SplitInBatches to avoid API rate limits.
- Schedule the workflow during off-peak hours to minimize load.

Pro tips:
- Store last-year scrapes in a dedicated GitLab repository to create a complete change log history.
- Use n8n’s built-in Execution History Pruning to keep the database slim.
- Add an Error Trigger workflow to notify you if any step fails.
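The template includes its own Diff Checker Code node; purely as an illustration of the comparison step described above, and assuming each item carries `oldRequirements` and `newRequirements` strings as in the output format shown earlier, a much-simplified version could look like this:

```javascript
// Simplified sketch of a requirements diff check (not the template's exact code).
// Compares old vs. new requirement text line by line and flags changes.
return items.map(item => {
  const oldLines = (item.json.oldRequirements || '').split('\n').map(l => l.trim()).filter(Boolean);
  const newLines = (item.json.newRequirements || '').split('\n').map(l => l.trim()).filter(Boolean);

  const removed = oldLines.filter(l => !newLines.includes(l));
  const added = newLines.filter(l => !oldLines.includes(l));

  const diff = [
    ...removed.map(l => `- ${l}`),
    ...added.map(l => `+ ${l}`),
  ].join('\n');

  return {
    json: {
      ...item.json,
      changed: removed.length > 0 || added.length > 0,
      diff,
    },
  };
});
```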
by MUHAMMAD SHAHEER
**Who’s it for**
This template is designed for creators, researchers, freelance writers, founders, and automation professionals who want a reliable way to generate structured, citation-backed research content without doing manual data collection. Anyone creating blog posts, reports, briefs, or research summaries will benefit from this system.

**What it does**
This workflow turns a simple form submission into a complete research pipeline. It accepts a topic, determines what needs to be researched, gathers information from the web, writes content, fact-checks it against the collected sources, edits the draft for clarity, and compiles a final report. It behaves like a small agentic research team inside n8n.

**How it works**
1. A form collects the research topic, depth, and desired output format.
2. A research agent generates focused search queries.
3. SERP API retrieves real-time results for each query (a sketch of such a request appears at the end of this description).
4. The workflow aggregates and structures all findings.
5. A writing agent creates the first draft based on the data.
6. A fact-checking agent verifies statements against the sources.
7. An editor agent improves tone, flow, and structure.
8. A final review agent produces the completed research document with citations.

This workflow includes annotated sticky notes to explain each step and guide configuration.

**Requirements**
- Groq API key for running the Llama 3.3 model.
- SERP API key for performing web searches.
- An n8n instance (cloud or self-hosted).
- No additional dependencies are required.

**How to set up**
1. Add your Groq and SERP API credentials using n8n’s credential manager.
2. Update the form fields if you want custom depth or output formats.
3. Follow the sticky notes for detailed configuration.
4. Run the workflow and submit a topic through the form to generate your first research report.

**How to customize**
- Replace the writer agent with a different model if you prefer a specific writing style.
- Adjust the number of search queries or SERP results for deeper research.
- Add additional steps such as PDF generation, sending outputs to Notion, or publishing to WordPress.
- Modify the form to suit industry-specific content needs.
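The workflow's own search node is preconfigured; as a hedged sketch of what building those search requests can look like, assuming SerpAPI's Google Search endpoint is the SERP provider and that the agent's queries arrive in a `queries` array (both assumptions, not the template's exact setup):

```javascript
// Illustrative Code node: build one search URL per agent-generated query.
// Endpoint shape follows SerpAPI's public API; adjust if the workflow uses
// a different SERP provider or field names.
const queries = items[0].json.queries || ['example research topic'];

return queries.map(q => ({
  json: {
    searchUrl: `https://serpapi.com/search.json?engine=google&q=${encodeURIComponent(q)}&num=10&api_key=YOUR_SERP_API_KEY`,
  },
}));
```

An HTTP Request node downstream could then fetch each `searchUrl` and pass the organic results to the aggregation step.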
by Mohamed Salama
Let AI agents fetch data from and communicate with your Bubble app automatically. It connects directly with your Bubble Data API. This workflow is designed for teams building AI tools or copilots that need seamless access to Bubble backend data via natural language queries.

**How it works**
- Triggered via a webhook from an AI agent using the MCP (Model Context Protocol).
- The agent selects the appropriate data tool (e.g., projects, users, bookings) based on user intent.
- The workflow queries your Bubble database and returns the result (a hedged sketch of this kind of Data API query appears at the end of this description).

Ideal for integrating with ChatGPT, n8n AI Agents, assistants, or autonomous workflows that need real-time access to app data.

**Set up steps**
1. Enable access to your Bubble data or backend APIs (as needed).
2. Create a Bubble admin token.
3. Add your Bubble node(s) to your n8n workflow.
4. Add your Bubble admin token.
5. Configure your Bubble node(s).
6. Copy the generated webhook URL from the MCP Server Trigger node and register it with your AI tool (e.g., LangChain tool loader).
7. (Optional) Adjust filters in the “Get an Object Details” node to match your dataset needs.

Once connected, your AI agents can automatically retrieve context-aware data from your Bubble app, with no manual lookups required.
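For reference, here is a hedged sketch of the kind of request issued against Bubble's Data API, based on Bubble's documented Data API conventions; the app name, data type ("booking"), field names, and the use of n8n's `this.helpers.httpRequest` helper in a Code node are all assumptions for illustration:

```javascript
// Illustrative only: querying Bubble's Data API directly from an n8n Code node.
const constraints = JSON.stringify([
  { key: 'status', constraint_type: 'equals', value: 'confirmed' },
]);

const url = `https://your-app.bubbleapps.io/api/1.1/obj/booking?constraints=${encodeURIComponent(constraints)}`;

const response = await this.helpers.httpRequest({
  url,
  method: 'GET',
  headers: { Authorization: 'Bearer YOUR_BUBBLE_ADMIN_TOKEN' },
  json: true,
});

// Bubble wraps results in response.response.results
return (response.response?.results || []).map(record => ({ json: record }));
```

In practice the Bubble node in the workflow handles this call for you; the sketch only shows what the underlying request roughly looks like.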
by vinci-king-01
**How it works**
This workflow automatically scrapes commercial real estate listings from LoopNet and sends opportunity alerts to Telegram while logging data to Google Sheets.

Key steps:
1. Scheduled Trigger: Runs every 24 hours to collect fresh CRE market data
2. AI-Powered Scraping: Uses ScrapeGraphAI to extract property information from LoopNet
3. Market Analysis: Analyzes listings for opportunities and generates market insights (a simplified sketch of this kind of filter appears at the end of this description)
4. Smart Notifications: Sends Telegram alerts only when investment opportunities are found
5. Data Logging: Stores daily market metrics in Google Sheets for trend analysis

**Set up steps**
Setup time: 10-15 minutes
1. Configure ScrapeGraphAI credentials: Add your ScrapeGraphAI API key for web scraping
2. Set up Telegram connection: Connect your Telegram bot and specify the target channel
3. Configure Google Sheets: Set up Google Sheets integration for data logging
4. Customize the LoopNet URL: Update the URL to target specific CRE markets or property types
5. Adjust schedule: Modify the trigger timing based on your market monitoring needs
6. Keep detailed configuration notes in sticky notes inside your workflow
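The template's own analysis logic lives inside the workflow; purely as an illustrative sketch of what an opportunity filter could look like, with the listing field names and the price-per-square-foot threshold as assumptions rather than values from the template:

```javascript
// Illustrative opportunity filter for scraped listings (field names assumed).
const MAX_PRICE_PER_SQFT = 250; // example threshold, adjust to your market

const opportunities = items.filter(item => {
  const { price, squareFeet } = item.json;
  if (!price || !squareFeet) return false;
  return price / squareFeet <= MAX_PRICE_PER_SQFT;
});

// Only pass opportunities downstream so Telegram alerts fire when something is found.
return opportunities;
```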
by David Ashby
Complete MCP server exposing 14 doqs.dev | PDF filling API operations to AI agents.

⚡ **Quick Setup**
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add doqs.dev | PDF filling API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 **How it Works**
This workflow converts the doqs.dev | PDF filling API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.doqs.dev/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 **Available Operations (14 total)**

🔧 Designer (7 endpoints)
• GET /designer/templates/: List Templates
• POST /designer/templates/: Create Template
• POST /designer/templates/preview: Preview
• DELETE /designer/templates/{id}: Delete
• GET /designer/templates/{id}: List Templates
• PUT /designer/templates/{id}: Update Template
• POST /designer/templates/{id}/generate: Generate Pdf

🔧 Templates (7 endpoints)
• GET /templates: List
• POST /templates: Create
• DELETE /templates/{id}: Delete
• GET /templates/{id}: Get Template
• PUT /templates/{id}: Update
• GET /templates/{id}/file: Get File
• POST /templates/{id}/fill: Fill

🤖 **AI Integration**
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native doqs.dev | PDF filling API responses with full data structure
Error Handling: Built-in n8n HTTP request error management
An example of how these $fromAI() placeholders look appears at the end of this description.

💡 **Usage Examples**
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ **Benefits**
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
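As an illustration of how a $fromAI() placeholder typically appears inside an HTTP Request node's URL field, using an assumed parameter key and description rather than the template's exact values:

```
// URL field of an HTTP Request node (illustrative):
https://api.doqs.dev/v1/templates/{{ $fromAI('template_id', 'ID of the PDF template to fill', 'string') }}/fill
```

When an AI agent calls this tool through the MCP trigger, n8n resolves the expression with the value the agent supplies for `template_id`, so no manual parameter mapping is needed.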
by n8n Team
This workflow creates or updates a Mautic contact when a new event is scheduled in Calendly. The first name and email address are the only two fields that get updated.

**Prerequisites**
- Calendly account and Calendly credentials.
- Mautic account and Mautic credentials.

**How it works**
- Triggers on a new event in Calendly.
- Creates a new contact in Mautic if the email address does not exist in Mautic.
- Updates the contact's first name in Mautic if the email address exists in Mautic.