by Incrementors
Financial Insight Automation: Market Cap to Telegram via Bright Data

📊 Description

An automated n8n workflow that scrapes financial data from Yahoo Finance using Bright Data, processes market cap information, generates visual charts, and sends comprehensive financial insights directly to Telegram for instant notifications.

🚀 How It Works

This workflow operates through a simple three-zone process:

1. Data Input & Trigger
The user submits a keyword (e.g., "AI", "Crypto", "MSFT") through a form trigger that initiates the financial data collection process.

2. Data Scraping & Processing
The Bright Data API discovers and scrapes comprehensive financial data from Yahoo Finance, including market cap, stock prices, company profiles, and financial metrics.

3. Visualization & Delivery
The system generates interactive market cap charts, saves data to Google Sheets for record-keeping, and sends visual insights to Telegram as PNG images.

⚡ Setup Steps

> ⏱️ Estimated Setup Time: 15-20 minutes

Prerequisites
- Active n8n instance (self-hosted or cloud)
- Bright Data account with Yahoo Finance dataset access
- Google account for Sheets integration
- Telegram bot token and chat ID

Step 1: Import the Workflow
1. Copy the provided JSON workflow code.
2. In n8n: go to Workflows → + Add workflow → Import from JSON.
3. Paste the JSON content and click Import.

Step 2: Configure Bright Data Integration
Set up Bright Data credentials:
1. In n8n: navigate to Credentials → + Add credential → HTTP Header Auth.
2. Add an Authorization header with the value: Bearer BRIGHT_DATA_API_KEY.
3. Replace BRIGHT_DATA_API_KEY with your actual API key.
4. Test the connection to ensure it works properly.

> Note: The workflow uses dataset ID gd_lmrpz3vxmz972ghd7 for Yahoo Finance data. Ensure you have access to this dataset in your Bright Data dashboard.
Step 3: Set up Google Sheets Integration
Create a Google Sheet:
1. Go to Google Sheets and create a new spreadsheet.
2. Name it "Financial Data Tracker" or similar.
3. Copy the Sheet ID from the URL.

Configure Google Sheets credentials:
1. In n8n: Credentials → + Add credential → Google Sheets OAuth2 API.
2. Complete the OAuth setup and test the connection.

Update the workflow:
1. Open the "📊 Filtered Output & Save to Sheet" node.
2. Replace YOUR_SHEET_ID with your actual Sheet ID.
3. Select your Google Sheets credential.

Step 4: Configure Telegram Bot
Set up the Telegram integration:
1. Create a Telegram bot using @BotFather.
2. Get your bot token and chat ID.
3. In n8n: Credentials → + Add credential → Telegram API, then enter your bot token.
4. Update the "📤 Send Chart on Telegram" node with your chat ID (replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID).

Step 5: Test and Activate
Test the workflow:
1. Use the form trigger with a test keyword (e.g., "AAPL").
2. Monitor the execution in n8n.
3. Verify data appears in Google Sheets.
4. Check for chart delivery on Telegram.

Activate the workflow:
1. Turn on the workflow using the toggle switch.
2. The form trigger will be accessible via the provided webhook URL.

📋 Key Features
- 🔍 **Keyword-Based Discovery**: Search companies by keyword, ticker, or industry
- 💰 **Comprehensive Financial Data**: Market cap, stock prices, earnings, and company profiles
- 📊 **Visual Charts**: Automatic generation of market cap comparison charts
- 📱 **Telegram Integration**: Instant delivery of insights to your mobile device
- 💾 **Data Storage**: Automatic backup to Google Sheets for historical tracking
- ⚡ **Real-time Processing**: Fast data retrieval and processing with Bright Data

📊 Output Data Points

| Field | Description | Example |
|-------|-------------|---------|
| Company Name | Full company name | "Apple Inc." |
| Stock Ticker | Trading symbol | "AAPL" |
| Market Cap | Total market capitalization | "$2.89T" |
| Current Price | Latest stock price | "$189.25" |
| Exchange | Stock exchange | "NASDAQ" |
| Sector | Business sector | "Technology" |
| PE Ratio | Price-to-earnings ratio | "28.45" |
| 52 Week Range | Annual high and low prices | "$164.08 - $199.62" |

🔧 Troubleshooting

Common issues:
- **Bright Data connection failed**: Verify your API key is correct and active, check dataset permissions in the Bright Data dashboard, and ensure you have sufficient credits.
- **Google Sheets permission denied**: Re-authenticate the Google Sheets OAuth, verify the sheet's sharing settings, and check that the Sheet ID is correct.
- **Telegram not receiving messages**: Verify the bot token and chat ID, check that the bot is added to the chat, and test the Telegram credentials manually.

Performance tips:
- Use specific keywords for better data accuracy
- Monitor Bright Data usage to control costs
- Set up error handling for failed requests
- Consider rate limiting for high-volume usage

🎯 Use Cases
- **Investment Research**: Quick financial analysis of companies and sectors
- **Market Monitoring**: Track market cap changes and stock performance
- **Competitive Analysis**: Compare financial metrics across companies
- **Portfolio Management**: Monitor holdings and potential investments
- **Financial Reporting**: Generate automated financial insights for teams

🔗 Additional Resources
- n8n Documentation
- Bright Data Datasets
- Google Sheets API
- Telegram Bot API

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
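The market cap values in the table above arrive as human-readable strings. If you post-process them in a Code node (for example, to sort companies before charting), a small helper like the following converts them to numbers. This is an illustrative sketch, not part of the workflow itself, and the suffix set is an assumption:

```javascript
// Convert human-readable market cap strings (e.g. "$2.89T") to
// numeric values so companies can be sorted or charted.
const SUFFIX = { K: 1e3, M: 1e6, B: 1e9, T: 1e12 };

function parseMarketCap(value) {
  // Optional "$", a decimal number, then an optional K/M/B/T suffix.
  const match = /^\$?([\d.]+)\s*([KMBT])?$/i.exec(value.trim());
  if (!match) return null;
  const number = parseFloat(match[1]);
  const factor = match[2] ? SUFFIX[match[2].toUpperCase()] : 1;
  return number * factor;
}

console.log(parseMarketCap('$2.89T'));
console.log(parseMarketCap('$189.25')); // 189.25
```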
by Salman Mehboob
Stop writing blog posts manually. This workflow monitors Google News every 12 hours on any topic you choose, automatically selects the most relevant article, scrapes the source content, generates a 100% original SEO-optimized blog post using Claude AI, creates a featured image with Google Gemini, and publishes the complete post to WordPress with a RankMath meta title and meta description. Fully automated, zero manual work.

Perfect for bloggers, content marketers, digital agencies, WordPress site owners, and anyone who wants to keep their website updated with fresh, AI-written content without spending hours writing or publishing.

What problem does this solve?

Publishing fresh, well-written blog content consistently is one of the hardest parts of running a website. Finding relevant news, writing original articles, creating images, and publishing everything to WordPress takes hours every single time. This workflow eliminates all of that. It runs on a schedule, picks the best news story on your topic, writes a completely original article from scratch, and publishes it to your WordPress site, including the featured image, meta title, and meta description, while you focus on everything else.

How it works

**Scheduled trigger runs every 12 hours automatically**
No manual action required. The workflow runs on its own twice a day.

**Fetch latest news via SerpAPI Google News**
SerpAPI fetches the freshest articles from Google News based on your search query. The default query is `seo`, but you can change it to any topic - tech, finance, health, marketing, AI, real estate, or anything your blog covers. One parameter change is all it takes.

**Check already-published articles via Google Sheets**
A Google Sheets node reads all previously published news links from your tracking sheet. This ensures the same article is never processed twice, even if it keeps appearing in Google News days later.
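As a rough sketch, this duplicate check amounts to the following logic (the `news_link` column name comes from the tracking sheet described later; the article field names are illustrative):

```javascript
// Keep only articles whose link has not already been logged to the
// Google Sheets tracking file on a previous run.
function filterNewArticles(articles, publishedRows) {
  const seen = new Set(publishedRows.map((row) => row.news_link));
  return articles.filter((article) => !seen.has(article.link));
}

const incoming = [
  { title: 'Fresh story', link: 'https://news.example.com/a' },
  { title: 'Old story', link: 'https://news.example.com/b' },
];
const published = [{ news_link: 'https://news.example.com/b' }];
console.log(filterNewArticles(incoming, published)); // only "Fresh story" remains
```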
**Remove duplicates with Merge node**
The Merge node compares incoming articles against your published history and removes any matches. Only genuinely new, unprocessed articles move forward.

**Sort by date and limit to top 10**
Remaining articles are sorted by publish date, newest first. A Limit node keeps only the top 10 most recent items for evaluation.

**AI agent selects the single best article**
All 10 article titles and links are combined into one item and sent to an AI Agent powered by Claude (via OpenRouter). The agent reads every title and picks the one most relevant and valuable for your website's audience, filtering out off-topic, low-quality, or irrelevant results. It returns the winning article's title and link.

**Scrape the full source article**
An HTTP Request node fetches the complete HTML of the selected article. An HTML Extract node pulls only the meaningful content - headings (h1, h2, h3) and paragraphs (p) - stripping out ads, navigation menus, and everything else. An Aggregate node joins all extracted text into one clean block ready for the AI.

**Claude generates a 100% original article**
A second AI Agent (Claude via OpenRouter) receives the scraped content as research material only. It does not rewrite or paraphrase the source. It writes a completely new, original, SEO-optimized article based on the topic and ideas, with its own structure, wording, and insights. The output includes:
- Article title
- Meta title (under 60 characters)
- Meta description (under 160 characters)
- Full article body in Markdown format
- A featured image generation prompt

**Google Gemini generates the featured image**
The image prompt produced by Claude is sent to Google Gemini's image generation model. The image is created in 16:9 ratio, suitable as a WordPress featured image.

**Upload image to WordPress Media Library**
An HTTP Request node uploads the generated image binary directly to your WordPress site via the REST API. WordPress returns an image ID, which is stored for the next step.
**Convert Markdown to HTML**
The article content is in Markdown format. A Markdown node converts it to clean HTML before publishing.

**Arrange all data in one place**
A Set node collects all required fields - title, HTML content, image ID, meta title, meta description, and original news source link - into one organised item.

**Create the WordPress post**
The WordPress node creates the post with title, content, author, and category. You can set the post to draft for review or publish directly. The category ID is configurable to match your site structure.

**Attach featured image and RankMath SEO meta**
An HTTP PUT request updates the post to attach the featured image using the stored image ID, and writes the RankMath SEO title and meta description using registered REST API meta fields.

**Log to Google Sheets**
The original news link and the published WordPress post URL are saved back to your Google Sheets tracking file. This is what prevents the same article from ever being processed again on future runs.

What you need
- **SerpAPI** - for Google News fetching
- **OpenRouter** - to access Claude (anthropic/claude-sonnet-4.5)
- **Google Gemini (PaLM API)** - for featured image generation
- **WordPress** - with Application Password authentication
- **Google Sheets** - with two columns: news_link and wp_post_link
- **RankMath SEO plugin** on WordPress (or adapt for Yoast - see note below)

One-time WordPress setup for RankMath meta fields

Add this PHP snippet via the Code Snippets plugin or your theme's functions.php to enable writing the RankMath SEO title and description through the REST API:

```php
add_action('rest_api_init', function () {
    register_post_meta('post', 'rank_math_title', [
        'show_in_rest'  => true,
        'single'        => true,
        'type'          => 'string',
        'auth_callback' => fn() => current_user_can('edit_posts'),
    ]);
    register_post_meta('post', 'rank_math_description', [
        'show_in_rest'  => true,
        'single'        => true,
        'type'          => 'string',
        'auth_callback' => fn() => current_user_can('edit_posts'),
    ]);
});
```

Using Yoast SEO instead?
Replace the meta keys with `_yoast_wpseo_title` and `_yoast_wpseo_metadesc` in the last HTTP node.

How to customise it for your topic

Change the `q` parameter in the SerpAPI node to any keyword - digital marketing, cryptocurrency, AI tools, content marketing, web design, or anything else. The entire workflow adapts automatically. The AI agent will select the most relevant article for your niche and write accordingly.

For assistance and support: salmanmehboob1947@gmail.com
LinkedIn: https://www.linkedin.com/in/salman-mehboob-pro/
by Le Nguyen
This template implements a recursive web crawler inside n8n. Starting from a given URL, it crawls linked pages up to a maximum depth (default: 3), extracts text and links, and returns the collected content via webhook.

🚀 How It Works

1) Webhook Trigger
Accepts a JSON body with a `url` field. Example payload:

```json
{ "url": "https://example.com" }
```

2) Initialization
Sets crawl parameters: url, domain, maxDepth = 3, and depth = 0. Initializes global static data (pending, visited, queued, pages).

3) Recursive Crawling
- Fetches each page (HTTP Request)
- Extracts body text and links (HTML node)
- Cleans and deduplicates links
- Filters out:
  - External domains (only same-site is followed)
  - Anchors (#), mailto/tel/javascript links
  - Non-HTML files (.pdf, .docx, .xlsx, .pptx)

4) Depth Control & Queue
- Tracks visited URLs
- Stops at maxDepth to prevent infinite loops
- Uses SplitInBatches to loop the queue

5) Data Collection
- Saves each crawled page (url, depth, content) into pages[]
- When pending = 0, combines results

6) Output
Responds via the Webhook node with:
- combinedContent (all pages concatenated)
- pages[] (array of individual results)

Large results are chunked when exceeding ~12,000 characters.

🛠️ Setup Instructions

1) Import Template
Load from n8n Community Templates.

2) Configure Webhook
- Open the Webhook node
- Copy the Test URL (development) or Production URL (after deploy)
- You'll POST crawl requests to this endpoint

3) Run a Test
Send a POST with JSON:

```shell
curl -X POST https://<your-n8n>/webhook/<id> \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```

4) View Response
The crawler returns a JSON object containing combinedContent and pages[].

⚙️ Configuration
- **maxDepth**: Default: 3. Adjust in the Init Crawl Params (Set) node.
- **Timeouts**: The HTTP Request node timeout is 5 seconds per request; increase if needed.
- **Filtering Rules**:
  - Only same-domain links are followed (apex and www treated as same-site)
  - Skips anchors, mailto:, tel:, javascript:
  - Skips document links (.pdf, .docx, .xlsx, .pptx)
  - You can tweak the regex and logic in the Queue & Dedup Links (Code) node

📌 Limitations
- No JavaScript rendering (static HTML only)
- No authentication/cookies/session handling
- Large sites can be slow or hit timeouts; chunking mitigates response size

✅ Example Use Cases
- Extract text across your site for AI ingestion / embeddings
- SEO/content audit and internal link checks
- Build a lightweight page corpus for downstream processing in n8n

⏱️ Estimated Setup Time
~10 minutes (import → set webhook → test request)
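If you want to tweak the filtering rules, the logic in the Queue & Dedup Links Code node boils down to a check like this. This is a simplified sketch of the rules described above, not the template's exact code:

```javascript
// Decide whether a discovered link should be queued, mirroring the
// filtering rules: same-site only (apex and www treated alike),
// skip anchors/mailto/tel/javascript, skip common document files.
function shouldQueue(link, baseDomain) {
  if (/^(mailto:|tel:|javascript:|#)/i.test(link)) return false;
  if (/\.(pdf|docx|xlsx|pptx)(\?|#|$)/i.test(link)) return false;
  let host;
  try {
    // Resolve relative links against the crawl domain.
    host = new URL(link, `https://${baseDomain}`).hostname;
  } catch {
    return false; // malformed URL
  }
  const strip = (h) => h.replace(/^www\./i, ''); // apex == www
  return strip(host) === strip(baseDomain);
}

console.log(shouldQueue('https://www.example.com/about', 'example.com')); // true
console.log(shouldQueue('https://other.com/page', 'example.com'));        // false
console.log(shouldQueue('mailto:hi@example.com', 'example.com'));         // false
```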
by Vince V
This workflow automatically generates and delivers professional invoice PDFs whenever a Stripe checkout session completes. It fetches the line items from Stripe, formats them into a clean invoice with your company details, generates a branded PDF via TemplateFox, emails it to the customer, and saves a copy to Google Drive.

Problem Solved

Without this automation, invoicing after a Stripe payment requires:
- Monitoring your Stripe dashboard for completed checkouts
- Manually creating an invoice with the correct line items and totals
- Exporting as PDF and emailing it to the customer
- Saving the invoice to your file storage for bookkeeping
- Repeating this for every single payment

This workflow handles all of that automatically for every Stripe checkout, including proper invoice numbering, due dates, and tax calculations.

Who Can Benefit
- **SaaS companies** billing customers through Stripe Checkout
- **E-commerce stores** sending invoices after purchase
- **Service providers** using Stripe for client payments
- **Freelancers** who want automatic invoicing after payment
- **Accountants** who need invoice PDFs archived in Google Drive

Prerequisites
- TemplateFox account with an API key (free tier available)
- Stripe account with API access
- Gmail account with OAuth2 configured
- Google Drive account with OAuth2 configured
- Install the TemplateFox community node from Settings → Community Nodes

Setting Up Your Template

You need a TemplateFox invoice template for this workflow. You can:
- Start from an example - browse invoice templates, pick one you like, and customize it in the visual editor to match your branding
- Create from scratch - design your own invoice template in the TemplateFox editor

Once your template is ready, select it from the dropdown in the TemplateFox node - no need to copy template IDs manually.

Workflow Details

Step 1: Stripe Trigger
Fires on every completed checkout session (checkout.session.completed). This captures successful payments with full customer and product details.
Step 2: Get Line Items
An HTTP Request node calls the Stripe API to fetch the line items for the checkout session (product names, quantities, amounts). Stripe doesn't include line items in the webhook payload, so this separate call is required.

Step 3: Format Invoice Data
A Code node combines the Stripe session data and line items into a clean invoice structure: company details, client info (from the Stripe customer), line items with prices, subtotal, tax, total, invoice number (auto-generated from date + session ID), and due date (Net 30).

Step 4: TemplateFox - Generate Invoice
Select your invoice template from the dropdown - the node automatically loads your template's fields. Map each field to the matching output from the Code node (e.g. client_company → {{ $json.client_company }}). TemplateFox generates a professional invoice PDF using your custom template.

Step 5a: Email Invoice
Sends the invoice PDF link to the customer via Gmail with the invoice number, amount, and due date.

Step 5b: Save to Google Drive
Downloads the PDF and uploads it to a Google Drive folder for bookkeeping. Runs in parallel with the email step.

Customization Guidance
- **Company details**: Set your company name, address, logo, bank details, and VAT number directly in the template editor - they never change between invoices, so there's no reason to pass them from n8n.
- **Invoice numbering**: Modify the invoiceNumber format in the Code node (default: INV-YYYY-MMDD-XXXXXX).
- **Payment terms**: Change the due date calculation (default: Net 30).
- **Drive folder**: Set your Google Drive folder ID in the "Save to Google Drive" node.
- **Template**: Use any invoice template from your TemplateFox account - select it from the dropdown.
- **Email body**: Customize the invoice email text in the "Email Invoice" node.

Note: This template uses the TemplateFox community node. Install it from Settings → Community Nodes.
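To illustrate the invoice-numbering and Net 30 pieces of the Format Invoice Data step, here is a sketch of how such a Code node might derive them. The template's actual code may differ; the session-ID suffix handling here is an assumption:

```javascript
// Build an invoice number from today's date plus the tail of the
// Stripe checkout session ID (format: INV-YYYY-MMDD-XXXXXX), and a
// Net 30 due date. Illustrative sketch only.
function buildInvoiceMeta(sessionId, now = new Date()) {
  const yyyy = now.getUTCFullYear();
  const mm = String(now.getUTCMonth() + 1).padStart(2, '0');
  const dd = String(now.getUTCDate()).padStart(2, '0');
  // Drop the "cs_test_"/"cs_live_" prefix and keep the last 6 chars.
  const suffix = sessionId.replace(/^cs_(test_|live_)?/, '').slice(-6).toUpperCase();
  const due = new Date(now.getTime() + 30 * 24 * 60 * 60 * 1000); // Net 30
  return {
    invoiceNumber: `INV-${yyyy}-${mm}${dd}-${suffix}`,
    dueDate: due.toISOString().slice(0, 10),
  };
}

const meta = buildInvoiceMeta('cs_test_a1B2c3D4e5F6', new Date('2024-05-01T00:00:00Z'));
console.log(meta.invoiceNumber); // INV-2024-0501-D4E5F6
console.log(meta.dueDate);       // 2024-05-31
```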
by David Ashby
Complete MCP server exposing 14 doqs.dev | PDF filling API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add doqs.dev | PDF filling API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the doqs.dev | PDF filling API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.doqs.dev/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (14 total)

🔧 Designer (7 endpoints)
• GET /designer/templates/: List Templates
• POST /designer/templates/: Create Template
• POST /designer/templates/preview: Preview
• DELETE /designer/templates/{id}: Delete
• GET /designer/templates/{id}: Get Template
• PUT /designer/templates/{id}: Update Template
• POST /designer/templates/{id}/generate: Generate PDF

🔧 Templates (7 endpoints)
• GET /templates: List
• POST /templates: Create
• DELETE /templates/{id}: Delete
• GET /templates/{id}: Get Template
• PUT /templates/{id}: Update
• GET /templates/{id}/file: Get File
• POST /templates/{id}/fill: Fill

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native doqs.dev | PDF filling API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
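For Claude Desktop specifically, one common way to register a remote MCP server is via its `claude_desktop_config.json`, bridging the remote URL with the `mcp-remote` package. The server name and URL below are placeholders, and this wiring is a general MCP pattern rather than something prescribed by this template:

```json
{
  "mcpServers": {
    "doqs-pdf": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance/mcp/your-webhook-path"]
    }
  }
}
```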
by Corentin Ribeyre
This template provides real-time listening and processing of search results with Icypeas. Be sure to have an active account to use this template.

How it works

This workflow can be divided into two steps:
1. A Webhook node to link your Icypeas account with your n8n workflow.
2. A Set node to retrieve the relevant information.

Set up steps

You will need a working Icypeas account to run the workflow, and you will have to paste the production URL provided by the n8n Webhook node.
by Pixril
Overview

This workflow deploys a fully autonomous "Viral News Agency" inside your n8n instance. Unlike simple auto-posters, this is a comprehensive content production pipeline. It acts as a 24/7 news monitor that scrapes viral stories, rewrites them into educational scripts using GPT-4o, designs professional 10-slide carousels, and publishes them directly to Instagram Business, completely on autopilot.

Key Features
- **Dual-Engine Architecture**: The unique "Hybrid Core" lets you choose between **Free (Gotenberg/Docker)** or **Paid (APITemplate)** image generation. Switch engines instantly via the Setup Form.
- **Smart RSS Scraping**: Cleans incoming feeds and extracts high-quality "OG" (Open Graph) images to use as dynamic backgrounds.
- **Viral Content Writer**: Uses a specialized AI Agent prompt to write "Hot Takes" and educational hooks, ensuring content is engaging, not just a summary.
- **Auto-Publisher**: Handles the complex Meta API flow (Container > Media Bundle > Publish) to upload multi-slide carousels automatically.

How it works
1. Monitor: The News Source node watches your chosen RSS feeds (Tech, Sports, Politics, etc.) for breaking stories.
2. Analyze: The AI Analyst (GPT-4o) reads the article, extracts the viral angle, and writes a full 10-slide script with captions and hashtags.
3. Design: The workflow routes data to your chosen engine. It loops through the script 10 times to generate individual slides (Title, Content, Quotes).
4. Publish: The agent uploads the images to Facebook's servers, bundles them into a Carousel Container, and publishes it live to your Instagram feed.

Set up steps

Estimated time: 10 minutes
- **Credentials**: Add your keys for OpenAI (Intelligence), Google Drive (Storage), and Facebook Graph API (Publishing).
- **Instagram ID**: Open the 3 Facebook nodes ("Create Container", "Carousel Bundle", "Publish Carousel") and replace the placeholder ID with your Instagram Business User ID.
- **Image Engine**:
  - Option A (Free): Ensure you have a local Gotenberg instance running via Docker (`docker run --rm -p 3000:3000 gotenberg/gotenberg:8`).
  - Option B (Paid): In the "Generate Image" node, add your APITemplate API Key and Template ID.
- **Run**: Use the "SETUP FORM" node to enter your RSS URL and Brand Name, then toggle to "Active"!

About the Creator

Built by Pixril. We specialize in building advanced, production-ready AI agents for n8n.
Visit our website: https://www.pixril.com/
Find more professional workflows in our shop: https://pixril.etsy.com
by Le Nguyen
PDF Invoice Extractor (AI)

End-to-end pipeline: Watch Drive ➜ Download PDF ➜ OCR text ➜ AI normalize to JSON ➜ Upsert Buyer (Account) ➜ Create Opportunity ➜ Map Products ➜ Create OLI via Composite API ➜ Archive to OneDrive.

Node by node (what it does & key setup)

1) Google Drive Trigger
- **Purpose**: Fire when a new file appears in a specific Google Drive folder.
- **Key settings**:
  - Event: fileCreated
  - Folder ID: your Google Drive folder ID
  - Polling: everyMinute
  - Creds: googleDriveOAuth2Api
- **Output**: Metadata { id, name, ... } for the new file.

2) Download File From Google
- **Purpose**: Get the file binary for processing and archiving.
- **Key settings**:
  - Operation: download
  - File ID: ={{ $json.id }}
  - Creds: googleDriveOAuth2Api
- **Output**: Binary (default key: data) and original metadata.

3) Extract from File
- **Purpose**: Extract text from the PDF (OCR as needed) for AI parsing.
- **Key settings**:
  - Operation: pdf
  - OCR: enable for scanned PDFs (in options)
- **Output**: JSON with OCR text at {{ $json.text }}.

4) Message a model (AI JSON Extractor)
- **Purpose**: Convert OCR text into a strict normalized JSON array (invoice schema).
- **Key settings**:
  - Node: @n8n/n8n-nodes-langchain.openAi
  - Model: gpt-4.1 (or gpt-4.1-mini)
  - Message role: system (the strict prompt; references {{ $json.text }})
  - jsonOutput: true
  - Creds: openAiApi
- **Output** (per item): $.message.content → the parsed JSON (ensure it's an array).

5) Create or update an account (Salesforce)
- **Purpose**: Upsert the Buyer as an Account using an external ID.
- **Key settings**:
  - Resource: account
  - Operation: upsert
  - External Id Field: tax_id__c
  - External Id Value: ={{ $json.message.content.buyer.tax_id }}
  - Name: ={{ $json.message.content.buyer.name }}
  - Creds: salesforceOAuth2Api
- **Output**: Account record (captures Id) for the downstream Opportunity.

6) Create an opportunity (Salesforce)
- **Purpose**: Create an Opportunity linked to the Buyer (Account).
- **Key settings**:
  - Resource: opportunity
  - Name: ={{ $('Message a model').item.json.message.content.invoice.code }}
  - Close Date: ={{ $('Message a model').item.json.message.content.invoice.issue_date }}
  - Stage: Closed Won
  - Amount: ={{ $('Message a model').item.json.message.content.summary.grand_total }}
  - AccountId: ={{ $json.id }} (from the Upsert Account output)
  - Creds: salesforceOAuth2Api
- **Output**: Opportunity Id for OLI creation.

7) Build SOQL (Code / JS)
- **Purpose**: Collect unique product codes from the AI JSON and build a SOQL query for PricebookEntry by Pricebook2Id.
- **Key settings**:
  - pricebook2Id (hardcoded in the script): e.g., 01sxxxxxxxxxxxxxxx
  - Source lines: $('Message a model').first().json.message.content.products
- **Output**: { soql, codes }

8) Query PricebookEntries (Salesforce)
- **Purpose**: Fetch PricebookEntry.Id for each Product2.ProductCode.
- **Key settings**:
  - Resource: search
  - Query: ={{ $json.soql }}
  - Creds: salesforceOAuth2Api
- **Output**: Items with Id and Product2.ProductCode (used for mapping).

9) Code in JavaScript (Build OLI payloads)
- **Purpose**: Join lines with the PBE results and Opportunity Id ➜ build OpportunityLineItem payloads.
- **Inputs**:
  - OpportunityId: ={{ $('Create an opportunity').first().json.id }}
  - Lines: ={{ $('Message a model').first().json.message.content.products }}
  - PBE rows: from the previous node's items
- **Output**: { body: { allOrNone: false, records: [{ OpportunityLineItem... }] } }
- **Notes**:
  - Converts discount_total ➜ per-unit if needed (currently commented out for standard pricing).
  - Throws on a missing PBE mapping or empty lines.

10) Create Opportunity Line Items (HTTP Request)
- **Purpose**: Bulk create OLIs via the Salesforce Composite API.
- **Key settings**:
  - Method: POST
  - URL: https://<your-instance>.my.salesforce.com/services/data/v65.0/composite/sobjects
  - Auth: salesforceOAuth2Api (predefined credential)
  - Body (JSON): ={{ $json.body }}
- **Output**: Composite API results (per-record statuses).

11) Update File to OneDrive
- **Purpose**: Archive the original PDF in OneDrive.
- **Key settings**:
  - Operation: upload
  - File Name: ={{ $json.name }}
  - Parent Folder ID: your OneDrive folder ID
  - Binary Data: true (from the Download node)
  - Creds: microsoftOneDriveOAuth2Api
- **Output**: Uploaded file metadata.

Data flow (wiring)
1. Google Drive Trigger → Download File From Google
2. Download File From Google → Extract from File → Update File to OneDrive
3. Extract from File → Message a model
4. Message a model → Create or update an account
5. Create or update an account → Create an opportunity
6. Create an opportunity → Build SOQL
7. Build SOQL → Query PricebookEntries
8. Query PricebookEntries → Code in JavaScript
9. Code in JavaScript → Create Opportunity Line Items

Quick setup checklist
- 🔐 Credentials: Connect Google Drive, OneDrive, Salesforce, OpenAI.
- 📂 IDs:
  - Drive Folder ID (watch)
  - OneDrive Parent Folder ID (archive)
  - Salesforce Pricebook2Id (in the JS SOQL builder)
- 🧠 AI Prompt: Use the strict system prompt; jsonOutput = true.
- 🧾 Field mappings:
  - Buyer tax ID/name → Account upsert fields
  - Invoice code/date/amount → Opportunity fields
  - Product name must equal your Product2.ProductCode in SF.
- ✅ Test: Drop a sample PDF and verify:
  - AI returns array JSON only
  - Account/Opportunity created
  - OLI records created
  - PDF archived to OneDrive

Notes & best practices
- If PDFs are scans, enable OCR in Extract from File.
- If the AI returns non-JSON, keep "Return only a JSON array" as the last line of the prompt and keep jsonOutput enabled.
- Consider adding validation on parsing.warnings to gate Salesforce writes.
- For discounts/taxes in OLI: standard OLI fields don't support per-line discount amounts directly; model them in UnitPrice or custom fields.
- Replace the Composite API URL with your org's domain, or use the Salesforce node's Bulk Upsert for simplicity.
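For reference, the SOQL-builder step (node 7) can be sketched roughly as below. Per the field-mapping note above, product `name` is assumed to equal `Product2.ProductCode`; the template's actual Code node may differ in detail:

```javascript
// Collect unique product codes from the AI output and build one
// PricebookEntry query scoped to a hardcoded Pricebook2Id.
function buildSoql(products, pricebook2Id) {
  const codes = [...new Set(products.map((p) => p.name))];
  // Quote each code (escaping single quotes) for the IN (...) clause.
  const quoted = codes.map((c) => `'${c.replace(/'/g, "\\'")}'`).join(', ');
  const soql =
    `SELECT Id, Product2.ProductCode FROM PricebookEntry ` +
    `WHERE Pricebook2Id = '${pricebook2Id}' ` +
    `AND Product2.ProductCode IN (${quoted})`;
  return { soql, codes };
}

const { soql, codes } = buildSoql(
  [{ name: 'SKU-001' }, { name: 'SKU-002' }, { name: 'SKU-001' }],
  '01sxxxxxxxxxxxxxxx'
);
console.log(codes); // duplicates removed: ['SKU-001', 'SKU-002']
```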
by Alexandra Spalato
Who's it for

This workflow is for community builders, marketers, consultants, coaches, and thought leaders who want to grow their presence in Skool communities through strategic, value-driven engagement. It's especially useful for professionals who want to:
- Build authority in their niche by providing helpful insights
- Scale their community engagement without spending hours manually browsing posts
- Identify high-value conversation opportunities that align with their expertise
- Maintain an authentic, helpful presence across multiple Skool communities

What problem is this workflow solving

Many professionals struggle to consistently engage meaningfully in online communities due to:
- **Time constraints**: Manually browsing multiple communities daily is time-consuming
- **Missed opportunities**: Important discussions happen when you're not online
- **Inconsistent engagement**: Sporadic participation reduces visibility and relationship building
- **Generic responses**: Quick replies often lack the depth needed to showcase expertise

This workflow solves these problems by automatically monitoring your target Skool communities, using AI to identify posts where your expertise could add genuine value, generating thoughtful contextual comment suggestions, and organizing opportunities for efficient manual review and engagement.

How it works

**Scheduled Community Monitoring**
Runs daily at 7 PM to scan your configured Skool communities for new posts and discussions from the last 24 hours.
**Intelligent Configuration Management**
- Pulls settings from Airtable, including target communities, your domain expertise, and preferred tools
- Supports several configurations at once
- Filters for active configurations only
- Processes multiple community URLs efficiently

**Comprehensive Data Extraction**
Uses the Apify Skool Scraper to collect:
- Post content and metadata
- Comments over 50 characters (quality filter)
- Direct links for easy access

**AI-Powered Opportunity Analysis**
Leverages OpenAI GPT-4.1 to:
- Analyze each post for engagement opportunities based on your expertise
- Identify specific trigger sentences that indicate a need you can address
- Generate contextual, helpful comment suggestions
- Maintain an authentic tone without being promotional

**Smart Filtering and Organization**
- Only surfaces genuine opportunities where you can add value
- Stores results in Airtable with detailed reasoning
- Provides suggested comments ready for review and posting
- Tracks engagement history to avoid duplicate responses

**Quality Control and Review**
All opportunities are saved to Airtable, where you can:
- Review the AI reasoning and suggested responses
- Edit comments before posting
- Track which opportunities you've acted on
- Monitor success patterns over time

How to set up

Required credentials
- **OpenAI API key** - for GPT-4.1-powered opportunity analysis
- **Airtable Personal Access Token** - for configuration and results storage
- **Apify API token** - for Skool community scraping

Airtable base setup

Create an Airtable base with two tables:

Config Table (config):
- Name (Single line text): Your configuration name
- Skool URLs (Long text): Comma-separated list of Skool community URLs
- cookies (Long text): Your Skool session cookies for authenticated access
- Domain of Activity (Single line text): Your area of expertise (e.g., "AI automation", "Digital marketing")
- Tools Used (Single line text): Your preferred tools to recommend (e.g., "n8n", "Zapier")
- active (Checkbox): Whether this configuration is currently active

Results Table (Table 1):
- title (Single line text): Post title/author
- url (URL): Direct link to the post
- reason (Long text): AI's reasoning for the opportunity
- trigger (Long text): Specific sentence that triggered the opportunity
- suggested answer (Long text): AI-generated comment suggestion
- config (Link to another record): Reference to the config used
- date (Date): When the opportunity was found
- Select (Single select): Status tracking (not commented/commented)

Skool cookies setup

To access private Skool communities, you'll need to:
1. Install Cookie Editor: Go to the Chrome Web Store and install the "Cookie Editor" extension.
2. Log in to Skool: Navigate to any Skool community you want to monitor and log in.
3. Open Cookie Editor: Click the Cookie Editor extension icon in your browser toolbar.
4. Export cookies: Click the "Export" button in the extension and copy the exported text.
5. Add to Airtable: Paste the cookie string into the cookies field in your Airtable config.

Trigger configuration
- Ensure the Schedule Trigger is set to your preferred monitoring time.
- Default is 7 PM daily, but adjust based on your target communities' peak activity.

Requirements
- **Self-hosted n8n or n8n Cloud account**
- **Active Skool community memberships** - you must be a legitimate member of the communities you want to monitor
- **OpenAI API credits**
- **Apify subscription** - for reliable Skool data scraping (free tier available)
- **Airtable account** - free tier sufficient for most use cases

How to customize the workflow

Modify AI analysis criteria
Edit the EvaluateOpportunities And Generate Comments node to:
- Adjust the opportunity detection sensitivity
- Modify the comment tone and style
- Add industry-specific keywords or phrases

Change monitoring frequency
Update the Schedule Trigger to:
- Multiple times per day for highly active communities
- Weekly for slower-moving professional groups
- Custom intervals based on community activity patterns

Customize data collection
Modify the Apify scraper settings to:
- Adjust the time window (currently 24 hours)
- Change comment length filters (currently >50 characters)
- Include/exclude media content
- Modify the number of comments per post

Add additional filters
Insert filter nodes to:
- Skip posts from specific users
- Focus on posts with minimum engagement levels
- Exclude certain post types or keywords
- Prioritize posts from influential community members

Enhance output options
Add nodes after Record Results to:
- Send Slack/Discord notifications for high-priority opportunities
- Create calendar events for engagement tasks
- Export daily summaries to Google Sheets
- Integrate with CRM systems for lead tracking

Example outputs

Opportunity analysis result:

```json
{
  "opportunity": true,
  "reason": "The user is struggling with manual social media management tasks that could be automated using n8n workflows.",
  "trigger_sentence": "I'm spending 3+ hours daily just scheduling posts and responding to comments across all my social accounts.",
  "suggested_comment": "That sounds exhausting! Have you considered setting up automation workflows? Tools like n8n can handle the scheduling and even help with response suggestions, potentially saving you 80% of that time. The initial setup takes a day but pays dividends long-term."
}
```

Airtable record example:
- Title: "Sarah Johnson - Social Media Burnout"
- URL: https://www.skool.com/community/post/123456
- Reason: "User expressing pain point with manual social media management - perfect fit for automation solutions"
- Trigger: "I'm spending 3+ hours daily just scheduling posts..."
- Suggested Answer: "That sounds exhausting! Have you considered setting up automation workflows?..."
Config: [Your Config Name] Date: 2024-12-09 19:00:00 Status: "not commented" Best practices Authentic engagement Always review and personalize AI suggestions before posting Focus on being genuinely helpful rather than promotional Share experiences and ask follow-up questions Engage in subsequent conversation when people respond Community guidelines Respect each community's rules and culture Avoid over-promotion of your tools or services Build relationships before introducing solutions Contribute value consistently, not just when selling Optimization tips Monitor which types of opportunities convert best A/B test different comment styles and approaches Track engagement metrics on your actual comments Adjust AI prompts based on community feedback
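The >50-character comment quality filter described above can be sketched as a small Code-node-style function. This is a sketch only: the `comments` and `text` field names are assumptions, so check the actual Apify Skool Scraper output keys before using it.

```javascript
// Sketch of the >50-character comment quality filter.
// NOTE: "comments" and "text" are assumed field names, not
// confirmed Apify Skool Scraper output keys.
function filterQualityComments(posts, minLength = 50) {
  return posts.map((post) => ({
    ...post,
    comments: (post.comments || []).filter(
      (c) => typeof c.text === "string" && c.text.trim().length > minLength
    ),
  }));
}

// Example: short comments are dropped, substantive ones are kept.
const sample = [{
  title: "Automating social posts",
  comments: [
    { text: "Nice!" },
    { text: "I'm spending 3+ hours daily just scheduling posts and responding to comments." },
  ],
}];
console.log(filterQualityComments(sample)[0].comments.length); // 1
```

The same threshold is exposed as a parameter so you can tighten or relax the quality filter per community.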
by 飯盛 正幹
**Description**
This workflow automates the process of finding new content ideas by scraping trending news and social media posts, analyzing them with AI, and delivering a summarized report to Slack. It is perfect for content marketers, social media managers, and strategists who spend hours researching trending topics manually.

**Who is this for**
- Content Marketers: to discover trending topics for blogs or newsletters
- Social Media Managers: to keep up with competitor activity or industry news
- Market Researchers: to monitor specific keywords or brands

**How it works**
1. Schedule: the workflow runs automatically on a weekly schedule (default is Monday morning).
2. Data Collection: it uses Apify to scrape the latest news from Google Search and recent posts from specific Facebook pages.
3. Data Processing: the results are merged, and the top 5 most relevant items are selected to prevent information overload.
4. AI Analysis: an AI Agent (powered by OpenRouter/LLM) analyzes each article to classify it into a theme (e.g., Marketing, Technology, Strategy) and extracts 3 catchy keywords.
5. Notification: the analyzed insights, including the theme, keywords, summary, and original URL, are formatted and sent directly to Slack.

**Requirements**
- Apify Account: you need an API token and access to the Google Search Results Scraper and Facebook Posts Scraper actors
- OpenRouter API Key: used to power the AI analysis (can be swapped for OpenAI/Anthropic if preferred)
- Slack Account: to receive the notifications

**How to set up**
1. Configure Credentials: open the Workflow Configuration node and paste your Apify API Token and OpenRouter API Key, then connect your Slack account in the Slack node.
2. Adjust Apify Settings: in the Apify Google news node, change the search query (currently set to "Top News" in Japanese) to your desired topic; in the Apify Facebook node, update the startUrls to the Facebook pages you want to monitor.
3. Customize AI Prompt (optional): open the AI Agent node to adjust the language or the specific themes you want the AI to classify.

**How to customize**
- Change the LLM: replace the OpenRouter model with the OpenAI or Anthropic Chat Model node if you prefer those providers.
- Increase Data Volume: adjust the "Limit 5 items" Code node to process more articles at once (mind your API usage limits).
- Change Destination: replace the Slack node with Notion, Google Sheets, or Email to save the ideas elsewhere.
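The merge-and-limit step handled by the "Limit 5 items" Code node can be approximated as follows. This is a sketch under assumptions, not the node's exact code; the input item shapes are illustrative.

```javascript
// Sketch of the data-processing step: merge the Google and Facebook
// scrape results, then keep only the first few items to prevent
// information overload downstream. Item shapes are assumptions.
function selectTopItems(googleItems, facebookItems, max = 5) {
  return [...googleItems, ...facebookItems].slice(0, max);
}

// Example with 4 Google results and 3 Facebook posts: only 5 pass.
const google = [1, 2, 3, 4].map((i) => ({ source: "google", title: `News ${i}` }));
const facebook = [1, 2, 3].map((i) => ({ source: "facebook", title: `Post ${i}` }));
console.log(selectTopItems(google, facebook).length); // 5
```

Raising `max` is the "Increase Data Volume" customization mentioned above; just keep API usage limits in mind.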
by Onur
**Amazon Product Scraper with Scrape.do & AI Enrichment**

> This workflow is a fully automated Amazon product data extraction engine. It reads product URLs from a Google Sheet, uses Scrape.do to reliably fetch each product page's HTML without getting blocked, and then applies an AI-powered extraction process to capture key product details such as name, price, rating, review count, and description. All structured results are neatly stored back into a Google Sheet for easy access and analysis.

This template is designed for consistency and scalability, making it ideal for marketers, analysts, and e-commerce professionals who need clean product data at scale.

**🚀 What does this workflow do?**
- **Reads Input URLs:** pulls a list of Amazon product URLs from a Google Sheet.
- **Scrapes HTML Reliably:** uses **Scrape.do** to bypass Amazon's anti-bot measures, ensuring the page HTML is always retrieved successfully.
- **Cleans & Pre-processes HTML:** strips scripts, styles, and unnecessary markup, isolating only relevant sections like title, price, ratings, and feature bullets.
- **AI-Powered Data Extraction:** a LangChain/OpenRouter GPT-4 node verifies and enriches key fields: product name, price, rating, reviews, and description.
- **Stores Structured Results:** appends all extracted and verified product data to a results tab in Google Sheets.
- **Batch & Loop Control:** handles multiple URLs efficiently with Split In Batches to process as many products as you need.

**🎯 Who is this for?**
- **E-commerce Sellers & Dropshippers:** track competitor prices, ratings, and key product features automatically.
- **Marketing & SEO Teams:** collect product descriptions and reviews to optimize campaigns and content.
- **Analysts & Data Teams:** build accurate product databases without manual copy-paste work.

**✨ Benefits**
- **High Success Rate:** **Scrape.do** handles proxy rotation and CAPTCHA challenges automatically, outperforming traditional scrapers.
- **AI Validation:** LLM verification ensures data accuracy and fills in gaps when HTML elements vary.
- **Full Automation:** runs on demand or on a schedule to keep product datasets fresh.
- **Clean Output:** results are neatly organized in Google Sheets, ready for reporting or integration with other tools.

**⚙️ How it Works**
1. Manual or Scheduled Trigger: start the workflow manually or via a cron schedule.
2. Input Source: fetch URLs from a Google Sheet (TRACK_SHEET_GID).
3. Scrape with Scrape.do: retrieve the full HTML from each Amazon product page using your SCRAPEDO_TOKEN.
4. Clean & Pre-Extract: strip irrelevant code and use regex to pre-extract key fields.
5. AI Extraction & Verification: a LangChain GPT-4 model refines and validates product name, description, price, rating, and reviews.
6. Save Results: append enriched product data to the results sheet (RESULTS_SHEET_GID).

**📋 n8n Nodes Used**
- Manual Trigger / Schedule Trigger
- Google Sheets (read & append)
- Split In Batches
- HTTP Request (Scrape.do)
- Code (clean & pre-extract HTML)
- LangChain LLM (OpenRouter GPT-4)
- Structured Output Parser

**🔑 Prerequisites**
- Active n8n instance
- **Scrape.do API token** (bypasses Amazon anti-bot measures)
- **Google Sheets** with two tabs: TRACK_SHEET_GID (product URLs) and RESULTS_SHEET_GID (results)
- **Google Sheets OAuth2 credentials** shared with your service account
- **OpenRouter / OpenAI API credentials** for the GPT-4 model

**🛠️ Setup**
1. Import the workflow into your n8n instance.
2. Set the workflow variables: SCRAPEDO_TOKEN (your Scrape.do API key), WEB_SHEET_ID (Google Sheet ID), TRACK_SHEET_GID (sheet/tab name for input URLs), and RESULTS_SHEET_GID (sheet/tab name for results).
3. Configure credentials for Google Sheets and OpenRouter.
4. Map columns in the "add results" node to match your Google Sheet (e.g., name, price, rating, reviews, description).
5. Run or schedule: start manually or configure a schedule for continuous data extraction.

This Amazon Product Scraper delivers fast, reliable, AI-enriched product data, keeping your e-commerce analytics, pricing strategies, and market research accurate and fully automated.
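The "clean & pre-extract" Code node step can be sketched along these lines. The regexes below are assumptions for illustration (including the `productTitle` selector) and will need tuning against real Amazon markup; the AI node then verifies and fills gaps.

```javascript
// Sketch of the clean & pre-extract step: strip <script>/<style>
// blocks, then regex out candidate fields before AI verification.
// The selectors are assumptions and need tuning on real pages.
function preExtract(html) {
  const cleaned = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "");
  const title = cleaned.match(/<span[^>]*id="productTitle"[^>]*>([\s\S]*?)<\/span>/i);
  const price = cleaned.match(/\$\s?(\d[\d,]*\.?\d*)/);
  return {
    name: title ? title[1].trim() : null,
    price: price ? price[1] : null,
  };
}

// Example against a tiny synthetic snippet:
const snippet =
  '<script>track()</script><span id="productTitle"> Example Widget </span><span class="a-price">$19.99</span>';
console.log(preExtract(snippet)); // name: "Example Widget", price: "19.99"
```

Returning `null` for missing fields lets the downstream LLM node know which values it still needs to extract or validate.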
by Kev
⚠️ Important: This workflow uses the Autype community node and requires a self-hosted n8n instance.

This workflow reads overdue invoices from a NocoDB database, generates a personalized payment reminder PDF for each record using the Autype Bulk Render API, and sends the resulting ZIP archive by email via SMTP. Days overdue are calculated automatically from the due date at runtime. Supported output formats: PDF, DOCX (Word), ODT.

**Who is this for?**
Finance teams, accounting departments, and developers who want to automate recurring document generation from database records. A good fit for payment reminders, invoices, collection letters, dunning notices, or any business correspondence that goes out in batches.

**What this workflow does**
It reads all overdue invoices from a NocoDB table, maps each row to a set of document variables, and sends everything to the Autype Bulk Render API in a single batch request. The result is a ZIP archive with one PDF per invoice, which is sent by email via SMTP on a weekly schedule.

The included payment reminder template includes:
- Company logo in the header, page numbers in the footer
- Customer name and full address block
- Invoice details table with USD amounts
- Styled table with alternating row colors
- Automatic date insertion via {{date/DD.MM.YYYY}}
- Days overdue calculated at runtime from due_date (no separate DB column needed)

There is also a one-time setup flow (orange sticky note) that creates the Autype project and document template via API.

**NocoDB Table Structure**
Create a table called Overdue Invoices with the following columns:

| Column | Type | Example |
|---|---|---|
| customer_name | Text | Jane Smith |
| customer_address | Text | 742 Evergreen Terrace, Springfield, IL 62704 |
| invoice_number | Text | INV-2026-0042 |
| amount_due | Number | 1,250.00 |
| due_date | Date | 2026-02-15 |
| company_name | Text | TechStart Inc. |

> days_overdue is not stored in the database; the workflow calculates it from due_date at runtime. Amounts are rendered in USD. Change the template if you need a different currency.

**Test Data**
Use these two sample records to test the workflow:

Record 1:

| Column | Value |
|---|---|
| customer_name | Jane Smith |
| customer_address | 742 Evergreen Terrace, Springfield, IL 62704 |
| invoice_number | INV-2026-0042 |
| amount_due | 1250.00 |
| due_date | 2026-02-01 |
| company_name | TechStart Inc. |

Record 2:

| Column | Value |
|---|---|
| customer_name | Robert Chen |
| customer_address | 88 Innovation Drive, Suite 400, Austin, TX 73301 |
| invoice_number | INV-2026-0078 |
| amount_due | 3480.50 |
| due_date | 2026-01-15 |
| company_name | DataFlow GmbH |

**How it works**

One-time setup (run once, then disable):
1. Run Setup Once: triggers the setup flow manually.
2. Create Project: creates an Autype project named "Payment Reminders".
3. Create Document: creates the payment reminder template and returns the document ID.

Main flow (weekly):
1. Weekly Schedule: runs every Monday by default.
2. Get Overdue Invoices: fetches all NocoDB rows where due_date < today.
3. Build Bulk Items: maps rows to Autype variable sets and calculates daysOverdue from due_date.
4. Bulk Render Payment Reminders: sends all items in one API call, waits for completion, and downloads the ZIP.
5. Send ZIP via Email: sends the ZIP via SMTP to a print service, accounting inbox, or document archive.

**Setup**
1. Install n8n-nodes-autype via Settings → Community Nodes (self-hosted n8n only).
2. Get your API key at app.autype.com → API Keys.
3. Add an Autype API credential in n8n and update YOUR_CREDENTIAL_ID in each Autype node.
4. Set up a NocoDB instance and create the "Overdue Invoices" table with the columns listed above.
5. Add NocoDB API credentials in n8n.
6. Configure SMTP credentials in n8n for email delivery.
7. Run the one-time setup: click Run Setup Once, then copy the document id from the Create Document output and paste it into the Build Bulk Items code node (replace YOUR_DOCUMENT_ID). Then disable the setup nodes.
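The runtime daysOverdue calculation inside the Build Bulk Items code node could look roughly like this (a sketch only; the actual node also maps the remaining NocoDB columns to Autype variables):

```javascript
// Sketch of the runtime days-overdue calculation: derive the value
// from due_date instead of storing a days_overdue column in NocoDB.
function daysOverdue(dueDate, today = new Date()) {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  const diff = today - new Date(dueDate);
  // Clamp to 0 so not-yet-due invoices never show negative values.
  return Math.max(0, Math.floor(diff / MS_PER_DAY));
}

// Example with the first test record (due 2026-02-01):
console.log(daysOverdue("2026-02-01", new Date("2026-02-11"))); // 10
```

Computing this at render time is what lets the reminder text stay accurate no matter when the weekly schedule fires.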
Tip: it is easier to create and edit templates directly in the Autype web editor. The built-in AI agent can generate a full template from a single prompt. Once saved, the document ID is in the URL, e.g. https://app.autype.com/document/a70a811d-a745-46f8-8eeb-bb9f2eb8cegb. Use the JSON/Markdown switch to inspect the document JSON, or the Bulk tab to check the expected variable structure.

**Requirements**
- Self-hosted n8n instance (this is a community node, and community nodes are not available on n8n Cloud)
- Autype account with API key (free tier available; a paid plan is recommended for bulk rendering)
- n8n-nodes-autype community node installed
- NocoDB instance with API access
- SMTP server for email delivery

**How to customize**
- **Currency:** change $ {{amountDue}} in the document JSON to any other symbol if needed.
- **Output format:** set document.type to docx or odt for Word or OpenDocument output.
- **Data source:** the NocoDB node can be swapped for Google Sheets, Airtable, PostgreSQL, MySQL, or any other n8n data source; just map the field names in the Code node.
- **Document type:** replace the payment reminder layout with invoices, contracts, certificates, or any other document. Update the template and variable mappings to match.
- **Individual emails:** use Split In Batches to loop over the output and send each PDF to the corresponding customer directly.
- **Schedule:** adjust the Schedule Trigger to run daily, monthly, or swap it for a webhook trigger.
- **JSON syntax:** all available document elements are documented in the Autype JSON Syntax Reference.
- **Post-processing:** the Autype Tools API supports watermarks, password protection, compression, merging, and format conversion.
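For orientation, a bulk render request body might look roughly like the following. This is a hypothetical shape: the field names (documentId, items, variables) and variable keys are assumptions, not the documented Autype API; the Bulk tab in the Autype editor shows the actual structure your template expects.

```json
{
  "documentId": "YOUR_DOCUMENT_ID",
  "items": [
    {
      "variables": {
        "customerName": "Jane Smith",
        "invoiceNumber": "INV-2026-0042",
        "amountDue": "1,250.00",
        "daysOverdue": 10
      }
    }
  ]
}
```

One variable set per invoice row is what allows all reminders to be rendered in a single API call and returned as one ZIP.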