by Andrey
## Overview

This n8n workflow automates brand monitoring across social media platforms (Reddit, LinkedIn, X, and Instagram) using the AnySite API. It searches for posts mentioning your defined keywords, stores the results in n8n Data Tables, analyzes engagement and sentiment, and generates a detailed AI-powered social media report that is automatically sent to your email.

## Key Features

- **Multi-Platform Monitoring:** Reddit, LinkedIn, X (Twitter), and Instagram
- **Automated Post Collection:** Searches for new posts containing tracked keywords
- **Data Persistence:** Saves all posts and comments in structured Data Tables
- **AI-Powered Reporting:** Uses GPT (OpenAI API) to summarize and analyze trends, engagement, and risks
- **Automated Email Delivery:** Sends comprehensive daily/weekly reports via Gmail
- **Comment Extraction:** Collects and formats post comments for deeper sentiment analysis
- **Scheduling Support:** Can be executed manually or automatically (e.g., every night)

## How It Works

### Triggers

The workflow runs:

- **Automatically** (via Schedule Trigger) — e.g., once daily
- **Manually** (via Manual Trigger) — for testing or on-demand analysis

### Data Collection Process

1. **Keyword Loading:** Reads all keywords from the Data Table "Brand Monitoring Words".
2. **Social Media Search:** For each keyword, the workflow calls the AnySite API endpoints:
   - api/reddit/search/posts
   - api/linkedin/search/posts
   - api/twitter/search/posts (X)
   - api/instagram/search/posts
3. **Deduplication:** Before saving, checks whether a post already exists in the "Brand Monitoring Posts" table (a sketch of this check appears at the end of this template).
4. **Data Storage:** Inserts new posts into the Data Table with fields like type, title, url, vote_count, comment_count, etc.
5. **Comments Enrichment:** For Reddit and LinkedIn, retrieves and formats comments into JSON strings, then updates the record.
6. **AI Analysis & Report Generation:** The AI Agent (OpenAI GPT model) aggregates posts; analyzes sentiment, engagement, and risks; and generates a structured HTML email report.
7. **Email Sending:** Sends the final report via Gmail using your connected account.

## Setup Instructions

### Requirements

- Self-hosted or cloud n8n instance
- **AnySite API key** – https://AnySite.io
- **OpenAI API key** (GPT-4o or later)
- Connected Gmail account (for report delivery)

### Installation Steps

1. **Import the workflow:** Import the provided file: Social Media Monitoring.json
2. **Configure credentials:**
   - AnySite API: Add an access-token header with your API key
   - OpenAI: Add your OpenAI API key in the "OpenAI Chat Model" node
   - Gmail: Connect your Gmail account (OAuth2) in the "Send a message in Gmail" node
3. **Create required Data Tables:**

**1️⃣ Brand Monitoring Words**

| Field | Type | Description |
|-------|------|-------------|
| word | string | Keyword or brand name to monitor |

> Each row represents a single keyword to be tracked.
**2️⃣ Brand Monitoring Posts**

| Field | Type | Description |
|-------|------|-------------|
| type | string | Platform type (e.g., reddit, linkedin, x, instagram) |
| title | string | Post title or headline |
| url | string | Direct link to the post |
| created_at | string | Post creation date/time |
| subreddit_id | string | (Reddit only) subreddit ID |
| subreddit_alias | string | (Reddit only) subreddit alias |
| subreddit_url | string | (Reddit only) subreddit URL |
| subreddit_description | string | (Reddit only) subreddit description |
| comment_count | number | Number of comments |
| vote_count | number | Votes, likes, or reactions count |
| subreddit_member_count | number | (Reddit only) member count |
| post_id | string | Unique post identifier |
| text | string | Post body text |
| comments | string | Serialized comments (JSON string) |
| word | string | Matched keyword that triggered capture |

## AI Reporting Logic

1. Collects all posts gathered during the run
2. Aggregates them by keyword and platform
3. Evaluates sentiment, engagement, and risk signals
4. Summarizes findings with an executive summary and key metrics
5. Sends the Social Media Intelligence Report to your configured email

## Customization Options

- **Schedule:** Adjust the trigger frequency (daily, hourly, etc.)
- **Keywords:** Add or remove keywords in the **Brand Monitoring Words** table
- **Report Depth:** Modify the system prompts in the "AI Agent" node to customize tone and analysis focus
- **Email Recipient:** Change the target email address in the "Send a message in Gmail" node

## Troubleshooting

| Issue | Solution |
|-------|----------|
| No posts found | Check AnySite API key and keyword relevance |
| Duplicate posts | Verify Data Table deduplication setup |
| Report not sent | Confirm Gmail OAuth2 connection |
| AI Agent error | Ensure OpenAI API key and model selection are correct |

## Best Practices

- Use specific brand or product names as keywords for better precision
- Run the workflow daily to maintain fresh insights
- Periodically review and clean the Data Tables
- Adjust AI prompt parameters to refine the analytical tone
- Review AI-generated reports to ensure data quality

## Author Notes

Created for automated cross-platform brand reputation monitoring, enabling real-time insights into how your brand is discussed online.
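To make the deduplication step described above concrete, here is a minimal sketch of a Code node that drops posts whose post_id is already stored. The upstream node name "Get Existing Posts" is a hypothetical placeholder — match it to whatever node fetches rows from your "Brand Monitoring Posts" table.

```js
// Assumes new posts arrive on the main input and the rows already stored
// in the "Brand Monitoring Posts" Data Table were fetched by a previous
// node named "Get Existing Posts" (hypothetical name).
const existing = $('Get Existing Posts').all();
const knownIds = new Set(existing.map((row) => row.json.post_id));

// Pass through only posts whose post_id has not been stored yet.
return $input.all().filter((item) => !knownIds.has(item.json.post_id));
```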
by Takumi Oku
## Who's it for

This template is designed for Print-on-Demand (POD) business owners, independent artists, and e-commerce managers who want to automate the process of turning raw design files into listed products without manual data entry.

## How it works

This workflow acts as an automated merchandise factory that handles everything from image processing to marketing.

1. **Trigger:** The workflow starts when a new design file is uploaded to a specific Google Drive folder.
2. **Analyze:** OpenAI Vision analyzes the image to determine the subject, mood, and color palette, and assesses copyright risk.
3. **Process:** The image background is removed using Remove.bg, and the clean asset is uploaded to Cloudinary.
4. **Mockup:** The workflow generates realistic product mockups (e.g., T-shirts, tote bags) by overlaying the design onto base product images using Cloudinary transformations.
5. **Copywriting:** OpenAI writes an SEO-friendly product title, description, and tags based on the visual analysis.
6. **Draft:** A draft product is created in Shopify with the generated details and mockup image.
7. **Approval:** A message is sent to Slack with the product details and mockup. The workflow pauses and waits for a human to click "Approve" or "Reject".
8. **Publish & Promote:** If approved, the product is published to Shopify and automatically posted to Instagram and Pinterest. If rejected, a notification is sent to Slack.

## How to set up

1. **Base Images:** Upload your blank product images (e.g., a white t-shirt, a tote bag) to your Cloudinary account and note their Public IDs.
2. **Configuration:** Open the Workflow Configuration node and fill in all the required fields, including your API keys and the Cloudinary Public IDs for your base products.
3. **Credentials:** Configure the credentials for Google Drive, OpenAI, Shopify, Slack, Instagram, and Pinterest in their respective nodes.
4. **Folder ID:** Update the Google Drive Trigger node with the ID of the folder you want to watch.

## Requirements

- n8n (self-hosted or Cloud)
- Google Drive account
- OpenAI API key (access to the GPT-4o model is recommended for Vision capabilities)
- Remove.bg API key
- Cloudinary account
- Shopify store
- Slack workspace
- Instagram Business account
- Pinterest account

## How to customize

- **Mockups:** Modify the Code - Generate Mockup URLs node to add more product types (e.g., hoodies, mugs) by adding their Cloudinary Public IDs; a sketch of this node follows below.
- **Prompt Engineering:** Adjust the system prompt in the OpenAI - SEO Copywriting node to match your brand voice or language style.
- **Social Channels:** Add or remove nodes to support other platforms like Twitter (X) or Facebook Pages.
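As a reference point for the mockup step, here is a minimal sketch of what a Code - Generate Mockup URLs node could look like. The cloud name, Public IDs, and sizing values are placeholders; Cloudinary's layer syntax (l_<public_id> followed by fl_layer_apply) is standard, but tune the transformation parameters to your own base images.

```js
// Hypothetical inputs: your Cloudinary cloud name, the design's Public ID
// (set earlier in the workflow), and the Public IDs of your blank products.
const cloudName = 'your-cloud-name';
const designId = $json.designPublicId; // assumed field name

const baseProducts = [
  { type: 'tshirt', publicId: 'base_white_tshirt' },
  { type: 'totebag', publicId: 'base_tote_bag' },
];

// Build one overlay-transformation URL per product type.
// l_<id> adds the design as a layer; fl_layer_apply composites it onto the base.
const mockups = baseProducts.map((p) => ({
  productType: p.type,
  mockupUrl:
    `https://res.cloudinary.com/${cloudName}/image/upload/` +
    `l_${designId},w_500/fl_layer_apply,g_center/${p.publicId}.jpg`,
}));

return mockups.map((m) => ({ json: m }));
```

Adding a new product type is then just one more entry in the baseProducts array.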
by Yaron Been
## Description

This workflow automatically scans companies for signs of financial distress across filings, insolvency registers, and financial news. It helps procurement, credit, and risk teams detect early warning signals before a supplier or partner defaults.

## Overview

This workflow uses Bright Data to scrape financial filings, insolvency registers, and news sources for distress signals like bankruptcy, restructuring, or payment defaults. AI classifies the type and severity of distress, applies probability weighting and confidence guardrails, then generates structured business decisions, including:

- **Supplier Monitoring** risk status
- **Onboarding Approval** recommendations
- **Portfolio Exposure** classifications

All outputs are logged to Google Sheets for tracking and auditability. (A rough sketch of the weighting logic appears at the end of this listing.)

## Tools Used

- **n8n:** Automation platform orchestrating the workflow
- **Bright Data:** Scrapes filings, insolvency registers, and financial news without getting blocked
- **OpenRouter:** AI-powered distress classification, risk scoring, and business decision generation
- **Google Sheets:** Logs supplier risk status, onboarding decisions, portfolio exposure, and errors

## How to Install

1. **Import the Workflow:** Download the .json file and import it into your n8n instance.
2. **Configure Bright Data:** Add your Bright Data API credentials to all Bright Data nodes.
3. **Configure OpenRouter:** Add your OpenRouter API key for AI distress classification and decision generation.
4. **Set Up Google Sheets:** Create a spreadsheet following the "Google Sheets Setup" sticky note inside the workflow, then connect each Google Sheets node to your document.
5. **Customize:** Edit the configuration node to define the target company, country, risk indicators, and monitoring scope.

## Use Cases

- **Procurement Teams:** Monitor supplier financial health and get alerts before disruptions hit your supply chain.
- **Credit Risk Analysts:** Screen new vendors or partners for bankruptcy signals and insolvency red flags.
- **Onboarding Workflows:** Automate go/no-go decisions for new supplier or partner approvals.
- **Portfolio Managers:** Track financial exposure across your vendor or investment portfolio.
- **Finance Teams:** Detect early signs of distress in key business relationships before they become critical.

## Connect with Me

- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

Tags: #n8n #automation #brightdata #webscraping #creditrisk #financialdistress #riskmanagement #suppliermonitoring #supplychainrisk #insolvency #bankruptcy #duediligence #vendorscreening #portfoliorisk #financialanalysis #n8nworkflow #workflow #nocode #businessintelligence #riskassessment #creditanalysis #procurementautomation #supplierrisk #financialmonitoring #earlywarning
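As a rough illustration of the probability weighting and confidence guardrails mentioned in the overview: the sketch below is invented for explanation only — the signal names, weights, and thresholds are assumptions, and in the actual workflow this logic is driven by the AI classification output.

```js
// Rough sketch: probability-weighted distress scoring with a confidence guardrail.
const signals = $json.signals || []; // e.g. [{ type: 'insolvency_filing', probability: 0.9, confidence: 0.8 }]

const weights = {
  insolvency_filing: 1.0,
  payment_default: 0.8,
  restructuring: 0.6,
  negative_news: 0.3,
};

// Guardrail: ignore classifications the model is not confident about.
const usable = signals.filter((s) => s.confidence >= 0.6);

// Weight each remaining signal by its estimated probability.
const risk = usable.reduce((sum, s) => sum + (weights[s.type] || 0.2) * s.probability, 0);

const status = risk >= 1.0 ? 'HIGH_RISK' : risk >= 0.5 ? 'WATCH' : 'OK';
return [{ json: { ...$json, riskScore: Number(risk.toFixed(2)), status } }];
```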
by shae
## How it works

This Lead Capture & Auto-Qualification workflow transforms raw leads into qualified prospects through intelligent automation. Here's the high-level flow:

Lead Intake → Data Validation → Enrichment → Scoring → Smart Routing → CRM Integration & Notifications

The system captures leads from any source, validates the data, enriches it with company intelligence, scores it against qualification criteria, and automatically routes high-value prospects to sales while nurturing lower-priority leads.

## Set up steps

**Time to set up:** approximately 30-45 minutes
**Prerequisites:** active accounts with HubSpot, Clearbit, Apollo, and Slack

### Step 1: Import Workflow (2 minutes)

- Copy the workflow JSON and import it into your n8n instance
- The workflow will appear with all nodes and sticky note documentation

### Step 2: Configure Environment Variables (5 minutes)

Set these in your n8n environment:

- APOLLO_API_URL
- SLACK_SALES_CHANNEL_ID
- SLACK_MARKETING_CHANNEL_ID
- CRM_ASSIGNMENT_URL

### Step 3: Set Up API Credentials (15 minutes)

Create credential connections for:

- Clearbit API (enrichment)
- Apollo API (HTTP Header Auth)
- HubSpot API (CRM integration)
- Slack API (notifications)

### Step 4: Customize Scoring Logic (10 minutes)

- Review the qualification criteria in the Code node (a sketch follows below)
- Adjust scoring weights based on your ideal customer profile
- Modify industry targeting and company size thresholds

### Step 5: Test & Activate (8 minutes)

- Send test webhook requests to validate the flow
- Verify CRM contact creation and Slack notifications
- Activate the workflow for live lead processing
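For orientation, here is a minimal sketch of what the scoring Code node might contain. The field names, weights, and the 60-point routing threshold are assumptions for illustration — replace them with the criteria from your ideal customer profile.

```js
// Hypothetical qualification scoring — adjust fields, weights, thresholds.
const lead = $json;
let score = 0;

// Company size: larger companies score higher.
if (lead.employeeCount >= 200) score += 30;
else if (lead.employeeCount >= 50) score += 15;

// Industry targeting.
const targetIndustries = ['software', 'fintech', 'healthcare'];
if (targetIndustries.includes((lead.industry || '').toLowerCase())) score += 25;

// Seniority of the contact.
if (/vp|director|head|chief/i.test(lead.jobTitle || '')) score += 20;

// Business email domains beat free providers (empty emails slip through; a sketch).
if (!/(gmail|yahoo|outlook)\.com$/i.test(lead.email || '')) score += 10;

// Route: high scores go straight to sales, the rest into nurturing.
return [{ json: { ...lead, score, route: score >= 60 ? 'sales' : 'nurture' } }];
```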
by PDF Vector
## Overview

Transform your accounts payable department with this enterprise-grade invoice processing solution. This workflow automates the entire invoice lifecycle, from document ingestion through payment processing. It handles invoices from multiple sources (Google Drive, email attachments, API submissions), extracts data using AI, validates against purchase orders, routes for appropriate approvals based on amount thresholds, and integrates seamlessly with your ERP system. The solution includes vendor master data management, duplicate invoice detection, real-time spend analytics, and complete audit trails for compliance.

## What You Can Do

This comprehensive workflow creates an intelligent invoice processing pipeline that monitors multiple input channels (Google Drive, email, webhooks) for new invoices and automatically extracts data from PDFs, images, and scanned documents using AI. It validates vendor information against your master database, matches invoices to purchase orders, and detects discrepancies. The workflow implements multi-level approval routing based on invoice amount and department, prevents duplicate payments through intelligent matching algorithms, and integrates with QuickBooks, SAP, or other ERP systems. Additionally, it generates real-time dashboards showing processing metrics and cash flow insights while sending automated reminders for pending approvals.

## Who It's For

Perfect for medium to large businesses, accounting departments, and financial service providers processing more than 100 invoices monthly across multiple vendors. Ideal for organizations that need to enforce approval hierarchies and spending limits, require integration with existing ERP/accounting systems, want to reduce processing time from days to minutes, need audit trails and compliance reporting, and seek to eliminate manual data entry errors and duplicate payments.

## The Problem It Solves

Manual invoice processing creates significant operational challenges, including data entry errors (3-5% error rate), processing delays (8-10 days per invoice), duplicate payments (0.1-0.5% of invoices), approval bottlenecks causing late fees, lack of visibility into pending invoices and cash commitments, and compliance issues from missing audit trails. This workflow reduces processing time by 80%, eliminates data entry errors, prevents duplicate payments, and provides complete visibility into your payables process.

## Setup Instructions

1. **Google Drive Setup:** Create dedicated folders for invoice intake and configure access permissions
2. **PDF Vector Configuration:** Set up API credentials with appropriate rate limits for your volume
3. **Database Setup:** Deploy the provided schema for the vendor master and invoice tracking tables
4. **Email Integration:** Configure IMAP credentials for invoice email monitoring (optional)
5. **ERP Connection:** Set up API access to your accounting system (QuickBooks, SAP, etc.)
6. **Approval Rules:** Define approval thresholds and routing rules in the configuration node
7. **Notification Setup:** Configure Slack/email for approval notifications and alerts

## Key Features

- **Multi-Channel Invoice Ingestion:** Automatically collect invoices from Google Drive, email attachments, and API uploads
- **Advanced OCR and AI Extraction:** Process any invoice format, including handwritten notes and poor-quality scans
- **Vendor Master Integration:** Validate and enrich vendor data, maintaining a clean vendor database
- **3-Way Matching:** Automatically match invoices to purchase orders and goods receipts
- **Dynamic Approval Routing:** Route based on amount, department, vendor, or custom rules
- **Duplicate Detection:** Prevent duplicate payments using fuzzy matching algorithms (see the sketch below)
- **Real-Time Analytics:** Track KPIs like processing time, approval delays, and early payment discounts
- **Exception Handling:** Intelligent routing of problematic invoices for manual review
- **Audit Trail:** Complete tracking of all actions, approvals, and system modifications
- **Payment Scheduling:** Optimize payment timing to capture discounts and manage cash flow

## Customization Options

This workflow can be customized to add industry-specific extraction fields, implement GL coding rules based on vendor or amount, create department-specific approval workflows, add currency conversion for international invoices, integrate with additional systems (banks, expense management), configure custom dashboards and reporting, set up vendor portals for invoice status inquiries, and implement machine learning for automatic GL coding suggestions.

**Note:** This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
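To give a feel for the duplicate-detection feature, here is a hedged sketch of a matching check. The field names, the node name "Load Invoice History", and the tolerance values are assumptions — the workflow's actual matching rules may be more sophisticated.

```js
// Normalize strings so "ACME Corp." and "acme-corp" compare equal.
function normalize(s) {
  return (s || '').toLowerCase().replace(/[^a-z0-9]/g, '');
}

// A candidate is a likely duplicate when vendor and invoice number match
// after normalization and the amounts agree within a cent.
function isLikelyDuplicate(incoming, stored) {
  const sameVendor = normalize(incoming.vendorName) === normalize(stored.vendorName);
  const sameNumber = normalize(incoming.invoiceNumber) === normalize(stored.invoiceNumber);
  const closeAmount = Math.abs(incoming.amount - stored.amount) < 0.01;
  return sameVendor && sameNumber && closeAmount;
}

const incoming = $json; // the freshly extracted invoice
const history = $('Load Invoice History').all(); // hypothetical node name

const duplicate = history.some((item) => isLikelyDuplicate(incoming, item.json));
return [{ json: { ...incoming, duplicate } }];
```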
by David Olusola
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## WordPress to Blotato Social Publisher

### Overview

This automation monitors your WordPress site for new posts and automatically creates platform-specific social media content using AI, then posts it to Twitter, LinkedIn, and Facebook via Blotato.

### What it does

1. Monitors the WordPress site for new posts every 30 minutes
2. Filters for posts published in the last hour to avoid duplicates
3. Processes each new post individually
4. AI generates optimized content for each social platform (Twitter, LinkedIn, Facebook)
5. Extracts platform-specific content from the AI response (see the sketch below)
6. Publishes to all three social media platforms via the Blotato API

### Setup Required

**WordPress Connection**
- Configure WordPress credentials in the "Check New Posts" node
- Enter your WordPress site URL, username, and password/app password

**Blotato Social Media API Setup**
- Get your Blotato API key from your Blotato account
- Configure API credentials in the Blotato connection node
- Map each platform (Twitter, LinkedIn, Facebook) to the correct Blotato channel

**AI Configuration**
- Set up Google Gemini API credentials
- Connect the Gemini model to the "AI Social Content Creator" node

### Customization Options

- **Posting Frequency:** Modify the schedule trigger (default: every 30 minutes)
- **Content Tone:** Adjust the AI system message and prompts for different writing styles
- **Post Filtering:** Change the time window in the WordPress node (default: last hour)
- **Platform Selection:** Add or remove any social media platforms you want
- **Hashtags:** Modify the hashtag strategies

### Testing

- Run the workflow manually to test connections
- Verify posts appear correctly on all platforms
- Monitor for API rate limit issues

### Features

- Platform-optimized content (hashtags, character limits, professional tone)
- Duplicate prevention system
- Batch processing for multiple posts
- Featured image support
- Customizable posting frequency

### Need Help?

Reach out for n8n coaching or one-on-one consultation.
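Here is a minimal sketch of the extraction step referenced above. It assumes the "AI Social Content Creator" node was prompted to return JSON with twitter/linkedin/facebook keys — the input field names and that response contract are assumptions, so align them with your actual prompt.

```js
// Parse the AI response into per-platform posts.
const raw = $json.text || $json.output || ''; // assumed output field names

// Strip a markdown code fence if the model wrapped its JSON in one.
const cleaned = raw.replace(/^```(?:json)?\s*/i, '').replace(/```\s*$/, '');

let content;
try {
  content = JSON.parse(cleaned);
} catch (e) {
  throw new Error('AI response was not valid JSON: ' + e.message);
}

// Enforce Twitter's 280-character limit defensively.
const twitter = (content.twitter || '').slice(0, 280);

return [{
  json: {
    twitter,
    linkedin: content.linkedin || '',
    facebook: content.facebook || '',
  },
}];
```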
by Trung Tran
## Automating AWS S3 Operations with n8n: Buckets, Folders, and Files

Watch the demo video below.

This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

### Who's it for

This workflow is ideal for:

- **Designers & marketers** who need quick, on-demand AI-generated visuals
- **Developers & automation builders** exploring **AI-driven workflows** integrated with cloud storage
- **Educators or trainers** creating tutorials or exercises on AI image generation
- **Businesses** looking to automate **image content pipelines** with AWS S3 storage

### How it works / What it does

1. **Trigger:** The workflow starts manually when you click "Execute Workflow".
2. **Edit Fields:** You provide input fields such as the image description, resolution, or naming convention.
3. **Create AWS S3 Bucket:** Automatically creates a new S3 bucket if it doesn't exist.
4. **Create a Folder:** Inside the bucket, a folder is created to organize generated images.
5. **Prompt Generation Agent:** An AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. **Generate an Image:** The refined prompt is used to generate an image using AI.
7. **Upload File to S3:** The generated image is uploaded to the AWS S3 bucket for secure storage.

This workflow showcases how to combine AI and cloud storage seamlessly in an automated pipeline.

### How to set up

1. Import the workflow into n8n.
2. Configure the following credentials:
   - AWS S3 (Access Key, Secret Key, Region)
   - OpenAI API key (for Chat + Image models)
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

### Requirements

- **n8n** (latest version with AI Tool Node support)
- **AWS account** with S3 permissions to create buckets and upload files
- **OpenAI API key** (for prompt refinement and image generation)
- Basic familiarity with AWS S3 structure (buckets, folders, objects)

### How to customize the workflow

- **Custom Buckets:** Replace the auto-create step with an existing S3 bucket.
- **Image Variations:** Generate multiple image variations per prompt by looping the image generation step.
- **File Naming:** Adjust file naming conventions (e.g., timestamp, user input); a sketch follows below.
- **Metadata:** Add metadata such as tags, categories, or owner info when uploading to S3.
- **Alternative Storage:** Swap AWS S3 for **Google Cloud Storage, Azure Blob, or Dropbox**.
- **Trigger Options:** Replace the manual trigger with a **Webhook, Form Submission, or Scheduler** for automation.

✅ This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
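As a starting point for the file-naming customization, here is a small sketch of a Code node that builds a unique S3 object key from the user's description plus a timestamp. The folder prefix and the imageDescription input field are assumptions — align them with your Edit Fields node.

```js
// Slug-ify the user's description and prepend an ISO timestamp for uniqueness.
const description = ($json.imageDescription || 'image') // assumed field name
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-') // non-alphanumerics become hyphens
  .replace(/^-|-$/g, '')
  .slice(0, 40);

const timestamp = new Date().toISOString().replace(/[:.]/g, '-');

return [{
  json: {
    ...$json,
    s3Key: `generated-images/${timestamp}-${description}.png`,
  },
}];
```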
by Mirai
## Icebreaker Generator powered by ChatGPT

This n8n template crawls a company website, distills the content with AI, and produces a short, personalized icebreaker you can drop straight into your cold emails or CRM. Perfect for SDRs, founders, and agencies who want "real research" at scale.

### Good to know

- Works from a Google Sheet of leads (domain + LinkedIn, etc.).
- Handles common scrape failures gracefully and marks the lead's Status as Error.
- Uses ChatGPT to summarize pages and craft one concise, non-generic opener.
- Output is written back to the same Google Sheet (IceBreaker, Status).
- You'll need Google credentials (for Sheets) and OpenAI credentials (for GPT).

### How it works

**Step 1 — Discover internal pages**
- Reads a lead's website from Google Sheets.
- Scrapes the home page and extracts all links.
- A Code node cleans the list (removes emails/anchors/social/external domains, normalizes paths, de-duplicates) and returns unique internal URLs (see the sketch after this section).
- If the home page is unreachable or no links are found, the lead is marked Error and the workflow moves on.

**Step 2 — Convert pages to text**
- Visits each collected URL and converts the response into HTML/Markdown text for analysis.
- You can cap the depth/amount with the Limit node.

**Step 3 — Summarize & generate the icebreaker**
- A GPT node produces a two-paragraph abstract for each page (JSON output).
- An Aggregate node merges all abstracts for the company.
- Another GPT node turns the merged summary into a personalized, multi-line icebreaker (spartan tone, non-obvious details).
- The result is written back to Google Sheets (IceBreaker = ..., Status = Done).
- The workflow loops to the next lead.

### How to use

**Prepare your sheet**
- Include at least organization_website_url, linkedin_url, and any other lead fields you track.
- Keep empty IceBreaker and Status columns for the workflow to fill.

**Connect credentials**
- Google Sheets: use the Google account that owns the sheet and link it in the nodes.
- OpenAI: add your API key to the GPT nodes ("Summarize Website Page", "Generate Multiline Icebreaker").

**Run the workflow**
- Start with the Manual Trigger (or replace it with a schedule/webhook).
- Adjust Limit if you want fewer or more pages per company.
- Watch Status (Done/Error) and IceBreaker populate in your sheet.

### Requirements

- n8n instance
- Google Sheets account & access to the leads sheet
- OpenAI API key (for summarization + icebreaker generation)

### Customizing this workflow

- **Tone & format:** Tweak the prompts (both GPT nodes) to match your brand voice and structure.
- **Depth:** Change the Limit node to scan more or fewer pages; add simple rules to prioritize certain paths (e.g., /about, /blog/*).
- **Fields:** Write additional outputs (e.g., Company Summary, Key Products, Recent News) back to new sheet columns.
- **Lead selection:** Filter rows by Status = "" (or custom flags) to only process untouched leads.
- **Error handling:** Expand the Error branch to retry with www. or HTTP→HTTPS, or to log diagnostics in a separate tab.

### Tips

- Keep icebreakers short, specific, and free of clichés—small, non-obvious details from the site convert best.
- Start with a small batch to validate quality, then scale up.
- Consider adding a rate limit if target sites throttle requests.

In short: Sheet → crawl internal pages → AI abstracts → single tailored icebreaker → write back to the sheet, then repeat for the next lead. This automation pairs well with our workflow for automated cold emailing.
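Here is a minimal sketch of the link-cleaning Code node from Step 1. The input field name ("links") is an assumption; the filtering itself follows the behavior described above (drop mailto/anchors/social/external links, normalize, de-duplicate).

```js
// Resolve every extracted link against the home page, then keep only
// unique internal HTTP(S) URLs.
const base = new URL($json.organization_website_url);
const social = /facebook|twitter|x\.com|linkedin|instagram|youtube|tiktok/i;

const unique = new Set();
for (const href of $json.links || []) { // assumed field name
  let url;
  try {
    url = new URL(href, base); // resolves relative paths like /about
  } catch {
    continue; // skip malformed links
  }
  if (url.protocol !== 'https:' && url.protocol !== 'http:') continue; // mailto:, tel:
  if (url.hostname.replace(/^www\./, '') !== base.hostname.replace(/^www\./, '')) continue; // external domains
  if (social.test(url.hostname)) continue; // social profiles
  url.hash = ''; // strip #anchors
  unique.add(url.href.replace(/\/$/, '')); // normalize trailing slash + de-duplicate
}

return [...unique].map((link) => ({ json: { link } }));
```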
by Shahzaib Anwar
## 📌 Overview

This workflow automatically processes incoming Shopify/Gmail leads and pushes them into HubSpot as both Contacts and Deals. It helps sales and marketing teams capture leads instantly, enrich CRM data, and avoid missed opportunities.

## ⚡ How it works

1. **Trigger:** Watches for new emails in Gmail.
2. **Extract Data:** Parses the email body (Name, Email, City, Phone, Message, Product URL/Title); a sketch of this parser follows below.
3. **Condition:** Checks that the sender is Shopify before processing.
4. **HubSpot:** Creates/updates a Contact with the customer details, then creates a Deal associated with that Contact.

## 🎯 Benefits

- 📥 Automates lead capture → CRM
- 🚫 Eliminates manual copy-paste from Gmail
- 🔄 Real-time sync between Gmail and HubSpot
- 📈 Improves sales follow-up speed and accuracy

## 🛠 Setup Steps

1. Import this workflow into your n8n instance.
2. Connect your Gmail and HubSpot credentials.
3. Replace the HubSpot Deal Stage ID with your own pipeline stage.
4. (Optional) Adjust the Code node regex if your email format differs.
5. Activate the workflow and test with a sample lead email.

## 📝 Example Email Format

- Name: John Doe
- Email: john@example.com
- City: London
- Phone: +44 7000 000000
- Body: Interested in product
- Product Url: https://example.com/product
- Product Title: Sample Product

## Sticky Notes

- **Gmail Trigger:** 📧 Watches for new emails in Gmail. Polls every minute and passes email data into the flow.
- **Get a Message:** 📩 Fetches the full Gmail message content (body + metadata) for parsing.
- **Extract From Email:** 🔍 Extracts the sender's email address from Gmail to identify the source.
- **If Sender is Shopify:** ✅ Condition node that ensures only Shopify-originated emails/leads are processed.
- **Code Node (Regex Parser):** 🧾 Parses the email body using regex to extract Name, Email, City, Phone, Message, Product URL, and Title.
- **Edit Fields (Set Node):** 📝 Cleans and structures the extracted fields into proper JSON format before sending to HubSpot.
- **HubSpot → Create/Update Contact:** 👤 Creates or updates a HubSpot Contact with the extracted lead details.
- **HubSpot → Create Deal:** 💼 Creates a HubSpot Deal linked to the Contact, including campaign/product information.
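To show the Regex Parser node in action, here is a minimal sketch keyed to the example email format above. The input field names are assumptions; the labels must match your actual Shopify notification emails.

```js
// Capture everything after "Label:" up to the end of the line.
const body = $json.text || $json.body || ''; // assumed input field names

function field(label) {
  const match = body.match(new RegExp(label + ':\\s*(.+)', 'i'));
  return match ? match[1].trim() : '';
}

return [{
  json: {
    name: field('Name'),
    email: field('Email'),
    city: field('City'),
    phone: field('Phone'),
    message: field('Body'),
    productUrl: field('Product Url'),
    productTitle: field('Product Title'),
  },
}];
```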
by Khairul Muhtadin
**Decodo Amazon Product Recommender** delivers instant, AI-powered shopping recommendations directly through Telegram. Send any product name and receive an Amazon product analysis featuring price comparisons, ratings, sales data, and categorized recommendations (budget, premium, best value) in under 40 seconds, eliminating hours of manual research.

## Why Use This Workflow?

- **Time Savings:** Reduce product research from 45+ minutes to under 30 seconds
- **Decision Quality:** Compare 20+ products automatically with AI-curated recommendations
- **Zero Manual Work:** Complete automation from message input to formatted recommendations

## Ideal For

- **E-commerce Entrepreneurs:** Quickly research competitor products, pricing strategies, and market trends for inventory decisions
- **Smart Shoppers & Deal Hunters:** Get instant product comparisons with sales volume data and discount tracking before purchasing
- **Product Managers & Researchers:** Analyze Amazon marketplace positioning, customer sentiment, and pricing ranges for competitive intelligence

## How It Works

1. **Trigger:** User sends a product name via Telegram (e.g., "iPhone 15 Pro Max case")
2. **AI Validation:** Gemini 2.5 Flash extracts core product keywords and validates input authenticity
3. **Data Collection:** The Decodo API scrapes Amazon search results, extracting prices, ratings, reviews, sales volume, and product URLs
4. **Processing:** A JavaScript node cleans the data, removes duplicates, calculates value scores, and categorizes products into top picks, budget, premium, best value, and most popular (see the sketch at the end of this section)
5. **Intelligence Layer:** AI generates personalized recommendations with Telegram-optimized markdown formatting, shortened product names, and clean Amazon URLs
6. **Output & Delivery:** Formatted recommendations are sent to the user with categorized options and direct purchase links
7. **Error Handling:** Admin notifications go to a separate Telegram channel for workflow monitoring

## Setup Guide

### Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Decodo account | Essential | Amazon product data scraping |
| Telegram bot token | Essential | Chat interface for user interactions |
| Google Gemini API | Essential | AI-powered product validation and recommendations |
| Telegram account | Optional | Admin error notifications |

### Installation Steps

1. Import the JSON file into your n8n instance.
2. Configure credentials:
   - **Decodo API:** Sign up at decodo.com → Dashboard → Scraping APIs → Web Advanced → copy the BASIC AUTH TOKEN
   - **Telegram Bot:** Message @BotFather on Telegram → /newbot → copy the HTTP API token (format: 123456789:ABCdefGHI...)
   - **Google Gemini:** Obtain an API key from Google AI Studio for the Gemini 2.5 Flash model
3. Update environment-specific values:
   - Replace YOUR-CHAT-ID in the "Notify Admin" node with your Telegram chat ID for error notifications
   - Verify that the Telegram webhook IDs are properly configured
4. Customize settings:
   - Adjust the AI prompt in the "Generate Recommendations" node for different output formats
   - Set character limits (default: 2500) for the Telegram message length
5. Test execution:
   - Send a test message to your Telegram bot: "iPhone 15 Pro"
   - Verify that processing status messages appear
   - Confirm that recommendations arrive with properly formatted links

## Customization Options

**Basic Adjustments:**
- **Character Limit:** Modify 2500 in the AI prompt to adjust response length (Telegram max: 4096)

**Advanced Enhancements:**
- **Multi-language Support:** Add language detection and translation nodes for international users
- **Price Tracking:** Integrate Google Sheets to log historical prices and trigger alerts on drops
- **Image Support:** Enable Telegram photo messages with product images from the scraping results

## Troubleshooting

| Problem | Cause | Solution |
|---------|-------|----------|
| "No product detected" for valid inputs | AI validation too strict or ambiguous query | Add specific product details (model number, brand) to the user input |
| Empty recommendations returned | Decodo API rate limit or Amazon blocking | Wait 60 seconds between requests; verify Decodo account status |
| Telegram message formatting broken | Special characters in product names | Ensure the Telegram markdown mode is set to "Markdown" (legacy), not "MarkdownV2" |

## Use Case Examples

**Scenario 1: E-commerce Store Owner**
- **Challenge:** Needs to quickly assess competitor pricing and product positioning for new inventory decisions without spending hours browsing Amazon
- **Solution:** Sends "wireless earbuds" to the bot and receives a categorized analysis of 20+ products with price ranges ($15-$250), top sellers, and discount opportunities
- **Result:** Identifies a $35-$50 price gap in the market, sources a comparable product, and achieves a 40% profit margin

**Scenario 2: Smart Shopping Enthusiast**
- **Challenge:** Wants to buy a laptop backpack but is overwhelmed by 200+ Amazon options with varying prices and unclear value propositions
- **Solution:** Messages "laptop backpack" to the bot and gets AI recommendations sorted by budget ($30), premium ($50+), best value (highest discount + good ratings), and most popular (by sales volume)
- **Result:** Purchases the "Best Value" recommendation with a 35% discount, saving $18 and 45 minutes of research time

Created by: Khaisa Studio
Category: AI | Productivity | E-commerce
Tags: amazon, telegram, ai, product-research, shopping, automation, gemini
Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
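For reference, here is a hedged sketch of the processing node from step 4 of "How It Works": de-duplicate scraped products, compute a value score, and bucket them into categories. The field names and the scoring formula are invented for illustration — the template's actual logic may weigh things differently.

```js
// De-duplicate by URL, score, then categorize.
const seen = new Set();
const products = [];

for (const { json: p } of $input.all()) {
  if (!p.url || seen.has(p.url)) continue; // drop duplicates by URL
  seen.add(p.url);

  // Naive value score: rating quality scaled by review volume, plus discount.
  const discount = p.listPrice ? (p.listPrice - p.price) / p.listPrice : 0;
  const score = (p.rating || 0) * Math.log10((p.reviewCount || 0) + 1) + discount * 5;
  products.push({ ...p, valueScore: Number(score.toFixed(2)) });
}

products.sort((a, b) => b.valueScore - a.valueScore);
const prices = products.map((p) => p.price).sort((a, b) => a - b);
const median = prices[Math.floor(prices.length / 2)] || 0;

return [{
  json: {
    topPicks: products.slice(0, 3),
    budget: products.filter((p) => p.price < median).slice(0, 3),
    premium: products.filter((p) => p.price >= median).slice(0, 3),
    mostPopular: [...products]
      .sort((a, b) => (b.reviewCount || 0) - (a.reviewCount || 0))
      .slice(0, 3),
  },
}];
```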
by Ranjan Dailata
## Who this is for

This workflow is designed for:

- Automation engineers building AI-powered data pipelines
- Product managers & analysts needing structured insights from web pages
- Researchers & content teams extracting summaries from documentation or articles
- HR, compliance, and knowledge teams converting unstructured web content into structured records
- n8n self-hosted users leveraging advanced scraping and LLM enrichment

It is ideal for anyone who wants to transform any public URL into structured data plus a clean summary automatically.

## What problem this workflow solves

Web content is often unstructured, verbose, and inconsistent, making it difficult to:

- Extract structured fields reliably
- Generate consistent summaries
- Reuse data across spreadsheets, dashboards, or databases
- Eliminate manual copy-paste and interpretation

This workflow solves the problem of turning arbitrary web pages into machine-readable JSON and human-readable summaries, without custom scrapers or manual parsing logic.

## What this workflow does

The workflow integrates Decodo, Google Gemini, and Google Sheets to perform automated extraction of structured data. Here's how it works, step by step:

1. **Input Setup:** The workflow begins when the user executes it manually or passes a valid URL. The input includes url.
2. **Extraction with Decodo:** Accepts any valid URL as input and scrapes the page content using Decodo.
3. **AI Enrichment:** Uses Google Gemini to extract structured data in JSON format and to generate a concise, factual summary.
4. **JSON Parsing & Merging:** A Code node cleans and safely parses the AI-generated JSON for reliable downstream use (see the sketch after the setup steps); a Merge node combines the structured data with the AI-generated summary.
5. **Data Storage in Google Sheets:** The Google Sheets node appends or updates the record, storing the structured JSON and summary in a connected spreadsheet.
6. **End Output:** Unified, machine-readable JSON plus an executive-level summary, suitable for data analysis or downstream automation.

## Setup Instructions

### Prerequisites

- **n8n account** with workflow editor access
- **Decodo API credentials:** register, log in, and obtain the Basic Authentication Token via the Decodo Dashboard
- **Google Gemini (PaLM) API access**
- **Google Sheets OAuth credentials**

### Setup Steps

1. Import the workflow into your n8n instance.
2. Configure credentials:
   - Add your Decodo API credentials in the Decodo node.
   - Connect your Google Gemini (PaLM) credentials for both AI nodes.
   - Authenticate your Google Sheets account.
3. Edit the input node: in the Set the Input Fields node, replace the default URL with your desired page or dynamic data source.
4. Run the workflow: trigger it manually, or via webhook integration for automation.
5. Verify that the structured data and summary are written to the linked Google Sheet.
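Here is a minimal sketch of the JSON-cleaning Code node mentioned in step 4. The input field names are assumptions; the pattern itself (strip markdown fences, parse defensively, flag failures for routing) matches the "cleans and parses AI-generated JSON safely" behavior described above.

```js
// Strip any markdown code fence the model may add, then parse safely.
const raw = $json.output || $json.text || ''; // assumed output field names

const cleaned = raw
  .replace(/^```(?:json)?\s*/i, '') // leading ```json fence
  .replace(/```\s*$/, '')           // trailing fence
  .trim();

let data;
try {
  data = JSON.parse(cleaned);
} catch (err) {
  // Surface the parse failure so an IF node downstream can route it.
  return [{ json: { parseError: true, message: err.message, raw: cleaned } }];
}

return [{ json: { parseError: false, ...data } }];
```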
## How to customize this workflow to your needs

You can easily extend or adapt this workflow:

- **Modify the structured output:** Change the Gemini extraction prompt to match your own JSON schema; add required fields such as authors, dates, entities, or metadata.
- **Improve summarization:** Adjust the summary length or tone (technical, executive, simplified); add multi-language summarization using Gemini.
- **Change the output destination:** Replace Google Sheets with databases (Postgres, MySQL), Notion, Slack/email, or file storage (JSON, CSV).
- **Add validation or filtering:** Insert IF nodes to reject incomplete data, detect errors or hallucinated output, and trigger alerts for malformed JSON.
- **Scale the workflow:** Replace the manual trigger with a webhook, a scheduled trigger, or batch URL processing.

## Summary

This workflow provides a powerful, generic solution for converting unstructured web pages into structured, AI-enriched datasets. By combining Decodo for scraping, Google Gemini for intelligence, and Google Sheets for persistence, it enables repeatable, scalable, production-ready data extraction without custom scrapers or brittle parsing logic.
by Trung Tran
## Automated AWS IAM Compliance Workflow for MFA Enforcement and Access Key Deactivation

> This workflow leverages AWS IAM APIs and n8n automation to ensure strict security compliance by continuously monitoring IAM users for MFA (Multi-Factor Authentication) enforcement.

### Who's it for

This workflow is designed for DevOps, Security, or Cloud Engineers responsible for maintaining IAM security compliance in AWS accounts. It's ideal for teams who want to enforce MFA usage and automatically disable access for non-compliant IAM users.

### How it works / What it does

This automated workflow performs a daily check to detect IAM users without an MFA device and deactivates their access keys.

Step by step:

1. **Daily scheduler:** Triggers the workflow once a day.
2. **Get many users:** Retrieves a list of all IAM users in the account.
3. **Get IAM User MFA Devices:** Calls the AWS API to get MFA device info for each user.
4. **Filter out IAM users with MFA:** Keeps only users without any MFA device.
5. **Send warning message(s):** Sends Slack alerts for users who do not have MFA enabled.
6. **Get User Access Key(s):** Fetches access keys for each non-MFA user.
7. **Parse the list of user access key(s):** Extracts and flattens key information like AccessKeyId, Status, and UserName (see the sketch below).
8. **Filter out inactive keys:** Keeps only active access keys for further action.
9. **Deactivate Access Key(s):** Calls the AWS API to deactivate each active key for non-MFA users.

### How to set up

1. Configure AWS credentials in your environment (IAM role or AWS access key with the required permissions).
2. Connect Slack via the Slack node for alerting (set the channel and credentials).
3. Set the scheduler to your preferred frequency (e.g., daily at 9 AM).
4. Adjust the Slack message template or filtering conditions as needed.

### Requirements

- IAM user or role credentials with the following AWS IAM permissions:
  - iam:ListUsers
  - iam:ListMFADevices
  - iam:ListAccessKeys
  - iam:UpdateAccessKey
- Slack credentials (bot token with the chat:write permission).
- An n8n environment with:
  - Slack integration
  - AWS credentials (set via environment variables or the credentials manager)

### How to customize the workflow

- **Alert threshold:** Instead of immediate deactivation, delay the action (e.g., alert first, wait 24 hours, then disable).
- **Change notification channel:** Modify the Slack node to send alerts to a different channel, or add email integration.
- **Whitelist exceptions:** Add a Set or IF node to exclude specific usernames (e.g., service accounts).
- **Add audit logging:** Use Google Sheets, Airtable, or a database to log which users were flagged or had access disabled.
- **Extend access checks:** Include a console password check (GetLoginProfile) if needed.
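For clarity on the parsing step, here is a minimal sketch of a Code node that flattens the ListAccessKeys response into one item per key. The AccessKeyMetadata shape (UserName, AccessKeyId, Status) follows AWS's documented response for iam:ListAccessKeys; the exact input field on each item may differ depending on your HTTP/AWS node.

```js
// Flatten each user's AccessKeyMetadata list into one item per access key,
// so downstream filter and deactivation nodes can work key by key.
const out = [];

for (const item of $input.all()) {
  const keys = item.json.AccessKeyMetadata || []; // assumed input field
  for (const k of keys) {
    out.push({
      json: {
        UserName: k.UserName,
        AccessKeyId: k.AccessKeyId,
        Status: k.Status, // "Active" or "Inactive"
      },
    });
  }
}

return out;
```

The "Filter out inactive keys" step can then simply keep items where Status equals "Active" before the deactivation call.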