by Olivier
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This template generates structured synthetic company content using live web data from the Bedrijfsdata.nl API combined with an LLM. Provide a company domain (directly or via a Bedrijfsdata.nl ID) and the workflow retrieves relevant website and search engine content, then produces ready-to-use descriptions of the company, its offerings, and its target audience.

## ✨ Features

- Create high-quality Dutch-language company descriptions on demand
- Automatically pull live web content via Bedrijfsdata.nl RAG Domain & RAG Search
- Structured JSON output for consistent downstream use (e.g., CRM updates, lead qualification)
- Flexible trigger: run from a ProspectPro ID, a domain input, or another workflow
- Secure, modular, and extendable structure (error handling included)

## 🏢 Example Output

The workflow produces structured content fields you can use directly in your sales, marketing, or enrichment flows:

- **company_description** – 1–2 paragraph summary of the company
- **products_and_services** – detailed overview of offerings
- **target_audience** – specific characteristics of ideal customers (e.g., industry, location, company size, software usage)

Example:

```json
{
  "company_description": "Bedrijfsdata.nl B.V. is een Nederlands bedrijf dat uitgebreide data levert over meer dan 3,7 miljoen bedrijven in Nederland...",
  "products_and_services": "Het bedrijf biedt API-toegang tot bedrijfsprofielen, sectoranalyses, en SEO-gegevens...",
  "target_audience": "Nederlandse MKB's die behoefte hebben aan actuele bedrijfsinformatie voor marketing- of salesdoeleinden..."
}
```

## ⚙ Requirements

- n8n instance or cloud workspace
- The Bedrijfsdata.nl n8n Verified Community Node (installed)
- OpenAI API credentials (tested with gpt-4.1-mini and gpt-3.5-turbo)
- Bedrijfsdata.nl developer account (14-day free trial, 500 credits)

## 🔧 Setup Instructions

1. **Trigger configuration** – Use a Bedrijfsdata.nl ID (default) or provide a domain directly. The workflow can also be called from another workflow using "Execute Workflow".
2. **Configure API credentials** – Bedrijfsdata.nl API key and OpenAI API key.
3. **Customize output (optional)** – Adjust the prompt in the LLM node to create other types of synthetic content, and extend the structured output schema for your use case (a sample schema is sketched at the end of this description).
4. **Integrate with your stack** – An example node is included to update HubSpot descriptions; replace or extend it to match your CRM, database, or messaging tools.

## 🔐 Security Notes

- Input validation for the required domain
- Dedicated error branches for invalid input, API errors, LLM errors, and downstream integration errors
- RAG content checks before running the LLM

## 🧪 Testing

1. Run the workflow with a Bedrijfsdata.nl ID linked to a company with a known website.
2. Review the generated JSON output.
3. Verify content accuracy before production use.

## 📌 About Bedrijfsdata.nl

Bedrijfsdata.nl operates the most comprehensive company database in the Netherlands. With real-time data on 3.7M+ businesses and AI-ready APIs, we help Dutch SMEs enrich their CRM, workflows, and marketing automation. Built on 25+ years of experience in data collection and enrichment, our technology brings corporate-grade data quality to every organisation.

- Website: https://www.bedrijfsdata.nl
- Developers: https://developers.bedrijfsdata.nl
- API docs: https://docs.bedrijfsdata.nl

## 📞 Support

- Email: klantenservice@bedrijfsdata.nl
- Phone: +31 20 789 50 50
- Support hours: Monday–Friday, 09:00–17:00 CET
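If you extend the structured output schema (Setup step 3), a minimal JSON Schema covering the template's three default fields might look like the sketch below. This is an illustration of the shape, not the template's exact parser configuration:

```json
{
  "type": "object",
  "properties": {
    "company_description": { "type": "string" },
    "products_and_services": { "type": "string" },
    "target_audience": { "type": "string" }
  },
  "required": ["company_description", "products_and_services", "target_audience"]
}
```

Adding a new field is a matter of declaring it under `properties` and referencing it in the LLM prompt so the model knows to populate it.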
by Daniel Shashko
## How it Works

This workflow automates competitive price intelligence using Bright Data's enterprise web scraping API. On a scheduled basis (default: daily at 9 AM), the system loops through configured competitor product URLs, triggers Bright Data's web scraper to extract real-time pricing data from each site, and compares competitor prices against your current pricing.

The workflow handles the full scraping lifecycle: it sends scraping requests to Bright Data, waits for completion, fetches the scraped product data, and parses prices from various formats and website structures (a minimal parsing sketch is shown after the setup steps). All pricing data is automatically logged to Google Sheets for historical tracking and trend analysis. When a competitor's price drops below yours by more than the configured threshold (e.g., 10% cheaper), the system immediately sends detailed alerts via Slack and email to your pricing team with actionable intelligence.

At the end of each monitoring run, the workflow generates a daily summary report that aggregates all competitor data, calculates average price differences, identifies the lowest and highest competitors, and provides a complete view of the competitive landscape. This eliminates hours of manual competitor research and enables data-driven pricing decisions in real time.

## Who is this for?

- E-commerce businesses and online retailers needing automated competitive price monitoring
- Product managers and pricing strategists requiring real-time competitive intelligence
- Revenue operations teams managing dynamic pricing strategies across multiple products
- Marketplaces competing in price-sensitive categories where margins matter
- Any business that needs to track competitor pricing without manual daily checks

## Setup Steps

Setup time: approx. 30–40 minutes (Bright Data configuration, credential setup, competitor URL configuration)

Requirements:

- Bright Data account with Web Scraper API access
- Bright Data API token (from the dashboard)
- Google account with a spreadsheet for price tracking
- Slack workspace with pricing channels
- SMTP email provider for alerts

1. Sign up for Bright Data and create a web scraping dataset (use the e-commerce template for product data).
2. Obtain your Bright Data API token and dataset ID from the dashboard.
3. Configure these nodes:
   - **Schedule Daily Check**: Set the monitoring frequency using a cron expression (default: 9 AM daily).
   - **Load Competitor URLs**: Add the competitor product URLs array, configure your current price, and set the alert threshold percentage.
   - **Loop Through Competitors**: Automatically handles multiple URLs (no configuration needed).
   - **Scrape with Bright Data**: Add your Bright Data API token and dataset ID.
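As a rough illustration of the price-parsing and threshold-comparison step, here is a hypothetical n8n Code node sketch (Run Once for Each Item mode). The field names `price_raw`, `my_price`, and `alert_threshold_pct` are illustrative, not the template's actual keys, and the parsing handles US-style number formats only; extend it for European separators if needed:

```javascript
// Parse a scraped price string and compare it against your own price.
const item = $input.item.json;

// Strip currency symbols and thousands separators: "$1,299.99" -> 1299.99
const competitorPrice = parseFloat(
  String(item.price_raw).replace(/[^0-9.,]/g, '').replace(/,(?=\d{3})/g, '')
);

const myPrice = item.my_price;               // your configured price (assumed field)
const threshold = item.alert_threshold_pct;  // e.g. 10 (percent, assumed field)

// Percent the competitor undercuts you by (positive = they are cheaper)
const diffPct = ((myPrice - competitorPrice) / myPrice) * 100;

return {
  json: {
    ...item,
    competitorPrice,
    diffPct,
    alert: diffPct > threshold, // true -> route to the Slack/email alert branch
  },
};
```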
by Zakwan
## 📖 Overview

This template automates the process of researching a keyword, scraping top-ranking articles, cleaning their content, and generating a high-quality SEO-optimized blog post. It uses Google Search via RapidAPI, Ollama with the Mistral model, and Google Drive to deliver an end-to-end automated content workflow.

Ideal for content creators, SEO specialists, bloggers, and marketers who need to quickly gather and summarize insights from multiple sources to create superior content.

## ⚙️ Prerequisites

Before using this workflow, make sure you have:

- n8n installed (Desktop, Docker, or Cloud)
- Ollama installed with the mistral:7b model: `ollama pull mistral:7b`
- A RapidAPI account (for the Google Search API)
- A Google Drive account (with a target folder where articles will be saved)

## 🔑 Credentials Required

1. **RapidAPI (Google Search API)** – Header authentication with your API key. Example headers:
   - `x-rapidapi-key: YOUR_API_KEY`
   - `x-rapidapi-host: google-search74.p.rapidapi.com`
2. **Google Drive OAuth2** – Allow read/write permissions and update the `folderId` with the Drive folder where articles should be stored.
3. **Ollama API** – Base URL: `http://localhost:11434` (local n8n) or `http://host.docker.internal:11434` (inside Docker). Ensure the mistral:7b model is available.

## 🚀 Setup Instructions

1. **Configure RapidAPI** – Sign up at RapidAPI, subscribe to the Google Search API, and create an HTTP Header Auth credential in n8n with your API key.
2. **Configure Google Drive** – In n8n, add a Google Drive OAuth2 credential and select the Drive folder ID where output files should be saved.
3. **Configure Ollama** – Install Ollama locally, pull the required model (mistral:7b), and create an Ollama API credential in n8n.
4. **Run the Workflow** – Trigger it by sending a chat message with your target keyword. The workflow searches Google, extracts the top 3 results, scrapes the articles, cleans the content, and generates a structured blog post. The final output is stored in Google Drive as a .docx file.

## 🎨 Customization Options

- **Search Engine** → Swap out RapidAPI for Bing or SerpAPI.
- **Number of Articles** → Change `limit: 3` in the Google Search node.
- **Content Cleaning** → Modify the regex in the "Clean Body Text" node to change which HTML tags are captured (a minimal sketch is shown at the end of this description).
- **AI Model** → Replace mistral:7b with llama3, mixtral, or any other Ollama-supported model.
- **Storage** → Save output to a different Google Drive folder or export to Notion/Slack.

## 📌 Workflow Highlights

- **Google Search (RapidAPI)** → Fetch the top 3 results for your keyword.
- **HTTP Request + Code Nodes** → Extract and clean article body text.
- **Mistral AI via Ollama** → Summarize, optimize, and refine the content.
- **Google Drive** → Save the final blog-ready article automatically.
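For reference, here is a hypothetical sketch of what a "Clean Body Text" Code node could look like (Run Once for Each Item mode). The input field name `item.json.html` and the choice of `<p>` tags are illustrative assumptions; adjust both to match your scraped sources:

```javascript
// Extract readable paragraph text from scraped HTML using regexes.
const html = $input.item.json.html || '';

// Collect the inner text of every <p>...</p> block
const paragraphs = [...html.matchAll(/<p[^>]*>([\s\S]*?)<\/p>/gi)]
  .map(m => m[1]
    .replace(/<[^>]+>/g, ' ')   // strip nested tags
    .replace(/&nbsp;/g, ' ')    // resolve a common HTML entity
    .replace(/\s+/g, ' ')       // collapse whitespace
    .trim())
  .filter(p => p.length > 40);  // drop short nav/boilerplate fragments

return { json: { cleanText: paragraphs.join('\n\n') } };
```

Changing which tags are captured (e.g., `<article>` or heading tags) only requires editing the tag name in the first regex.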
by Cheng Siong Chin
## Introduction

Generates complete scientific papers from a title and abstract using AI. Designed for researchers, it automates literature search, content generation, and citation formatting.

## How It Works

Extracts the input, searches academic databases (CrossRef, Semantic Scholar, OpenAlex), merges sources, processes citations, generates the AI-written sections (Introduction, Literature Review, Methodology, Results, Discussion, Conclusion), and compiles the document.

## Workflow Template

Webhook → Extract Data → Search (CrossRef + Semantic Scholar + OpenAlex) → Merge Sources → Process References → Prepare Context → AI Generate (Introduction + Literature Review + Methodology + Results + Discussion + Conclusion via OpenAI) → Merge Sections → Compile Document

## Workflow Steps

1. **Input & Search**: The webhook receives the title/abstract; the workflow searches CrossRef, Semantic Scholar, and OpenAlex, then merges and processes the references (a sample CrossRef query is sketched at the end of this description).
2. **AI Generation**: OpenAI generates six sections with in-text citations using the retrieved references.
3. **Assembly**: Merges the sections and compiles a formatted document with a reference list.

## Setup Instructions

- **Trigger & APIs**: Configure the webhook URL, add your OpenAI API key, and customize the prompts.
- **Databases**: Set up CrossRef, Semantic Scholar, and OpenAlex API access; configure the search parameters.

## Prerequisites

OpenAI API, CrossRef API, Semantic Scholar API, OpenAlex API, webhook platform, n8n instance

## Customization

Adjust reference limits, modify prompts for specific research fields, add citation styles (APA/IEEE), integrate additional databases (PubMed, arXiv), and customize outputs (DOCX/LaTeX/PDF).

## Benefits

Automates paper drafting, integrates the literature comprehensively, and produces properly formatted citations.
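The database searches presumably run through HTTP Request nodes; as a standalone Node.js sketch (Node 18+ for global `fetch`), a CrossRef query of the kind described might look like the following. The sample title and `rows=5` limit are illustrative assumptions:

```javascript
// Query CrossRef for works related to a paper title and map the
// response into a compact reference list.
async function searchCrossref(title) {
  const url = `https://api.crossref.org/works?query=${encodeURIComponent(title)}&rows=5`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`CrossRef request failed: ${res.status}`);
  const data = await res.json();

  return (data.message.items ?? []).map(item => ({
    title: (item.title ?? [])[0],              // CrossRef titles are arrays
    doi: item.DOI,
    year: item.issued?.['date-parts']?.[0]?.[0],
    authors: (item.author ?? [])
      .map(a => [a.given, a.family].filter(Boolean).join(' '))
      .join(', '),
  }));
}

searchCrossref('Deep learning for time series forecasting')
  .then(refs => console.log(refs));
```

Semantic Scholar and OpenAlex expose similar REST search endpoints, so the merge step mostly normalizes the three response shapes into one reference format like the one above.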
by 飯盛 正幹
## Description

This workflow automates the process of finding new content ideas by scraping trending news and social media posts, analyzing them with AI, and delivering a summarized report to Slack. It is perfect for content marketers, social media managers, and strategists who spend hours researching trending topics manually.

## Who is this for

- **Content Marketers**: To discover trending topics for blogs or newsletters.
- **Social Media Managers**: To keep up with competitor activity or industry news.
- **Market Researchers**: To monitor specific keywords or brands.

## How it works

1. **Schedule**: The workflow runs automatically on a weekly schedule (default is Monday morning).
2. **Data Collection**: It uses Apify to scrape the latest news from Google Search and recent posts from specific Facebook pages.
3. **Data Processing**: The results are merged, and the top 5 most relevant items are selected to prevent information overload (a sketch of this Code node is shown at the end of this description).
4. **AI Analysis**: An AI Agent (powered by OpenRouter/LLM) analyzes each article to classify it into a theme (e.g., Marketing, Technology, Strategy) and extracts 3 catchy keywords.
5. **Notification**: The analyzed insights, including the theme, keywords, summary, and original URL, are formatted and sent directly to Slack.

## Requirements

- **Apify Account**: You need an API token and access to the Google Search Results Scraper and Facebook Posts Scraper actors.
- **OpenRouter API Key**: Used to power the AI analysis (can be swapped for OpenAI/Anthropic if preferred).
- **Slack Account**: To receive the notifications.

## How to set up

1. **Configure Credentials**: Open the Workflow Configuration node and paste your Apify API Token and OpenRouter API Key. Connect your Slack account in the Slack node.
2. **Adjust Apify Settings**: In the Apify Google news node, change the search query (currently set to "Top News" in Japanese) to your desired topic. In the Apify Facebook node, update the startUrls to the Facebook pages you want to monitor.
3. **Customize AI Prompt** (optional): Open the AI Agent node to adjust the language or the specific themes you want the AI to classify.

## How to customize

- **Change the LLM**: Replace the OpenRouter model with the OpenAI or Anthropic Chat Model node if you prefer those providers.
- **Increase Data Volume**: Adjust the "Limit 5 items" Code node to process more articles at once (mind your API usage limits).
- **Change Destination**: Replace the Slack node with Notion, Google Sheets, or Email to save the ideas elsewhere.
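A hypothetical sketch of the "Limit 5 items" Code node (Run Once for All Items mode), for reference when adjusting the data volume:

```javascript
// Keep only the first five merged items to avoid information overload.
// MAX_ITEMS mirrors the template default - raise it to process more
// articles per run (mind your API usage limits).
const MAX_ITEMS = 5;

// $input.all() returns every merged item from the Google/Facebook branches
return $input.all().slice(0, MAX_ITEMS);
```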
by KlickTipp
Community Node Disclaimer: This workflow uses KlickTipp community nodes.

## How It Works

This workflow connects an MCP Server with the KlickTipp contact management platform and integrates it with an LLM (e.g., Claude) to enable intelligent querying and segmentation of contact data. It covers all major KlickTipp API endpoints, providing a comprehensive toolkit for automated contact handling and campaign targeting.

## Key Features

- **MCP Server Trigger**: Initiates the workflow via the MCP server, listening for incoming requests related to contact queries or segmentation actions.
- **LLM Interaction Setup**: Interacts with an OpenAI or Claude model to handle natural-language queries such as contact lookups, tagging, and segmentation tasks.
- **KlickTipp Integration**: Complete set of KlickTipp API endpoints included:
  - Contact Management: Add, update, get, list, delete, and unsubscribe contacts.
  - Contact Tagging: Tag, untag, and list tagged contacts.
  - Tag Operations: Create, get, update, delete, and list tags.
  - Opt-In Processes: List and retrieve opt-in process details.
  - Data Fields: List and get custom data fields.
  - Redirects: Retrieve redirect URLs.

## Use Cases Supported

- Query contact information via email or name.
- Identify and segment contacts by city, region, or behavior.
- Create or update contacts from the provided data.
- Dynamically apply or remove tags to initiate campaigns.
- Automate targeted outreach based on contact attributes.

## Setup Instructions

1. **Install and Configure Nodes**: Set up the MCP Server, configure the LLM connection (e.g., the Claude Desktop configuration; a sample is sketched at the end of this description), and add and authenticate all KlickTipp nodes using valid API credentials.
2. **Define Tagging and Field Mapping**: Identify which fields and tags are relevant to your use cases, and ensure the necessary tags and custom fields are already created in KlickTipp.

## Workflow Logic

1. **Trigger via MCP Server**: A prompt or webhook call activates the server listener.
2. **Query Handling via LLM Agent**: The AI interprets the natural-language input and determines the action.
3. **Contact Search & Segmentation**: Searches contacts using identifiers (email, address) or criteria.
4. **Data Operations**: Retrieves, updates, or manages contact and tag data based on the interpreted command.
5. **Campaign Preparation**: Applies tags or sends campaign triggers depending on the query results.

## Benefits

- **AI-Powered Automation**: Reduces manual contact search and tagging efforts through intelligent processing.
- **Scalable Integration**: Built-in support for the full range of KlickTipp operations allows diverse use-case handling.
- **Data Consistency**: Ensures structured data flows between MCP, AI, and KlickTipp, minimizing errors.

## Testing and Deployment

Use defined prompts such as:

- "Tell me something about the contact with email address X"
- "Tag all contacts from region Y"
- "Send campaign Z to customers in area A"

Validate the expected actions in KlickTipp after prompt execution.

## Notes

- **Customization**: Adjust tag logic, AI prompts, and contact field mappings based on project needs.
- **Extensibility**: The template can be expanded with further logic, such as Google Sheets input or campaign feedback loops.

## Resources

- Use KlickTipp Community Node in n8n
- Automate Workflows: KlickTipp Integration in n8n
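For the Claude Desktop side of the setup, one common pattern for connecting to a remote MCP endpoint is a `claude_desktop_config.json` entry that proxies the connection through the `mcp-remote` package. This is a sketch under that assumption; the server name is arbitrary and the URL is a placeholder you replace with the production URL from your MCP Server Trigger node:

```json
{
  "mcpServers": {
    "n8n-klicktipp": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-host/mcp/your-path"]
    }
  }
}
```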
by A Z
## ⚡ Quick Setup

1. Import this workflow into your n8n instance.
2. Add your Apify, Google Sheets, and Firecrawl credentials.
3. Activate the workflow to start your automated lead enrichment system.
4. Copy the webhook URL from the MCP trigger node.
5. Connect AI agents using the MCP URL.

## 🔧 How it Works

This solution combines two workflows to deliver fully enriched, AI-ready business leads from Google Maps:

- **Apify Google Maps Scraper Node**: Collects business data and, if enabled, enriches each lead with contact details and social profiles.
- **Leads Missing Enrichment**: Any leads without contact or social info are automatically saved to a Google Sheet.
- **Firecrawl & Code Node Workflow**: A second workflow monitors the Google Sheet, crawls each business's website using Firecrawl, and extracts additional social media profiles or contact info using a Code node (a sketch of this extraction is shown at the end of this description).
- **Personalization Logic**: AI-powered nodes generate tailored outreach content for each enriched lead.
- **Native Integration**: The entire process is exposed as an MCP-compatible interface, returning enriched and personalized lead data directly to the AI agent.

## 📋 Available Operations

- **Business Search**: Find businesses on Google Maps by location, category, or keyword.
- **Lead Enrichment**: Automatically append contact details, social profiles, and other business info using Apify and Firecrawl.
- **Personalized Outreach Generation**: Create custom messages or emails for each lead.
- **Batch Processing**: Handle multiple leads in a single request.
- **Status & Error Reporting**: Get real-time feedback on processing, enrichment, and crawling.

## 🤖 AI Integration

AI agents automatically provide values for:

- Search queries (location, keywords, categories)
- Enrichment options (contact, social, etc.)
- Personalization variables (name, business type, etc.)

Response format: returns fully enriched lead data and personalized outreach content in a structured format.
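A hypothetical sketch of the social-profile extraction Code node (Run Once for Each Item mode). The input field `item.json.markdown` assumes Firecrawl returns page content as markdown, and the network list and email regex are illustrative:

```javascript
// Scan the crawled page content for social-profile links and an email.
const text = $input.item.json.markdown || '';

const social = {};
for (const network of ['facebook', 'instagram', 'linkedin', 'twitter', 'x']) {
  // e.g. https://www.linkedin.com/company/acme
  const match = text.match(
    new RegExp(`https?://(?:www\\.)?${network}\\.com/[\\w./-]+`, 'i')
  );
  if (match) social[network] = match[0];
}

// First email-like string on the page, if any
const email = (text.match(/[\w.+-]+@[\w-]+\.[\w.]+/) || [])[0];

return { json: { ...$input.item.json, ...social, email } };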
by Khairul Muhtadin
This automated TLDW (Too Long; Didn't Watch) generator uses Decodo's scraping API to extract complete video transcripts and metadata, then uses Google Gemini 3 to create intelligent summaries with key points, a chapter breakdown, tools mentioned, and actionable takeaways, eliminating hours of manual note-taking and video watching.

## Why Use This Workflow?

- **Time Savings**: Convert a 2-hour video into a readable 5-minute summary, reducing research time by 95%.
- **Comprehensive Coverage**: Captures key points, chapters, tools, quotes, and actionable steps that manual notes often miss.
- **Instant Accessibility**: Receive structured summaries directly in Telegram within 30–60 seconds of sharing a link.
- **Multi-Language Support**: Process transcripts in any language supported by YouTube's auto-caption system.

## Ideal For

- **Content Creators & Researchers**: Quickly extract insights from competitor videos, educational content, or industry talks without watching hours of footage.
- **Students & Educators**: Generate study notes from lecture recordings, online courses, or tutorial videos with chapter-based breakdowns.
- **Marketing Teams**: Analyze competitor content strategies, extract tools and techniques mentioned, and identify trending topics across multiple videos.
- **Busy Professionals**: Stay updated with conference talks, webinars, or industry updates by reading summaries instead of watching full recordings.

## How It Works

1. **Trigger**: The user sends any YouTube URL (youtube.com or youtu.be) to a configured Telegram bot.
2. **Data Collection**: The workflow extracts the video ID and simultaneously fetches the full transcript and metadata (title, channel, views, duration, chapters, tags) via the Decodo API.
3. **Processing**: The raw transcript data is extracted and cleaned, while the metadata is parsed into structured fields including formatted statistics and chapter timestamps.
4. **AI Processing**: Google Gemini Flash analyzes the transcript to generate a structured summary covering a one-line overview, key points, main topics by chapter, tools mentioned, target audience, practical takeaways, and notable quotes.

## Setup Guide

### Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Telegram Bot API | Essential | Receives video links and delivers summaries |
| Decodo Scraper API | Essential | Extracts YouTube transcripts and metadata |
| Google Gemini API | Essential | AI-powered summary generation |

### Installation Steps

1. Import the JSON file into your n8n instance.
2. Configure credentials:
   - **Telegram Bot API**: Create a bot via @BotFather on Telegram, obtain the API token, and configure it in n8n Telegram credentials.
   - **Decodo API**: Sign up at the Decodo Dashboard, get your API key, and create an HTTP Header Auth credential with header name "Authorization" and value "Basic [YOUR_API_KEY]".
   - **Google Gemini API**: Obtain an API key from Google AI Studio and configure it in n8n Google Palm API credentials.
3. Update environment-specific values:
   - In the "Alert Admin" node, replace YOUR_CHAT_ID with your personal Telegram user ID for error notifications.
   - Optionally adjust the languageCode in the "Set: Video ID & Config" node (default: "en").
4. Customize settings: Modify the AI prompt in the "Generate TLDR" node to adjust summary structure and depth.
5. Test execution: Send a YouTube link to your Telegram bot and verify that you receive the "Processing..." notification, the video info card, and the formatted summary chunks.

## Technical Details

### Workflow Logic

The workflow employs parallel processing for efficiency: the transcript and metadata are fetched simultaneously after the video ID is extracted (a sketch of the ID extraction is shown below).
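A hypothetical sketch of the video-ID extraction step as a Code node. The input path `message.text` assumes the standard Telegram Trigger payload, and the output field names are illustrative:

```javascript
// Pull the 11-character YouTube ID out of either URL form.
function extractVideoId(url) {
  // https://www.youtube.com/watch?v=<id>  |  https://youtu.be/<id>
  const match = url.match(
    /(?:youtube\.com\/watch\?(?:.*&)?v=|youtu\.be\/)([\w-]{11})/
  );
  return match ? match[1] : null;
}

const url = $input.item.json.message?.text ?? ''; // Telegram message text (assumed path)
const videoId = extractVideoId(url);

if (!videoId) {
  throw new Error('Not a YouTube URL'); // routes execution to the error branch
}

return { json: { videoId, languageCode: 'en' } };
```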
Once both API calls complete, the transcript feeds directly into Gemini while the metadata is parsed separately. The merge node combines the AI output with the structured metadata before splitting it into Telegram-friendly chunks. Error handling is isolated on a separate branch triggered by any node failure, formatting error details and alerting admins without disrupting the main flow.

### Customization Options

**Basic Adjustments:**

- **Language Selection**: Change languageCode from "en" to "id", "es", "fr", etc. to fetch transcripts in different languages (YouTube must have captions available).
- **Summary Style**: Edit the prompt in "Generate TLDR" to focus on specific aspects (e.g., "focus only on technical tools mentioned" or "create a summary for beginners").
- **Message Length**: Adjust maxCharsPerChunk (currently 4000) to create longer or shorter message splits based on preference.

**Advanced Enhancements:**

- **Database Storage**: Add a Postgres/Airtable node after "Merge: Data + Summary" to archive all summaries with timestamps and user IDs in a searchable knowledge base (medium complexity).
- **Multi-Model Comparison**: Duplicate the "Generate TLDR" chain, connect GPT-4 or Claude, and merge the results to show different AI perspectives on the same video (high complexity).
- **Auto-Translation**: Insert a translation node after summary generation to deliver summaries in the user's preferred language automatically (medium complexity).

### Troubleshooting

| Problem | Cause | Solution |
|---------|-------|----------|
| "Not a YouTube URL" error | URL format not recognized | Ensure the URL sent contains youtube.com or youtu.be |
| No transcript available | Video lacks captions or wrong language | Check that the video has auto-generated or manual captions; change languageCode to match the available options |
| Decodo API 401/403 error | Invalid or expired API key | Verify the API key in the HTTP Header Auth credential; regenerate it from the Decodo dashboard if needed |
| Error notifications not received | Wrong chat ID in Alert Admin node | Get your Telegram user ID from @userinfobot and update the node |

## Use Case Examples

**Scenario 1: Marketing Agency Competitive Analysis**

- **Challenge**: An agency needs to analyze 50+ competitor YouTube videos monthly to identify content strategies, tools used, and messaging angles; watching every video would require 80+ hours.
- **Solution**: Drop YouTube links into a shared Telegram group with the bot. Summaries are generated instantly, highlighting tools mentioned, key talking points, and target audience insights.
- **Result**: Research time reduced from 80 hours to 6 hours monthly (93% time savings), with a searchable archive of all competitor content strategies.

Created by: Khaisa Studio
Category: AI-Powered Automation
Tags: YouTube, AI, Telegram, Summarization, Content Analysis, Decodo, Gemini

Need custom workflows? Contact us

Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
by n8n Team
This workflow adds new HubSpot contacts to a Mailchimp email list.

**Prerequisites**

- HubSpot account and HubSpot credentials
- Mailchimp account and Mailchimp credentials

**How it works**

1. A Cron node triggers the workflow every day at 7:00.
2. The HubSpot node searches for newly created contacts.
3. The Mailchimp node creates a new contact in a given audience and adds a 'subscribed' status.
by n8n Team
This workflow creates a new contact in Mautic when a new customer is created in Shopify. By default, the workflow fills in the first name, last name, and email address; you can add any other fields you require.

**Prerequisites**

- Shopify account and Shopify credentials
- Mautic account and Mautic credentials

**How it works**

1. Triggers on a new customer in Shopify.
2. Sends the required data to Mautic to create a new contact.
by n8n Team
This workflow turns a light red when an update is made to a GitHub repository. By default, updates include pull requests, issues, and pushes, to name a few.

**Prerequisites**

- GitHub credentials
- Home Assistant credentials

**How it works**

1. Triggers on the "On any update in repository" node.
2. Uses Home Assistant to turn on a light and then set its color to red.
by n8n Team
This workflow provides a simple example of how to use `itemMatching(itemIndex: Number)` in the Code node to retrieve linked items from earlier in the workflow.
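A minimal sketch of the pattern, for a Code node in "Run Once for All Items" mode. The node name "Customer Datastore" and the `email` field are illustrative assumptions; `itemMatching(i)` itself is the documented Code node helper that returns the item of the referenced node that was used to produce the current item:

```javascript
const items = $input.all();

for (let i = 0; i < items.length; i++) {
  // Pull a field back from the linked item earlier in the workflow
  items[i].json.originalEmail = $('Customer Datastore').itemMatching(i).json.email;
}

return items;
```

This is useful after a node has reshaped or reduced the data: the linked-item lookup restores fields that were dropped along the way without re-querying the source.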