by Influencers Club
How it works: Enrich event attendees' emails from Eventbrite with multi-social (Instagram, TikTok, YouTube, Twitter, OnlyFans, Twitch, and more) profiles, analytics, and metrics using the influencers.club API, then send personalized partnership outreach via SendGrid to onboard attendees as organic creators or ambassadors.

Set up:
- Eventbrite (can be swapped for any event CRM, general CRM, or Google Sheet)
- Influencers.club API
- SendGrid (can be swapped for any marketing email sender, e.g. Mailchimp or Drip, or a programmatic email sender like Mailgun)
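The outreach step can be sketched as a payload builder for SendGrid's v3 `/mail/send` endpoint. The attendee field names (`instagram_username`, `followers`) and the template ID are illustrative assumptions, not the actual influencers.club response schema:

```javascript
// Hypothetical sketch: build a SendGrid v3 /mail/send payload from an
// enriched attendee record. Field names on `attendee` are illustrative.
function buildOutreachPayload(attendee, fromEmail, templateId) {
  return {
    personalizations: [{
      to: [{ email: attendee.email, name: attendee.name }],
      // Values injected into a SendGrid dynamic template
      dynamic_template_data: {
        first_name: attendee.name.split(" ")[0],
        instagram_handle: attendee.instagram_username,
        follower_count: attendee.followers,
      },
    }],
    from: { email: fromEmail },
    template_id: templateId,
  };
}

const payload = buildOutreachPayload(
  { email: "ana@example.com", name: "Ana Ruiz", instagram_username: "ana.creates", followers: 12400 },
  "partnerships@yourbrand.com",
  "d-1234567890abcdef"
);
```

In the workflow this object would be posted by an HTTP Request node (or the SendGrid node) for each enriched attendee.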
by Influencers Club
How it works: Discover and filter creators with multi-social (Instagram, TikTok, YouTube, Twitter, OnlyFans, Twitch, and more) profiles, analytics, and metrics using the influencers.club API, find similar (lookalike) creators, and create or update records in HubSpot for influencer marketing platforms or creator outreach.

Set up:
- HubSpot (can be swapped for any CRM, like Salesforce, or a Google Sheet)
- Influencers.club API
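The create/update step could map an enriched creator profile onto a HubSpot contact body roughly as follows; `instagram_handle` and `total_followers` are hypothetical custom properties you would define in HubSpot, while `email`, `firstname`, and `lastname` are standard ones:

```javascript
// Hypothetical mapping from a discovered creator profile to a HubSpot
// contact upsert body. The input field names are illustrative.
function toHubspotContact(creator) {
  const [firstname, ...rest] = (creator.full_name || "").split(" ");
  return {
    properties: {
      email: creator.email,
      firstname,
      lastname: rest.join(" "),
      instagram_handle: creator.instagram_username, // custom property
      total_followers: String(creator.followers),   // custom property
    },
  };
}

const contact = toHubspotContact({
  email: "leo@example.com",
  full_name: "Leo Park",
  instagram_username: "leo.films",
  followers: 58000,
});
```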
by Shashwat Singh
This workflow automatically detects duplicate files uploaded to a specific Google Drive folder by generating an MD5 hash of each file and comparing it against a Supabase database. If a duplicate is found, the file is moved to a dedicated Duplicates folder and a Slack notification is sent. All events, including unique uploads, duplicates, race conditions, and errors, are logged for audit purposes. It is designed for teams that handle high file volumes and need reliable, content-based deduplication instead of simple filename checks.

How it works
- Monitors a specific Google Drive folder for new files.
- Normalizes file metadata and downloads the binary content.
- Generates an MD5 hash from the file binary.
- Checks Supabase to see if the hash already exists.
- If duplicate, moves the file to a Duplicates folder and sends a Slack alert.
- If unique, stores the hash in Supabase.
- Logs every outcome, including errors and race conditions, in an audit table.

Setup steps
1. Connect your Google Drive account and select the folder to monitor.
2. Connect your Supabase account and create the required tables: file_hashes and dedup_audit_log.
3. Connect your Slack account and select a channel for duplicate alerts.
4. Update the Duplicates folder ID in the Google Drive Move node.

Setup typically takes 10 to 15 minutes if your Supabase project is ready.
by n8n Team
This workflow creates or updates a Mautic contact when a new event is scheduled in Calendly. The first name and email address are the only two fields that get updated.

Prerequisites
- Calendly account and Calendly credentials.
- Mautic account and Mautic credentials.

How it works
- Triggers on a new event in Calendly.
- Creates a new contact in Mautic if the email address does not exist in Mautic.
- Updates the contact's first name in Mautic if the email address exists in Mautic.
by vinci-king-01
How it works
This workflow automatically scrapes commercial real estate listings from LoopNet and sends opportunity alerts to Telegram while logging data to Google Sheets.

Key Steps
- Scheduled Trigger: runs every 24 hours to collect fresh CRE market data
- AI-Powered Scraping: uses ScrapeGraphAI to extract property information from LoopNet
- Market Analysis: analyzes listings for opportunities and generates market insights
- Smart Notifications: sends Telegram alerts only when investment opportunities are found
- Data Logging: stores daily market metrics in Google Sheets for trend analysis

Set up steps (setup time: 10-15 minutes)
1. Configure ScrapeGraphAI credentials: add your ScrapeGraphAI API key for web scraping
2. Set up the Telegram connection: connect your Telegram bot and specify the target channel
3. Configure Google Sheets: set up the Google Sheets integration for data logging
4. Customize the LoopNet URL: update the URL to target specific CRE markets or property types
5. Adjust the schedule: modify the trigger timing based on your market monitoring needs

Keep detailed configuration notes in sticky notes inside your workflow.
by n8n Team
This workflow creates a new contact in Mautic when a new customer is created in Shopify. By default, the workflow fills the first name, last name, and email address; you can add any other fields you require.

Prerequisites
- Shopify account and Shopify credentials.
- Mautic account and Mautic credentials.

How it works
- Triggers on a new customer in Shopify.
- Sends the required data to Mautic to create a new contact.
by Cheng Siong Chin
Introduction
Generates complete scientific papers from a title and abstract using AI. Designed for researchers, automating literature search, content generation, and citation formatting.

How It Works
Extracts the input, searches academic databases (CrossRef, Semantic Scholar, OpenAlex), merges sources, processes citations, generates AI-written sections (Introduction, Literature Review, Methodology, Results, Discussion, Conclusion), and compiles the document.

Workflow Template
Webhook → Extract Data → Search (CrossRef + Semantic Scholar + OpenAlex) → Merge Sources → Process References → Prepare Context → AI Generate (Introduction + Literature Review + Methodology + Results + Discussion + Conclusion via OpenAI) → Merge Sections → Compile Document

Workflow Steps
- Input & Search: Webhook receives the title/abstract; searches CrossRef, Semantic Scholar, and OpenAlex; merges and processes references
- AI Generation: OpenAI generates six sections with in-text citations using the retrieved references
- Assembly: merges sections; compiles a formatted document with a reference list

Setup Instructions
- Trigger & APIs: configure the webhook URL; add your OpenAI API key; customize prompts
- Databases: set up CrossRef, Semantic Scholar, and OpenAlex API access; configure search parameters

Prerequisites
OpenAI API, CrossRef API, Semantic Scholar API, OpenAlex API, webhook platform, n8n instance

Customization
Adjust reference limits, modify prompts for specific research fields, add citation styles (APA/IEEE), integrate more databases (PubMed, arXiv), customize outputs (DOCX/LaTeX/PDF)

Benefits
Automates paper drafting, integrates the literature comprehensively, and formats citations properly
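The Merge Sources step might dedupe results by DOI roughly as follows; the record shape is simplified, since each of the three APIs returns a different schema you would normalize first:

```javascript
// Sketch of merging CrossRef, Semantic Scholar, and OpenAlex results.
// Records without a DOI fall back to the title as the dedup key.
function mergeReferences(...sources) {
  const seen = new Set();
  const merged = [];
  for (const refs of sources) {
    for (const ref of refs) {
      const key = (ref.doi || ref.title).toLowerCase();
      if (!seen.has(key)) {
        seen.add(key);
        merged.push(ref);
      }
    }
  }
  return merged;
}

const refs = mergeReferences(
  [{ doi: "10.1000/a", title: "Paper A" }],
  [{ doi: "10.1000/A", title: "Paper A" }, { doi: "10.1000/b", title: "Paper B" }]
);
// refs.length === 2: the duplicate DOI differs only in case
```

Keeping the first occurrence preserves whichever source you query first as the canonical record for each paper.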
by Ian Kerins
Overview
This n8n template automates the process of researching niche topics. It searches for a topic on Wikipedia, scrapes the relevant page using ScrapeOps, extracts the history or background section, and uses AI to generate a concise summary and timeline. The results are automatically saved to Google Sheets for easy content planning.

Who is this for?
- **Content Creators**: quickly gather background info for videos or articles.
- **Marketers**: research niche markets and product histories.
- **Educators/Students**: generate timelines and summaries for study topics.
- **Researchers**: automate the initial data-gathering phase.

What problems it solves
- **Time Consumption**: manually reading and summarizing Wikipedia pages takes time.
- **Blocking**: scraping Wikipedia directly can sometimes lead to IP blocks; ScrapeOps handles this.
- **Unstructured Data**: raw HTML is hard to use; this workflow converts it into a clean, structured format (JSON/CSV).

How it works
1. Define Topic: you set a keyword in the workflow.
2. Locate Page: the workflow queries the Wikipedia API to find the correct page URL.
3. Smart Scraping: it uses the ScrapeOps Proxy API to fetch the page content reliably.
4. Extraction: a Code node parses the HTML to find "History", "Origins", or "Background" sections.
5. AI Processing: GPT-4o-mini summarizes the text and extracts key dates for a timeline.
6. Storage: the structured data is appended to a Google Sheet.

Setup steps (~5-10 minutes)
1. ScrapeOps Account: register for a free API key at ScrapeOps, then configure the ScrapeOps Scraper node with your key.
2. OpenAI Account: add your OpenAI credentials to the Message a model node.
3. Google Sheets: create a Google Sheet (you can duplicate the Template Sheet and copy the headers), then connect your Google account to the Append row in sheet node and select your new sheet.

Pre-conditions
- An active ScrapeOps account.
- An OpenAI API key (or another LLM credential).
- A Google account for Sheets access.
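The extraction step (4) might look roughly like this inside the Code node. Real Wikipedia markup wraps heading text in extra spans, so treat this regex as a sketch under that simplifying assumption rather than production parsing:

```javascript
// Simplified section extractor: find the first "History"/"Origins"/
// "Background" <h2> heading and capture the text up to the next <h2>.
function extractSection(html) {
  const match = html.match(
    /<h2[^>]*>(?:History|Origins|Background)[\s\S]*?<\/h2>([\s\S]*?)(?=<h2|$)/i
  );
  if (!match) return null;
  // Strip remaining tags and collapse whitespace into readable text.
  return match[1].replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim();
}

const text = extractSection(
  "<h2>History</h2><p>Founded in 1921.</p><h2>Design</h2><p>...</p>"
);
// text === "Founded in 1921."
```

The cleaned plain text is what gets handed to GPT-4o-mini in step 5.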
Disclaimer This template uses ScrapeOps as a community node. You are responsible for complying with Wikipedia's Terms of Use, robots directives, and applicable laws in your jurisdiction. Scraping targets may change at any time; adjust render/scroll/wait settings and parsers as needed. Use responsibly for legitimate business purposes.
by Kevin Meneses
What this workflow does
This template extracts high-intent SEO keywords from any web page and turns them into a ranked keyword list you can use for content planning, landing pages, and SEO strategy. It runs in three phases:
1. Scrape the target URL with Decodo (Web Scraper for n8n)
2. Use AI to extract seed keywords and understand the page topic
3. Enrich each seed keyword with real Google SERP data via SerpApi (related searches, questions, and competitors), then apply a JavaScript scoring system to rank the best opportunities

The final output is saved to Google Sheets as a clean table of ranked keywords.

Who this workflow is for
- SEO consultants and agencies
- SaaS marketers and growth teams
- Founders validating positioning and messaging
- Content teams looking for "what people actually search for"

This workflow is especially useful when you want keywords with commercial/solution intent, not generic single-word terms.

Workflow overview

Phase 1: Scrape & clean page content
- Reads the URL from Google Sheets
- Scrapes the page via Decodo
- Cleans HTML into plain text (token-friendly)

Phase 2: AI keyword extraction
AI returns structured JSON with:
- brand/topic
- 5-10 mid-tail seed keywords
- intent + audience hints

Phase 3: SERP enrichment + scoring
Uses SerpApi to fetch:
- related searches
- People Also Ask questions
- competitor domains

Scores and ranks keywords based on:
- source type (related searches / PAA / organic)
- frequency across seeds
- modifiers (pricing, best, free, docs, etc.)
- mid-tail length preference

Setup (step by step)
1) Google Sheets (input): create a sheet with a column named urls, one URL per row.
2) Google Sheets (output): create an output sheet with columns like keyword, score, intent_hint, source_type. Tip: clear the output sheet before each run if you want a clean export.
3) Decodo: add your Decodo credentials; the URL is taken automatically from Google Sheets.
4) SerpApi: add your SerpApi key in the SerpApi node.
5) AI Model: connect your preferred AI model (Gemini/OpenAI). The prompt is optimized to output valid JSON only.

Self-hosted disclaimer
This is a community template. You must configure your own credentials (Google Sheets, Decodo, SerpApi, AI). Results depend on page accessibility and page content quality.
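A scoring function matching the Phase 3 criteria above might look like this; the weights and modifier list are illustrative, not the template's exact values:

```javascript
// Sketch of the keyword-scoring logic: reward SERP source quality,
// frequency across seeds, commercial-intent modifiers, and mid-tail length.
const SOURCE_WEIGHT = { related: 3, paa: 2, organic: 1 };
const MODIFIERS = ["pricing", "best", "free", "docs", "vs", "alternative"];

function scoreKeyword(keyword, sourceType, frequency) {
  let score = (SOURCE_WEIGHT[sourceType] || 0) + frequency * 2;
  const words = keyword.toLowerCase().split(/\s+/);
  if (MODIFIERS.some((m) => words.includes(m))) score += 3; // commercial intent
  if (words.length >= 2 && words.length <= 4) score += 2;   // mid-tail preference
  return score;
}

scoreKeyword("best crm for startups", "related", 2);
// 3 (related) + 4 (frequency) + 3 ("best") + 2 (4 words) = 12
```

Sorting the enriched keywords by this score gives the ranked table that lands in the output sheet.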
by Cheng Siong Chin
How It Works
The workflow starts with a scheduled trigger that activates at set intervals. Behavioral data from multiple sources is parsed and sent to the MCDN routing engine, which intelligently assigns leads to the right teams based on predefined rules. AI-powered scoring evaluates each prospect's potential, ensuring high-quality leads are prioritized. The results are synced to the CRM, and updates are reflected on an analytics dashboard for real-time visibility.

Setup Steps
- Trigger: define the schedule frequency.
- Data Fetch: configure APIs for all behavioral data sources.
- MCDN Router: set routing rules, thresholds, and team assignments.
- AI Models: connect OpenAI/NVIDIA APIs and configure scoring prompts.
- CRM Integration: enter credentials for Salesforce, HubSpot, or another CRM.
- Dashboard: link to analytics tools like Tableau or Google Sheets for reporting.

Prerequisites
API credentials for NVIDIA AI, OpenAI, and your CRM platform; access to your data sources and spreadsheet/analytics tools

Use Cases
Lead prioritization for sales teams; customer segmentation; automated routing

Customization
Adjust routing rules, add custom scoring models, modify team assignments, expand data sources, integrate additional AI providers

Benefits
Reduces manual lead routing by up to 90%; improves scoring accuracy; accelerates the sales cycle; enables data-driven team assignments
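The routing rules could be expressed as a small decision function; the thresholds, team names, and lead fields here are assumptions for illustration, since the actual MCDN rules are configured per team:

```javascript
// Illustrative routing rules: score gates first, then a region split
// for mid-tier leads. Real rules would live in the MCDN Router node.
function routeLead(lead) {
  if (lead.score >= 80) return "enterprise-sales";
  if (lead.score >= 50) return lead.region === "EMEA" ? "emea-sdr" : "amer-sdr";
  return "nurture";
}

routeLead({ score: 85, region: "AMER" }); // "enterprise-sales"
routeLead({ score: 60, region: "EMEA" }); // "emea-sdr"
routeLead({ score: 30, region: "AMER" }); // "nurture"
```

The returned team name would then drive the CRM assignment and dashboard updates downstream.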
by vinci-king-01
Certification Requirement Tracker with Rocket.Chat and GitLab

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes certification-issuing bodies once a year, detects any changes in certification or renewal requirements, creates a GitLab issue for the responsible team, and notifies the relevant channel in Rocket.Chat. It helps professionals and compliance teams stay ahead of changing industry requirements and never miss a renewal.

Pre-conditions/Requirements

Prerequisites
- An n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed and activated
- Rocket.Chat workspace with an Incoming Webhook or user credentials
- GitLab account with at least one repository and a Personal Access Token (PAT)
- URLs for all certification bodies or industry associations you want to monitor

Required Credentials
- **ScrapeGraphAI API Key**: enables web scraping services
- **Rocket.Chat Credentials**: either a Webhook URL, or a username & password / Personal Access Token
- **GitLab Personal Access Token**: to create issues and comments via the API

Specific Setup Requirements

| Service | Requirement | Example/Notes |
| ------------- | ---------------------------------------------- | ---------------------------------------------------- |
| Rocket.Chat | Incoming Webhook URL OR user credentials | https://chat.example.com/hooks/abc123… |
| GitLab | Personal Access Token with api scope | Generate at Settings → Access Tokens |
| ScrapeGraphAI | Domain whitelist (if running behind a firewall) | Allow outbound HTTPS traffic to target sites |
| Cron Schedule | Annual (default) or custom interval | 0 0 1 1 * for 1 January every year |

How it works

Key Steps:
- **Scheduled Trigger**: fires annually (or at any chosen interval) to start the check.
- **Set Node (URL List)**: stores an array of certification-body URLs to scrape.
- **Split in Batches**: iterates over each URL for scraping.
- **ScrapeGraphAI**: extracts requirement text, effective dates, and renewal info.
- **Code Node (Diff Checker)**: compares the newly scraped data with last year's GitLab issue (if any) to detect changes.
- **IF Node (Requirements Changed?)**: routes the flow based on change detection.
- **GitLab (Create/Update Issue)**: opens a new issue or comments on an existing one with details of the change.
- **Rocket.Chat (Notify Channel)**: sends a message summarizing any changes and linking to the GitLab issue.
- **Merge Node**: collects all branch results for a final summary report.

Set up steps (setup time: 15-25 minutes)
1. Install Community Node: in n8n, navigate to Settings → Community Nodes and install "ScrapeGraphAI".
2. Add Credentials: (a) in Credentials, create "ScrapeGraphAI API"; (b) add your Rocket.Chat Webhook or PAT; (c) add your GitLab PAT with api scope.
3. Import Workflow: copy the JSON template into n8n (Workflows → Import).
4. Configure URL List: open the Set – URL List node and replace the sample array with real certification URLs.
5. Adjust Cron Expression: double-click the Schedule Trigger node and set your desired frequency.
6. Customize Rocket.Chat Channel: in the Rocket.Chat – Notify node, set the channel or use an incoming webhook.
7. Run Once for Testing: execute the workflow manually to ensure issues and notifications are created as expected.
8. Activate Workflow: toggle Activate so the schedule starts running automatically.
Node Descriptions

Core Workflow Nodes:
- **stickyNote (Workflow Notes)**: contains a high-level diagram and documentation inside the editor.
- **Schedule Trigger**: initiates the yearly check.
- **Set (URL List)**: holds certification-body URLs and meta info.
- **SplitInBatches**: iterates through each URL in manageable chunks.
- **ScrapeGraphAI**: scrapes each certification page and returns structured JSON.
- **Code (Diff Checker)**: compares the current scrape with historical data.
- **If (Requirements Changed?)**: switches path based on the diff result.
- **GitLab**: creates or updates issues, attaches the JSON diff, sets labels (certification, renewal).
- **Rocket.Chat**: posts a summary message with links to the GitLab issue(s).
- **Merge**: consolidates batch results for final logging.
- **Set (Success)**: formats a concise success payload.

Data Flow:
Schedule Trigger → Set (URL List) → SplitInBatches → ScrapeGraphAI → Code (Diff Checker) → If → GitLab / Rocket.Chat → Merge

Customization Examples

Add additional metadata to the GitLab issue:

```javascript
// Inside the GitLab "Create Issue" node
{
  "title": `Certification Update: ${$json.domain}`,
  "description": `What's Changed?\n${$json.diff}\n\n_Last checked: ${$now}_`,
  "labels": "certification,compliance," + $json.industry
}
```

Customize Rocket.Chat message formatting:

```javascript
// Rocket.Chat node → JSON parameters
{
  "text": `:bell: Certification Update Detected\n>${$json.domain}\n>See the GitLab issue: ${$json.issueUrl}`
}
```

Data Output Format
The workflow outputs structured JSON data:

```json
{
  "domain": "example-cert-body.org",
  "scrapeDate": "2024-01-01T00:00:00Z",
  "oldRequirements": "Original text …",
  "newRequirements": "Updated text …",
  "diff": "- Continuous education hours increased from 20 to 24\n- Fee changed to $200",
  "issueUrl": "https://gitlab.com/org/compliance/-/issues/42",
  "notification": "sent"
}
```

Troubleshooting Common Issues
- **No data returned from ScrapeGraphAI**: confirm the target site is publicly accessible and not blocking bots; whitelist the domain or add proper headers via ScrapeGraphAI options.
- **GitLab issue not created**: check that the PAT has api scope and the project ID is correct in the GitLab node.
- **Rocket.Chat message fails**: verify the webhook URL or credentials and ensure the channel exists.

Performance Tips
- Limit the batch size in SplitInBatches to avoid API rate limits.
- Schedule the workflow during off-peak hours to minimize load.

Pro Tips:
- Store last year's scrapes in a dedicated GitLab repository to create a complete change-log history.
- Use n8n's built-in execution history pruning to keep the database slim.
- Add an Error Trigger workflow to notify you if any step fails.
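The Diff Checker Code node can be approximated with a line-level set comparison; a real diff library (e.g. jsdiff) would give richer output, so treat this as a minimal sketch:

```javascript
// Compare last year's requirement text with the fresh scrape, line by line.
function diffRequirements(oldText, newText) {
  const oldLines = new Set(oldText.split("\n"));
  const newLines = new Set(newText.split("\n"));
  const added = [...newLines].filter((l) => !oldLines.has(l));
  const removed = [...oldLines].filter((l) => !newLines.has(l));
  return { changed: added.length > 0 || removed.length > 0, added, removed };
}

const result = diffRequirements(
  "20 CE hours\nFee: $150",
  "24 CE hours\nFee: $150"
);
// result.changed === true; result.added === ["24 CE hours"]
```

The `changed` flag is what the IF node would route on; `added`/`removed` feed the GitLab issue body.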
by Afigo Sam
🚀 Overview
This workflow automatically discovers trending topics, generates engaging social media content using AI, and publishes posts to X (Twitter) and Facebook via Buffer. It combines trend monitoring, AI content generation, and automated publishing into one seamless pipeline, helping creators and marketers stay consistent and relevant without manual effort.

🎯 Who is this for
- Content creators & social media managers
- Agencies managing multiple accounts
- Developers building AI-powered automation systems
- Anyone who wants to post consistently on trending topics

⚙️ What this workflow does
- Fetches trending topics (e.g., from X or external sources)
- Filters and formats relevant trends
- Uses Google Gemini to generate X (Twitter) tweets, Facebook posts, and Threads posts
- Formats content for each platform
- Publishes posts automatically via the Buffer API

✨ Key Features
- Fully automated content pipeline
- Multi-platform posting (X + Facebook + Threads)
- AI-generated threads + captions
- Easily customizable prompts and filters
- Scalable for multiple niches or accounts

🔧 Setup Instructions

Credentials required
- Google Gemini API key
- Apify API key
- Buffer API access token

Configuration steps
1. Add your API credentials to the respective nodes
2. Customize the prompt templates for your niche
3. Define your preferred trend source or keywords
4. Set the posting frequency (via a Cron or trigger node)

Optional customization
- Add a moderation/approval step before posting
- Filter trends by region, hashtags, or keywords
- Adjust the tone (professional, funny, viral, etc.) and language

🧠 How it works (high-level)
The workflow monitors trending data, processes it into structured input, and feeds it into AI models to generate platform-specific content. The output is then formatted and sent to Buffer for scheduled or immediate publishing.
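The per-platform formatting step can be sketched as a simple length guard; the 280-character limit is X's standard text limit, while link and hashtag handling are omitted for brevity:

```javascript
// Trim AI output to fit each platform before handing it to Buffer.
function formatForPlatform(text, platform) {
  if (platform === "x" && text.length > 280) {
    // Truncate to 279 characters and append an ellipsis (1 char) = 280.
    return text.slice(0, 279) + "…";
  }
  return text; // Facebook and Threads allow much longer posts
}

const tweet = formatForPlatform("a".repeat(300), "x");
// tweet.length === 280
```

In practice you might also ask the model to regenerate instead of truncating, so posts never end mid-sentence.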
⚠️ Notes & Limitations
- AI-generated content may require review for accuracy
- Rate limits may apply depending on the APIs used
- Ensure Buffer supports your connected accounts
- Trending-data source reliability may vary