by Friedemann Schuetz
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

Welcome to my Wikipedia Podcast Telegram Bot Workflow! This workflow creates an intelligent Telegram bot that transforms Wikipedia articles into engaging 5-minute podcast episodes using natural language queries and voice messages.

What this workflow does

This workflow processes incoming Telegram messages (text or voice, e.g. "Berlin") and generates professional podcast content about any Wikipedia topic (e.g. "Berlin", "Shakespeare", etc.). The AI agent researches the requested subject, creates a structured podcast script, and delivers it as high-quality audio directly through Telegram.

Key Features:
- Voice message support (speech-to-text and text-to-speech)
- Wikipedia research integration for accurate content
- Professional podcast structure (intro, main content, outro)
- Natural-sounding AI voice synthesis
- Conversational and educational tone optimized for audio consumption

This workflow has the following sequence:
1. Telegram Trigger – Receives incoming messages (text or voice) from users via the Telegram bot
2. Text or Voice Switch – Routes the message based on input type (text message vs. voice message; see the routing sketch at the end of this section)
3. Voice Message Processing (if voice input):
   - Retrieval of the voice file from Telegram
   - Transcription of the voice message to text using OpenAI Whisper
4. Text Message Preparation (if text input) – Prepares the text message for the AI agent
5. Wikipedia Podcast Agent – Core AI agent that:
   - Researches the requested topic using the Wikipedia tool
   - Creates a professional 5-minute podcast script (600–750 words)
   - Follows a structured format: intro, main content, outro
   - Uses a conversational, accessible, and enthusiastic tone
6. ElevenLabs Text to Speech – Converts the podcast script into natural-sounding audio using AI voice synthesis
7. Send Voice Response – Delivers the generated podcast audio back to the user via Telegram

Requirements:
- **Telegram Bot API**: Documentation
  - Create a bot via @BotFather on Telegram
  - Get the bot token and configure the webhook
- **Anthropic API** (Claude 4 Sonnet): Documentation
  - Used for AI agent processing and podcast script generation
  - Provides Wikipedia research capabilities
- **OpenAI API**: Documentation
  - Used for speech transcription (Whisper model)
- **ElevenLabs API**: Documentation
  - Used for high-quality text-to-speech generation
  - Provides natural-sounding voice synthesis

Important: The workflow uses the Wikipedia tool integrated with Claude 4 Sonnet to ensure accurate and comprehensive research. The AI agent is specifically prompted to create engaging, educational podcast content suitable for audio consumption.

Configuration Notes:
- Update the Telegram chat ID in the trigger for your specific bot
- Modify the voice selection in ElevenLabs for different narrator styles
- The system prompt can be customized for different podcast formats or target audiences
- Supports individual users and can be extended for group chats

Feel free to contact me via LinkedIn if you have any questions!
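As an illustration of the routing step above, here is a minimal Code-node sketch of the text-vs-voice decision. It assumes the standard Telegram Bot API update shape (`message.text` vs. `message.voice.file_id`); the actual workflow uses a Switch node, so treat this as an outline of the branching logic, not the node itself.

```javascript
// Sketch: branch on whether the Telegram update carries text or voice.
const message = $json.message ?? {};

if (message.voice) {
  // Voice branch: pass the file_id to Telegram's get-file step, whose
  // download is then transcribed to text by OpenAI Whisper.
  return [{ json: { route: 'voice', fileId: message.voice.file_id } }];
}

// Text branch: forward the raw query straight to the podcast agent.
return [{ json: { route: 'text', query: message.text ?? '' } }];
```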
by Daniel Ng
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

Restore n8n Credentials from Google Drive Backup

This template enables you to restore your n8n credentials from a backup file in Google Drive. It's an essential companion to a credential backup workflow, ensuring you can recover your setup in case of data loss, instance migration, or disaster recovery. The workflow intelligently checks for existing credentials to prevent accidental overwrites of credentials with the same name that are already present. This workflow is manually triggered.

We recommend you use this restore workflow in conjunction with a backup solution like our "Auto Backup Credentials to Google Drive" template. For more powerful n8n templates, visit our website or contact us at AI Automation Pro. We help your business build custom AI workflow automation and apps.

Who is this for?

This workflow is for n8n administrators and users who have backed up their n8n credentials to Google Drive (e.g., using a companion backup template) and need to restore them to the same or a different n8n instance. It's crucial for those managing self-hosted instances.

What problem is this workflow solving? / use case

If an n8n instance becomes corrupted, needs to be migrated, or if credentials are accidentally deleted, manually re-creating all credentials can be extremely time-consuming and error-prone. This workflow automates the restoration process from a known backup, saving significant time and ensuring accuracy. It's particularly useful for:
- Disaster recovery
- Migrating n8n instances
- Quickly setting up a new n8n instance with existing credentials

What this workflow does

The workflow is manually triggered and performs the following operations:
1. Fetch Current Credentials: An "On Click Trigger" starts the process. It executes the command npx n8n export:credentials --all --decrypted via the "Execute Command Get All Credentials" node to get a list of all credentials currently in your n8n instance. This list is then processed by the "JSON Formatting Data" and "Aggregate Credentials" nodes to extract just the names of existing credentials for comparison (see the sketch after this section).
2. Download Backup File from Google Drive: The "Google Drive Get Credentials File" node searches your Google Drive for the n8n_backup_credentials.json file. The "Google Drive Download File" node then downloads the found file.
3. Process Backup Data: The "Convert Files To JSON" node (an Extract From File node) converts the downloaded file content, expected to be JSON, into a usable JSON object. "Split Out" nodes then process this data to handle individual credential entries from the backup file.
4. Loop and Restore Credentials: The "Loop Over Items" node (a SplitInBatches node) iterates through each credential from the backup file.
   - Duplicate Check: For each credential, an "IF" node ("Check For Skipped Credentials") checks two conditions using an OR combinator: whether the credential name from the backup ($('Loop Over Items').item.json.name) is empty, or whether a credential with the same name already exists in the current n8n instance (by checking against the list from the "Aggregate Credentials" node).
   - Conditional Restore: If the credential name is NOT empty AND it does NOT already exist (i.e., both IF conditions are false), the workflow proceeds to the "Restore N8n Credentials" node (an n8n API node). This node uses the name, type, and data of each new credential from the backup file to create it in the n8n instance.
   - Credentials with empty names or those already present are skipped: they take the true path of the IF node, which loops back.
   - A "Wait" node introduces a 1-second delay after each restoration attempt to prevent API rate limiting before looping to the next item.

Step-by-step setup
1. n8n Instance Environment (for the current-credentials check): The n8n instance must have access to npx and n8n-cli for the "Execute Command Get All Credentials" node to function.
2. Google Drive Credentials: Configure the "Google Drive Get Credentials File" and "Google Drive Download File" nodes with your Google OAuth2 credentials.
3. n8n API Credentials: Configure the "Restore N8n Credentials" node with your n8n API credentials. This API key needs permissions to manage credentials.
4. Backup File Name: The workflow is configured to search for a file named n8n_backup_credentials.json in the "Google Drive Get Credentials File" node. If your backup file has a different name or you want to specify a path, update the "Query String" parameter in this node.

How to customize this workflow to your needs
- **Backup File Location/Query:** Modify the "Google Drive Get Credentials File" node parameters if your backup file is in a specific folder, has a different naming convention, or if you want more specific query logic.
- **Overwrite Logic:** The current workflow skips existing credentials by name. If you need to update/overwrite existing credentials, modify the logic in the "Check For Skipped Credentials" (IF) node and potentially use an "update" operation in the n8n API node if available for credentials (note: updates often require the credential ID, which might not be in the backup file).
- **Notifications:** Add notification steps (e.g., Email, Slack) to report on the success or failure of the restoration process, and to list which credentials were restored or skipped.
- **Selective Restore:** To restore only specific credentials, add a filter step after "Split Out1" or modify the IF condition in "Check For Skipped Credentials" to check for particular credential names or types from the backup file.
- **Error Handling:** Implement more robust error handling for API errors (e.g., from the n8n API node or Google Drive nodes), file-not-found issues, or problems during command execution.

Important Note on Credential Security
- **Decrypted Backup File:** This workflow assumes the n8n_backup_credentials.json file contains decrypted credential data, typically created by a companion backup workflow.
- **Execution Environment:** The "Execute Command Get All Credentials" node requires npx and n8n-cli access on the server running n8n.
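For orientation, the name-aggregation step described above boils down to something like this Code-node sketch. It assumes the Execute Command node's stdout holds the JSON array produced by `n8n export:credentials --all --decrypted`; adjust the parsing to your n8n version's actual output.

```javascript
// Sketch: turn the CLI export into a flat list of existing credential
// names, which the "Check For Skipped Credentials" IF node compares against.
const exported = JSON.parse($json.stdout ?? '[]');

// Each exported entry carries at least { id, name, type, data }.
const existingNames = exported.map((cred) => cred.name);

return [{ json: { existingNames } }];
```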
by Mauricio Perera
Overview

This workflow exposes an HTTP endpoint (webhook) that accepts a JSON definition of an n8n workflow, validates it, and—if everything is correct—dynamically creates that workflow in the n8n instance via its internal API. If any validation fails or the API call encounters an error, an explanatory message with details is returned.

Workflow Diagram

    Webhook
      │
      ▼
    Validate JSON ── fails validation ──► Validation Error
      │
      └─ passes ─► Validation Successful?
                     │
                     ├─ true ─► Create Workflow ──► API Successful? ──► Success Response
                     │                                  │
                     │                                  └─ false ─► API Error
                     └─ false ─► Validation Error

Step-by-Step Details

1. Webhook
- **Type**: Webhook (POST)
- **Path**: /webhook/create-workflow
- **Purpose**: Expose a URL to receive a JSON definition of a workflow.
- **Expected Input**: JSON containing the main workflow fields (name, nodes, connections, settings).

2. Validate JSON
- **Type**: Code Node (JavaScript)
- **Validations Performed**:
  - Ensure that the payload exists and contains both name and nodes.
  - Verify that nodes is an array with at least one item.
  - Check that each node includes the required fields: id, name, type, position.
  - If missing, initialize connections, settings, parameters, and typeVersion.
- **Output if Error**: { "success": false, "message": "<error description>" }
- **Output if Valid**: { "success": true, "apiWorkflow": { "name": payload.name, "nodes": payload.nodes, "connections": payload.connections, "settings": payload.settings } }

(A sketch of this node's logic appears after the example responses below.)

3. Validation Successful?
- **Type**: IF Node
- **Condition**: $json.success === true
- **Branches**:
  - true: proceed to Create Workflow
  - false: route to Validation Error

4. Create Workflow
- **Type**: HTTP Request (POST)
- **URL**: http://127.0.0.1:5678/api/v1/workflows
- **Authentication**: Header Auth with internal credentials
- **Body**: The apiWorkflow object generated earlier
- **Options**: continueOnFail: true (to handle failures in the next IF)

5. API Successful?
- **Type**: IF Node
- **Condition**: $response.statusCode <= 299
- **Branches**:
  - true: proceed to Success Response
  - false: route to API Error

6. Success Response
- **Type**: SET Node
- **Output**: { "success": "true", "message": "Workflow created successfully", "workflowId": "{{ $json.data[0].id }}", "workflowName": "{{ $json.data[0].name }}", "createdAt": "{{ $json.data[0].createdAt }}", "url": "http://localhost:5678/workflow/{{ $json.data[0].id }}" }

7. API Error
- **Type**: SET Node
- **Output**: { "success": "false", "message": "Error creating workflow", "error": "{{ JSON.stringify($json) }}", "statusCode": "{{ $response.statusCode }}" }

8. Validation Error
- **Type**: SET Node
- **Output**: { "success": false, "message": "{{ $json.message }}" }

Example Webhook Request

    curl --location --request POST 'http://localhost:5678/webhook/create-workflow' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "name": "My Dynamic Workflow",
      "nodes": [
        {
          "id": "start-node",
          "name": "Start",
          "type": "n8n-nodes-base.manualTrigger",
          "typeVersion": 1,
          "position": [100, 100],
          "parameters": {}
        },
        {
          "id": "set-node",
          "name": "Set",
          "type": "n8n-nodes-base.set",
          "typeVersion": 1,
          "position": [300, 100],
          "parameters": {
            "values": {
              "string": [
                {
                  "name": "message",
                  "value": "Hello from a webhook-created workflow!"
                }
              ]
            }
          }
        }
      ],
      "connections": {
        "Start": {
          "main": [
            [
              {
                "node": "Set",
                "type": "main",
                "index": 0
              }
            ]
          ]
        }
      },
      "settings": {}
    }'

Expected Success Response

    {
      "success": "true",
      "message": "Workflow created successfully",
      "workflowId": "abcdef1234567890",
      "workflowName": "My Dynamic Workflow",
      "createdAt": "2025-05-31T12:34:56.789Z",
      "url": "http://localhost:5678/workflow/abcdef1234567890"
    }

Validation Error Response

    {
      "success": false,
      "message": "The 'name' field is required in the workflow"
    }

API Error Response

    {
      "success": "false",
      "message": "Error creating workflow",
      "error": "{ ...full API response details... }",
      "statusCode": 401
    }
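For completeness, the "Validate JSON" node's logic can be sketched roughly as follows. This mirrors the validations listed in step 2 above, but it is an illustrative reconstruction, not the exact node code.

```javascript
// Sketch of the Validate JSON Code node: check the incoming payload and
// either flag an error or emit the object sent to the n8n API.
const payload = $json.body ?? $json;

if (!payload || !payload.name || !payload.nodes) {
  return [{ json: { success: false, message: "The 'name' and 'nodes' fields are required in the workflow" } }];
}
if (!Array.isArray(payload.nodes) || payload.nodes.length === 0) {
  return [{ json: { success: false, message: "'nodes' must be a non-empty array" } }];
}
for (const node of payload.nodes) {
  for (const field of ['id', 'name', 'type', 'position']) {
    if (node[field] === undefined) {
      return [{ json: { success: false, message: `Node is missing required field '${field}'` } }];
    }
  }
  // Initialize optional fields so the API accepts the definition.
  node.parameters = node.parameters ?? {};
  node.typeVersion = node.typeVersion ?? 1;
}

return [{
  json: {
    success: true,
    apiWorkflow: {
      name: payload.name,
      nodes: payload.nodes,
      connections: payload.connections ?? {},
      settings: payload.settings ?? {},
    },
  },
}];
```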
by Nikan Noorafkan
🧾 Template: Extract Ad Creatives from Google's Ads Transparency Center

This n8n workflow pulls ad creatives from Google's Ads Transparency Center using SerpApi, filtered by a specific domain and region. It extracts, filters, categorizes, and exports ads into neatly formatted CSV files for easy analysis.

👤 Who's it for?
- **Marketing Analysts** researching competitive PPC strategies
- **Ad Intelligence Teams** monitoring creatives from specific brands
- **Digital Marketers** gathering visual and copy trends
- **Journalists & Watchdogs** reviewing ad activity transparency

✅ Features
- **Fetch creatives** using SerpApi's google_ads_transparency_center engine
- **Filter results** to include only ads with an exact match to your target domain
- **Categorize** by ad format: text, image, or video (see the sketch after this section)
- **Export CSVs**: Generates a downloadable file for each format under the /files/ directory

🛠 How to Use
1. Edit the "Set Domain & Region" node
   - domain: e.g. example.com
   - region: SerpApi numeric region code → See codes
2. Add your SerpApi API key in the "Get Ads Page 1" node's credentials section.
3. Run the workflow: click "Test workflow" to initiate the process.
4. Download your results: navigate to /files/ to find:
   - text_{domain}_ads.csv
   - image_{domain}_ads.csv
   - video_{domain}_ads.csv

📌 Notes
- Only the first page (up to 50 creatives) is fetched; pagination is not included.
- Sticky Notes inside the workflow offer helpful internal annotations.
- CSV files include creative-level details: ad copy, images, video links, etc.
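The filter-and-categorize stage amounts to simple list processing. A hedged Code-node sketch is below; the field names (`ad_creatives`, `advertiser`, `format`) are assumptions about SerpApi's response shape — verify them against the actual JSON before relying on this.

```javascript
// Hypothetical sketch: keep only creatives whose advertiser exactly
// matches the target domain, then bucket them by ad format.
const domain = $json.domain;
const creatives = $json.ad_creatives ?? []; // assumed response key

const matched = creatives.filter((ad) => ad.advertiser === domain);

const buckets = { text: [], image: [], video: [] };
for (const ad of matched) {
  (buckets[ad.format] ?? (buckets[ad.format] = [])).push(ad);
}

// One item per format, ready for the CSV export nodes.
return Object.entries(buckets).map(([format, ads]) => ({ json: { format, ads } }));
```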
by Jimleuk
This n8n template showcases the new HTTP tool released in version 1.47.0. Overall, the tool helps simplify AI Agent workflows where custom sub-workflows were performing the same simple HTTP requests.

Comparisons

1. AI agent that can scrape webpages
Remake of https://n8n.io/workflows/2006-ai-agent-that-can-scrape-webpages/
Changes:
- Replaces Execute Workflow Tool and Subworkflow
- Replaces Response Formatting

2. Allow your AI to call an API to fetch data
Remake of https://n8n.io/workflows/2094-allow-your-ai-to-call-an-api-to-fetch-data/
Changes:
- Replaces Execute Workflow Tool and Subworkflow
- Replaces Manual Query Params Definitions
- Replaces Response Formatting
by Solomon
The Stripe API does not include custom fields in invoice or charge data, so you have to get them from the Checkout Sessions endpoint. But that endpoint is not easy for beginners: it has dictionary parameters and pagination settings.

This workflow solves that problem with a preconfigured GET request that fetches all checkout sessions from the last 7 days (see the sketch below). It then transforms the data to make it easier to work with and lets you filter by the custom_fields you want.

Want to generate Stripe invoices automatically? Open 👉 this workflow.

Check out my other templates: https://n8n.io/creators/solomon/
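For context, the preconfigured request is essentially the call below. The endpoint and the `created[gte]` / `limit` / `starting_after` parameters follow Stripe's list conventions; the surrounding code is an illustrative sketch, not the workflow's exact node configuration.

```javascript
// Sketch: list Checkout Sessions created in the last 7 days.
// custom_fields live on the session object, not on invoices or charges.
const sevenDaysAgo = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;

const params = new URLSearchParams({
  'created[gte]': String(sevenDaysAgo),
  limit: '100', // Stripe paginates: pass starting_after=<last id> for the next page
});

const res = await fetch(`https://api.stripe.com/v1/checkout/sessions?${params}`, {
  headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}` },
});
const { data } = await res.json();

// Keep only what the rest of the workflow needs.
const simplified = data.map((s) => ({ id: s.id, custom_fields: s.custom_fields }));
```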
by Max Mitcham
Want to check out all my flows? Follow me on:
- https://maxmitcham.substack.com/
- https://www.linkedin.com/in/max-mitcham/

This automation flow is designed to generate comprehensive, research-backed lead magnet articles based on a user-submitted topic, conduct deep research across multiple sources, and automatically create a professional Google Doc ready for LinkedIn sharing.

⚙️ How It Works (Step-by-Step):

📝 Chat Input (Entry Point)
A user submits a topic through the chat interface:
- Topic for lead magnet content
- Target audience (automatically detected)
- Company context (when relevant)

🔍 Query Builder Agent
An AI agent refines the input by:
- Converting the topic into 5 targeted research queries (see the example output after this section)
- Determining if the topic relates to *company for specialized research
- Using structured output parsing for consistent results

📚 Research Leader Agent
Conducts comprehensive research that:
- Uses Perplexity API for real-time web research
- Integrates *company knowledge base when relevant
- Creates a detailed table of contents with research insights
- Identifies key trends, expert opinions, and case studies

📋 Project Planner Agent
Structures the content by:
- Generating a professional title and subtitle
- Creating 8–10 logical chapter outlines
- Developing detailed writing prompts for each section
- Ensuring step-by-step actionable guidance

✍️ Research Assistant Team
Multiple AI agents write simultaneously:
- Each agent writes one chapter with proper citations
- Maintains a consistent voice across all sections
- Includes real-world examples and implementation steps
- Uses both web research and *company knowledge

📝 Editor Agent
Professional content polishing:
- Refines tone for authenticity and engagement
- Adds image placeholders where appropriate
- Ensures proper flow between chapters
- Optimizes for the LinkedIn lead magnet format

📄 Google Docs Creation
Automated document generation:
- Creates a new Google Doc with formatted content
- Sets proper sharing permissions (public link)
- Organizes it in a designated company folder
- Returns a shareable URL for immediate use

🛠️ Tools Used:
- n8n: Workflow orchestration platform
- Anthropic Claude: Primary AI model for content generation
- OpenRouter: Backup AI model options
- Perplexity API: Real-time research capabilities
- *Company Knowledge Hub: Internal documentation access
- Google Docs API: Document creation and formatting
- Google Drive API: File management and sharing

📦 Key Features:
- End-to-end automation from topic to published document
- Multi-agent approach ensures comprehensive coverage
- Real-time research with proper citations
- Company-specific knowledge integration
- Professional editing and formatting
- Automatic Google Docs creation with sharing
- Scalable content generation (3–5 minutes per article)

🚀 Ideal Use Cases:
- B2B companies building thought leadership content
- Sales teams creating industry-specific lead magnets
- Marketing departments scaling content production
- Consultants developing expertise-demonstrating resources
- SaaS companies creating feature-focused educational content
- Startups establishing market presence without content teams
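As an illustration, the Query Builder Agent's structured output would be parsed into a payload along these lines. The field names here are hypothetical, chosen only to show the shape a structured output parser might enforce.

```javascript
// Hypothetical structured output the Query Builder Agent returns,
// consumed by the downstream research agents.
const exampleQueryPlan = {
  queries: [
    "current adoption statistics for <topic> in 2025",
    "expert opinions and best practices for <topic>",
    "case studies of successful <topic> implementations",
    "common pitfalls and objections around <topic>",
    "emerging trends and predictions for <topic>",
  ],
  isCompanyRelated: false, // toggles the *company knowledge-base research path
};
```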
by Don Jayamaha Jr
🧪 Binance SM 1hour Indicators Tool

A precision trading signal engine that interprets 1-hour candlestick indicators for Binance Spot Market pairs using a GPT-4.1-mini LLM. Ideal for swing traders seeking directional bias and momentum clarity across medium timeframes.

🎥 Watch Tutorial:

🎯 Purpose
This tool provides a structured 1-hour market read using:
- **RSI** (Relative Strength Index)
- **MACD** (Moving Average Convergence Divergence)
- **BBANDS** (Bollinger Bands)
- **SMA & EMA** (Simple and Exponential Moving Averages)
- **ADX** (Average Directional Index)

It's invoked as a sub-agent in broader AI workflows, such as the Binance Financial Analyst Tool and the Spot Market Quant AI Agent. (For intuition, a minimal RSI computation sketch appears at the end of this section.)

⚙️ Key Features

| Feature | Description |
| --- | --- |
| 🔄 Subworkflow Trigger | Runs only when called by a parent agent (not standalone) |
| 🧠 GPT-4.1-mini LLM | Translates numeric indicators into natural-language summaries |
| 📊 Real-time Data | Pulls the latest 40×1h candles via internal webhook from Binance |
| 📥 Input Format | { "message": "ETHUSDT", "sessionId": "telegram_chat_id" } |
| 📤 Output Format | JSON summary + Telegram-friendly HTML overview |

💡 Example Output

📊 1h Technical Overview – ETHUSDT
• RSI: 59 (Neutral)
• MACD: Bullish Crossover
• BBANDS: Price at Upper Band
• EMA > SMA → Positive Slope
• ADX: 28 → Moderate Trend Strength

🧩 Use Cases

| Scenario | Result |
| --- | --- |
| Mid-frame market alignment | Verifies momentum between 15m and 4h timeframes |
| Quant AI Agent input | Supplies trend context for entry/exit decisions |
| Standalone medium-term signal snapshot | Validates swing trade setups or filters noise |

📦 Installation Instructions
1. Import the workflow into your n8n instance
2. Confirm the internal webhook /1h-indicators is live and authorized
3. Insert your OpenAI credentials for the GPT-4.1-mini node
4. Use only when triggered via:
   - Binance Financial Analyst Tool
   - Binance Spot Market Quant AI Agent

🧾 Licensing & Support
🔗 Don Jayamaha – LinkedIn: linkedin.com/in/donjayamahajr
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and signal logic are proprietary. Redistribution or commercial use requires explicit licensing. No unauthorized cloning permitted.
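For intuition about what the indicator stage computes before GPT-4.1-mini interprets it, here is a minimal RSI sketch over closing prices using the standard 14-period Wilder smoothing. It is illustrative only — the tool's internal indicator pipeline is proprietary.

```javascript
// Compute RSI over an array of closing prices (Wilder smoothing).
function rsi(closes, period = 14) {
  let avgGain = 0;
  let avgLoss = 0;

  // Seed the averages with the first `period` price changes.
  for (let i = 1; i <= period; i++) {
    const change = closes[i] - closes[i - 1];
    if (change > 0) avgGain += change; else avgLoss -= change;
  }
  avgGain /= period;
  avgLoss /= period;

  // Smooth over the remaining candles (e.g. the 40×1h window).
  for (let i = period + 1; i < closes.length; i++) {
    const change = closes[i] - closes[i - 1];
    avgGain = (avgGain * (period - 1) + Math.max(change, 0)) / period;
    avgLoss = (avgLoss * (period - 1) + Math.max(-change, 0)) / period;
  }

  if (avgLoss === 0) return 100; // no losses at all → maximally overbought
  return 100 - 100 / (1 + avgGain / avgLoss);
}
```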
by Amjid Ali
AI Chatbot with Conditional Execution for Cost Efficiency

Description

This n8n workflow implements an AI-powered chatbot that only runs when a chat is initiated on a website. By introducing a conditional step, the workflow ensures that AI tokens are not consumed unnecessarily, making it a cost-efficient and resource-optimized solution.

The chatbot, named Sophia, serves as an interactive assistant for SyncBricks. It helps users with guest posting services, YouTube review videos, IT consultancy, and online courses while collecting user details step by step. The chatbot ensures that inquiries are properly logged and confirmed before proceeding to AI-driven responses.

This template is ideal for businesses, service providers, and content creators who want to optimize AI token usage while delivering personalized, interactive engagement with their users.

Features
- Conditional Execution – The AI chatbot only activates when a chat is initiated, avoiding unnecessary API calls.
- AI-Powered Conversations – Uses Google Gemini AI to generate human-like responses.
- Step-by-Step Data Collection – Ensures structured user input, requesting name, email, and request type sequentially.
- Memory Buffer for Context Awareness – Maintains conversation context using a window buffer memory system.
- Multiple Service Offerings – Supports inquiries related to:
  - Guest Posting Services
  - YouTube Review Videos
  - Online Courses on Udemy
  - IT Consultancy Services
- Automated Confirmation Messages – After collecting user details, sends a confirmation message summarizing the request.

How It Works
1. Chat Message Trigger – The workflow starts only when a chat message is received from the website. This ensures no AI tokens are consumed unless a user initiates a chat.
2. Condition Check: Is Chat Input Provided? – The workflow checks if the chat input is non-empty (see the sketch after this section). If the chat input is empty, the workflow stops, ensuring no unnecessary API usage. If a message is detected, the chatbot continues processing.
3. AI-Powered Chat Response – The chatbot, Sophia, generates personalized responses using Google Gemini AI and ensures a structured conversation flow by collecting:
   - User's Full Name
   - Email ID
   - Request Type
4. Memory Buffer for Context Retention – A Window Buffer Memory system stores chat history and retrieves previous responses to ensure context-aware conversations.
5. Response Optimization – Checks memory to avoid asking the same question twice. If details are already provided, Sophia moves directly to processing the request.
6. Confirmation & User Engagement – After collecting the required details, Sophia summarizes the request as follows: "Got it [Name], your request is [Request Type]. I will be sending the details to your email ID: [Email]. Hold on while I send confirmation."
7. Final Confirmation Message – Ensures the user receives a proper acknowledgment of their inquiry.

Prerequisites
Before using this workflow, make sure you have:
- n8n Instance (Cloud or Self-Hosted)
- Google Gemini API Key (for AI-generated responses)
- Webhook Integration (to trigger the chatbot from your website)

Use Cases
- Businesses & Enterprises – AI-powered lead qualification for services.
- Bloggers & Content Creators – Automated guest post inquiry handling.
- YouTube Influencers & Educators – AI chatbot to promote courses and review services.
- Marketing Agencies – Lead generation chatbot without excessive AI token consumption.
- E-Commerce & Consulting Services – AI-driven personalized customer engagement.

Nodes Used in This Workflow
- Chat Trigger (Webhook) – Initiates only when a user sends a chat message.
- Conditional Check (If Node) – Ensures AI is only used when a chat is initiated.
- AI Agent (Google Gemini AI) – Generates intelligent chatbot responses.
- Memory Buffer (Context Retention) – Stores user inputs for context-aware conversations.

Important: Start with n8n | Learn n8n with Amjid | Get n8n Book | What is Proxmox

Creator Information
- Developed by: Amjid Ali
- Website: SyncBricks
- Email: amjid@amjidali.com
- LinkedIn: Amjid Ali
- YouTube: SyncBricks

Support & Contributions
If you find this workflow helpful, consider supporting my work: Donate via PayPal. For full courses on n8n, visit: Course by Amjid.

Final Thoughts
This n8n workflow ensures optimal AI token usage while engaging users with an intelligent chatbot. By integrating conditional execution, it prevents unnecessary API calls, making it cost-effective and efficient for businesses looking to automate chat-based customer interactions. Let me know if you need any modifications!
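The cost-saving gate in step 2 reduces to a one-line emptiness check. A minimal sketch, assuming the Chat Trigger's standard `chatInput` output field:

```javascript
// Sketch: only let the flow continue to the Gemini agent when the user
// actually typed something, so no tokens are spent on empty events.
const input = String($json.chatInput ?? '').trim();
return [{ json: { hasInput: input.length > 0, chatInput: input } }];
```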
by Samir Saci
Tags: Sustainability, Business Travel, Carbon Emissions, Flight Tracking, Carbon Interface API

Context

Hi! I'm Samir — a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies monitor and reduce their environmental footprint by combining AI automation, carbon estimation APIs, and workflow automation. This workflow is part of our sustainability reporting initiative, allowing businesses to track the CO₂ emissions of employee flights.

> Automate carbon tracking for your business travel with AI-powered workflows in n8n!

📬 For business inquiries, feel free to connect with me on LinkedIn.

Who is this template for?

This workflow is designed for travel managers, sustainability teams, or finance teams who need to measure and report on emissions from business travel.

Let's imagine your company receives a flight confirmation email: the AI Agent reads the email and extracts structured data, such as flight dates, airport codes, and number of passengers. Then the Carbon Interface API is called to estimate CO₂ emissions, which are stored in a Google Sheet for sustainability reporting.

How does it work?

This workflow automates the end-to-end process of tracking flight emissions, from email to CO₂ estimation:
- 📨 Gmail Trigger captures booking confirmations
- 🧠 AI Agent extracts structured data (airports, dates, flight numbers)
- ✈️ Each flight leg is processed individually
- 🌍 Carbon Interface API returns distance and carbon emissions (see the request sketch after this section)
- 📄 A second Google Sheet node appends the emission data for reporting

Steps:
1. 💌 Trigger on a new flight confirmation email
2. 🧠 Extract structured trip data using the AI Agent (flights, airports, dates)
3. 📑 Store flight metadata in Google Sheets
4. 🧭 For each leg, call the Carbon Interface API
5. 📥 Append distance, CO₂ in kg, and timestamp to the flight row

What do I need to get started?

You'll need:
- A Gmail account receiving SAP Concur or travel confirmation emails
- A Google Sheet to record trip metadata and CO₂ emissions
- A free Carbon Interface API key
- Access to OpenAI for parsing the email via the AI Agent
- A few sample flight confirmation emails to test

Next Steps

🗒️ Use the sticky notes in the n8n canvas to:
- Add your Gmail and Carbon Interface credentials
- Send a sample booking email to your inbox
- Verify that emissions and distances are correctly added to your sheet

This template was built using n8n v1.93.0.
Submitted: June 7, 2025
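For reference, the per-leg emissions call follows Carbon Interface's flight-estimate schema. A hedged sketch of the request the HTTP node sends (check their documentation for the authoritative field names):

```javascript
// Sketch: request a flight CO₂ estimate from Carbon Interface.
// One call per leg; airports are IATA codes extracted by the AI Agent.
const res = await fetch('https://www.carboninterface.com/api/v1/estimates', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.CARBON_INTERFACE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    type: 'flight',
    passengers: 1,
    legs: [{ departure_airport: 'CDG', destination_airport: 'JFK' }], // example leg
  }),
});

const { data } = await res.json();
// data.attributes holds the distance and carbon estimate, which the
// workflow appends to the Google Sheet row for that leg.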
by Samir Saci
Tags: Ghost CMS, SEO Audit, Image Optimisation, Alt Text, Google Sheets, Automation

Context

Hi! I'm Samir — a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies and content creators use automation and analytics to improve visibility, enhance performance, and reduce manual work.

> Let's use n8n to automate SEO audits to increase your traffic!

📬 For business inquiries, feel free to connect on LinkedIn.

Who is this template for?

This workflow is perfect for bloggers, marketers, or content teams using Ghost CMS who want to:
- Extract and review all images from articles
- Detect missing or short alt texts
- Check image file size and filename SEO compliance
- Push the audit results into a Google Sheet

How does it work?

This n8n workflow extracts all blog posts from Ghost CMS, scans the HTML to collect all embedded images, then evaluates each image for:
- ✅ Presence and length of alt text
- 📏 File size in kilobytes
- 🔤 Filename SEO quality (e.g. lowercase, hyphenated, no special characters)

All findings are written to Google Sheets for further analysis or manual cleanup. (A sketch of the per-image audit logic appears after this section.)

🧭 Workflow Steps:
1. 🚀 Trigger the workflow manually or on a schedule
2. 📰 Extract blog post content from Ghost CMS
3. 🖼️ Parse all `<img>` tags with `src` and `alt` attributes
4. 📤 Store image metadata in a Google Sheet (step 1)
5. 🌐 Download each image using an HTTP request
6. 🧮 Extract file size, extension, and a filename SEO flag
7. 📄 Update the audit sheet with size and format insights

What do I need to get started?

This workflow requires:
- A Ghost Content API key
- A Google Sheet (to log audit results)
- No AI or external APIs — it works fully with built-in nodes

Next Steps

🗒️ Follow the sticky notes inside the workflow to:
- Plug in your Ghost blog credentials
- Select or create a Google Sheet
- Run the audit and start improving your SEO!

This template was built using n8n v1.93.0.
Submitted: June 8, 2025
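The per-image audit logic is plain string work. A minimal Code-node sketch, assuming the post HTML sits in the Ghost Content API's `html` field (the regex is naive — it expects `alt` to follow `src` — so treat it as an outline):

```javascript
// Sketch: pull <img> tags out of a post's HTML and flag SEO issues.
const html = $json.html ?? '';
const imgRegex = /<img[^>]*src="([^"]+)"(?:[^>]*alt="([^"]*)")?[^>]*>/g;

const findings = [];
let match;
while ((match = imgRegex.exec(html)) !== null) {
  const [, src, alt = ''] = match;
  const filename = src.split('/').pop() ?? '';
  findings.push({
    src,
    alt,
    altMissingOrShort: alt.trim().length < 10, // tune the threshold
    filenameSeoOk: /^[a-z0-9-]+\.[a-z]+$/.test(filename), // lowercase + hyphens only
  });
}

return findings.map((f) => ({ json: f }));
```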
by explorium
Automatically enrich prospect data from HubSpot using Explorium and create leads in Salesforce

This n8n workflow streamlines the process of enriching prospect information by automatically pulling data from HubSpot, processing it through Explorium's AI-powered tools, and creating new leads in Salesforce with enhanced prospect details.

Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

HubSpot
- **Type**: App Token (or OAuth2 for broader compatibility)
- **Used for**: Triggering on new contacts, fetching contact data

Explorium API
- **Type**: Generic Header Auth
- **Header**: Authorization
- **Value**: Bearer YOUR_API_KEY
- Get an Explorium API key

Salesforce
- **Type**: OAuth2 or Username/Password
- **Used for**: Creating new lead records

Go to Settings → Credentials, create these three credentials, and assign them in the respective nodes before running the workflow.

Workflow Overview

Node 1: HubSpot Trigger
This node listens for real-time events from the connected HubSpot account. Once triggered, it passes metadata about the event to the next step in the flow.

Node 2: HubSpot
This node fetches contact details from HubSpot after the trigger event.
- **Credential**: Connected using a HubSpot App Token
- **Resource**: Contact
- **Operation**: Get Contact
- **Return All**: Disabled
It retrieves the full contact details needed for further processing and enrichment.

Node 3: Match Prospect
This node sends each contact's data to Explorium's AI-powered prospect matching API in real time.
- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/match
- **Authentication**: Generic Header Auth (using a configured credential)
- **Headers**: Content-Type: application/json
The request body is dynamically built from contact data, typically including full_name, company_name, email, phone_number, and linkedin. These fields are matched against Explorium's intelligence graph to return enriched or validated profiles.
Response output: total_matches, matched_prospects, and a prospect_id. Each response is used downstream to enrich, validate, or create lead information.

Node 4: Filter
This node filters the output of the Match Prospect step so that only valid, matched results continue in the flow. Only records that contain at least one matched prospect with a non-null prospect_id are passed forward.
Status: Currently deactivated (as shown by the "Deactivate" label).

Node 5: Extract Prospect IDs from Matched Results
This node extracts all valid prospect_id values from previously matched prospects and compiles them into a flat array. It loops over all matched items, extracts each prospect_id from the matched_prospects array, and returns a single object with an array of all prospect_ids.

Node 6: Explorium Enrich Contacts Information
This node performs bulk enrichment of contacts by querying Explorium with a list of matched prospect_ids.
- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/contacts_information/bulk_enrich
- **Authentication**: Header Auth (using saved credentials)
- **Headers**: "Content-Type": "application/json", "Accept": "application/json"
Returns enriched contact information, such as:
- **emails**: professional/personal email addresses
- **phone_numbers**: mobile and work numbers
- professions_email, professional_email_status, mobile_phone

Node 7: Explorium Enrich Profiles
This additional enrichment node provides supplementary contact data enhancement, running in parallel with the primary enrichment process.

Node 8: Merge
This node combines multiple data streams from the parallel enrichment processes into a single output, letting you consolidate data from different Explorium enrichment endpoints. The "combine" setting means it merges the incoming streams rather than overwriting them.

Node 9: Code – flatten
This custom code node processes and transforms the merged enrichment data before creating the Salesforce lead (see the sketch after this section). It can be used to:
- Flatten nested data structures
- Format data according to Salesforce field requirements
- Apply business logic or data validation
- Map Explorium fields to Salesforce lead properties
- Handle data type conversions

Node 10: Salesforce
This final node creates new leads in Salesforce using the enriched data returned by Explorium.
- **Credential**: Salesforce OAuth2 or Username/Password
- **Resource**: Lead
- **Operation**: Create Lead
It creates new lead records with enriched information, including contact details, company information, and professional data obtained through the Explorium enrichment process.

Workflow Flow Summary
1. Trigger: HubSpot webhook fires on new/updated contacts
2. Fetch: Retrieve contact details from HubSpot
3. Match: Find prospect matches using Explorium
4. Filter: Keep only successfully matched prospects (currently deactivated)
5. Extract: Compile prospect IDs for bulk enrichment
6. Enrich: Parallel enrichment of contact information through multiple Explorium endpoints
7. Merge: Combine enrichment results
8. Transform: Flatten and prepare data for Salesforce (Code node)
9. Create: Create new lead records in Salesforce

This workflow ensures comprehensive data enrichment while maintaining data quality and providing a seamless integration between HubSpot prospect data and Salesforce lead creation. The parallel enrichment structure maximizes data-collection efficiency before creating high-quality leads in your CRM system.
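As an illustration of the "Code – flatten" step, the sketch below maps nested enrichment output onto the flat fields a Salesforce "Create Lead" node expects. Every field path on the Explorium side is an assumption here — adjust them to the actual merged response.

```javascript
// Hypothetical flatten: one Salesforce-ready item per enriched prospect.
return $input.all().map((item) => {
  const d = item.json.data ?? item.json; // assumed response envelope

  return {
    json: {
      LastName: d.full_name ?? 'Unknown',          // Salesforce requires LastName
      Company: d.company_name ?? 'Unknown',        // ...and Company
      Email: d.emails?.[0]?.address ?? d.professions_email, // assumed paths
      Phone: d.phone_numbers?.[0]?.number ?? d.mobile_phone,
    },
  };
});
```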