by Friedemann Schuetz
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

Welcome to my Wikipedia Podcast Telegram Bot Workflow! This workflow creates an intelligent Telegram bot that transforms Wikipedia articles into engaging 5-minute podcast episodes using natural language queries and voice messages.

What this workflow does

This workflow processes incoming Telegram messages (text or voice, e.g. "Berlin") and generates professional podcast content about any Wikipedia topic (e.g. "Berlin", "Shakespeare", etc.). The AI agent researches the requested subject, creates a structured podcast script, and delivers it as high-quality audio directly through Telegram.

Key Features:
- Voice message support (speech-to-text and text-to-speech)
- Wikipedia research integration for accurate content
- Professional podcast structure (intro, main content, outro)
- Natural-sounding AI voice synthesis
- Conversational and educational tone optimized for audio consumption

This workflow has the following sequence:
1. **Telegram Trigger** - Receives incoming messages (text or voice) from users via the Telegram bot
2. **Text or Voice Switch** - Routes the message based on input type (text message vs. voice message)
3. **Voice Message Processing** (if voice input):
   - Retrieves the voice file from Telegram
   - Transcribes the voice message to text using OpenAI Whisper
4. **Text Message Preparation** (if text input) - Prepares the text message for the AI agent
5. **Wikipedia Podcast Agent** - Core AI agent that:
   - Researches the requested topic using the Wikipedia tool
   - Creates a professional 5-minute podcast script (600-750 words)
   - Follows a structured format: intro, main content, outro
   - Uses a conversational, accessible, and enthusiastic tone
6. **ElevenLabs Text to Speech** - Converts the podcast script into natural-sounding audio using AI voice synthesis (a minimal sketch appears at the end of this description)
7. **Send Voice Response** - Delivers the generated podcast audio back to the user via Telegram

Requirements:
- **Telegram Bot API**: Documentation
  - Create a bot via @BotFather on Telegram
  - Get the bot token and configure the webhook
- **Anthropic API** (Claude 4 Sonnet): Documentation
  - Used for AI agent processing and podcast script generation
  - Provides Wikipedia research capabilities
- **OpenAI API**: Documentation
  - Used for speech transcription (Whisper model)
- **ElevenLabs API**: Documentation
  - Used for high-quality text-to-speech generation
  - Provides natural-sounding voice synthesis

Important: The workflow uses the Wikipedia tool integrated with Claude 4 Sonnet to ensure accurate and comprehensive research. The AI agent is specifically prompted to create engaging, educational podcast content suitable for audio consumption.

Configuration Notes:
- Update the Telegram chat ID in the trigger for your specific bot
- Modify the voice selection in ElevenLabs for different narrator styles
- The system prompt can be customized for different podcast formats or target audiences
- Supports individual users and can be extended for group chats

Feel free to contact me via LinkedIn if you have any questions!
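For orientation, here is a hedged JavaScript sketch of roughly what the ElevenLabs text-to-speech step performs. The workflow itself handles this through its ElevenLabs node; the voice ID, API key, model choice, and voice settings below are illustrative placeholders, not the template's actual configuration.

```javascript
// Hedged sketch: the ElevenLabs TTS call, as plain JavaScript (Node 18+).
// VOICE_ID, the API key, the model, and the voice settings are placeholders.
const VOICE_ID = 'YOUR_VOICE_ID';
const ELEVENLABS_API_KEY = 'YOUR_ELEVENLABS_API_KEY';
const podcastScript = "Welcome to today's five-minute episode about Berlin...";

const response = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`, {
  method: 'POST',
  headers: {
    'xi-api-key': ELEVENLABS_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    text: podcastScript,                // the 600-750 word script from the AI agent
    model_id: 'eleven_multilingual_v2', // one possible model choice
    voice_settings: { stability: 0.5, similarity_boost: 0.75 },
  }),
});
const audio = Buffer.from(await response.arrayBuffer()); // audio to send back via Telegram
```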
by Samir Saci
**Tags**: Sustainability, Business Travel, Carbon Emissions, Flight Tracking, Carbon Interface API

Context

Hi! I’m Samir — a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies monitor and reduce their environmental footprint by combining AI automation, carbon estimation APIs, and workflow automation. This workflow is part of our sustainability reporting initiative, allowing businesses to track the CO₂ emissions of employee flights.

> Automate carbon tracking for your business travel with AI-powered workflows in n8n!

📬 For business inquiries, feel free to connect with me on LinkedIn

Who is this template for?

This workflow is designed for travel managers, sustainability teams, or finance teams who need to measure and report on emissions from business travel.

Let’s imagine your company receives a flight confirmation email: the AI Agent reads the email and extracts structured data, such as flight dates, airport codes, and number of passengers. Then the Carbon Interface API is called to estimate CO₂ emissions, which are stored in a Google Sheet for sustainability reporting.

How does it work?

This workflow automates the end-to-end process of tracking flight emissions, from email to CO₂ estimation:
- 📨 A Gmail Trigger captures booking confirmations
- 🧠 An AI Agent extracts structured data (airports, dates, flight numbers)
- ✈️ Each flight leg is processed individually
- 🌍 The Carbon Interface API returns distance and carbon emissions
- 📄 A second Google Sheets node appends the emission data for reporting

Steps:
1. 💌 Trigger on a new flight confirmation email
2. 🧠 Extract structured trip data using the AI Agent (flights, airports, dates)
3. 📑 Store flight metadata in Google Sheets
4. 🧭 For each leg, call the Carbon Interface API (a hedged sketch of this call appears at the end of this description)
5. 📥 Append distance, CO₂ in kg, and timestamp to the flight row

What do I need to get started?

You’ll need:
- A Gmail account receiving SAP Concur or travel confirmation emails
- A Google Sheet to record trip metadata and CO₂ emissions
- A free Carbon Interface API key
- Access to OpenAI for parsing the email via the AI Agent
- A few sample flight confirmation emails to test

Next Steps

🗒️ Use the sticky notes in the n8n canvas to:
- Add your Gmail and Carbon Interface credentials
- Send a sample booking email to your inbox
- Verify that emissions and distances are correctly added to your sheet

This template was built using n8n v1.93.0
Submitted: June 7, 2025
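As referenced in step 4 above, here is a minimal sketch of the per-leg Carbon Interface estimate call. The airport codes and passenger count are illustrative, and the exact response fields should be verified against the Carbon Interface documentation.

```javascript
// Hedged sketch of the per-leg Carbon Interface flight estimate (step 4 above).
// Airport codes and passenger count are illustrative; supply your own API key.
const CARBON_INTERFACE_API_KEY = 'YOUR_API_KEY';

const res = await fetch('https://www.carboninterface.com/api/v1/estimates', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${CARBON_INTERFACE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    type: 'flight',
    passengers: 1,
    legs: [{ departure_airport: 'CDG', destination_airport: 'JFK' }],
  }),
});
const { data } = await res.json();
// data.attributes is expected to include distance_value and carbon_kg for the sheet row
```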
by Samir Saci
**Tags**: Ghost CMS, SEO Audit, Image Optimisation, Alt Text, Google Sheets, Automation

Context

Hi! I’m Samir — a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies and content creators use automation and analytics to improve visibility, enhance performance, and reduce manual work.

> Let’s use n8n to automate SEO audits to increase your traffic!

📬 For business inquiries, feel free to connect on LinkedIn

Who is this template for?

This workflow is perfect for bloggers, marketers, or content teams using Ghost CMS who want to:
- Extract and review all images from articles
- Detect missing or short alt texts
- Check image file size and filename SEO compliance
- Push the audit results into a Google Sheet

How does it work?

This n8n workflow extracts all blog posts from Ghost CMS, scans the HTML to collect all embedded images, then evaluates each image for:
- ✅ Presence and length of alt text
- 📏 File size in kilobytes
- 🔤 Filename SEO quality (e.g. lowercase, hyphenated, no special characters)

All findings are written to Google Sheets for further analysis or manual cleanup. A sketch of the audit logic appears at the end of this description.

🧭 Workflow Steps:
1. 🚀 Trigger the workflow manually or on a schedule
2. 📰 Extract blog post content from Ghost CMS
3. 🖼️ Parse all `<img>` tags with `src` and `alt` attributes
4. 📤 Store image metadata in a Google Sheet (step 1)
5. 🌐 Download each image using an HTTP request
6. 🧮 Extract file size, extension, and a filename SEO flag
7. 📄 Update the audit sheet with size and format insights

What do I need to get started?

This workflow requires:
- A Ghost Content API key
- A Google Sheet (to log audit results)

No AI or external APIs required — it works fully with built-in nodes.

Next Steps

🗒️ Follow the sticky notes inside the workflow to:
- Plug in your Ghost blog credentials
- Select or create a Google Sheet
- Run the audit and start improving your SEO!

This template was built using n8n v1.93.0
Submitted: June 8, 2025
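Here is a hedged sketch of the audit checks described above, written as an n8n Code node body. The incoming field names (`src`, `alt`, `sizeKb`) and the 200 KB threshold are assumptions, not the template's actual field mapping.

```javascript
// Hedged sketch of the per-image audit: alt-text, filename, and size checks.
// Field names (src, alt, sizeKb) and the 200 KB threshold are assumptions.
return $input.all().map((item) => {
  const { src, alt, sizeKb } = item.json;
  const filename = (src || '').split('/').pop().split('?')[0];
  const altOk = typeof alt === 'string' && alt.trim().length >= 10;
  // SEO-friendly filename: lowercase, hyphen-separated, alphanumeric, common image extension
  const filenameOk = /^[a-z0-9]+(-[a-z0-9]+)*\.(jpe?g|png|webp|gif|svg)$/.test(filename);
  return {
    json: { ...item.json, filename, altOk, filenameOk, oversized: Number(sizeKb) > 200 },
  };
});
```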
by explorium
Automatically enrich prospect data from HubSpot using Explorium and create leads in Salesforce

This n8n workflow streamlines the process of enriching prospect information by automatically pulling data from HubSpot, processing it through Explorium's AI-powered tools, and creating new leads in Salesforce with enhanced prospect details.

Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

HubSpot
- **Type**: App Token (or OAuth2 for broader compatibility)
- **Used for**: triggering on new contacts, fetching contact data

Explorium API
- **Type**: Generic Header Auth
- **Header**: Authorization
- **Value**: Bearer YOUR_API_KEY
- Get Explorium API key

Salesforce
- **Type**: OAuth2 or Username/Password
- **Used for**: creating new lead records

Go to Settings → Credentials, create these three credentials, and assign them in the respective nodes before running the workflow.

Workflow Overview

Node 1: HubSpot Trigger
This node listens for real-time events from the connected HubSpot account. Once triggered, it passes metadata about the event to the next step in the flow.

Node 2: HubSpot
This node fetches contact details from HubSpot after the trigger event.
- **Credential**: Connected using a HubSpot App Token
- **Resource**: Contact
- **Operation**: Get Contact
- **Return All**: Disabled
It retrieves the full contact details needed for further processing and enrichment.

Node 3: Match prospect
This node sends each contact's data to Explorium's AI-powered prospect matching API in real time.
- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/match
- **Authentication**: Generic Header Auth (using a configured credential)
- **Headers**: Content-Type: application/json
The request body is dynamically built from contact data, typically including full_name, company_name, email, phone_number, and linkedin. These fields are matched against Explorium's intelligence graph to return enriched or validated profiles.
Response output: total_matches, matched_prospects, and a prospect_id. Each response is used downstream to enrich, validate, or create lead information.

Node 4: Filter
This node filters the output of the Match prospect step so that only valid, matched results continue in the flow. Only records that contain at least one matched prospect with a non-null prospect_id are passed forward.
Status: currently deactivated (as shown by the "Deactivate" label).

Node 5: Extract Prospect IDs from Matched Results
This node extracts all valid prospect_id values from previously matched prospects and compiles them into a flat array. It loops over all matched items, extracts each prospect_id from the matched_prospects array, and returns a single object with an array of all prospect_ids.

Node 6: Explorium Enrich Contacts Information
This node performs bulk enrichment of contacts by querying Explorium with the list of matched prospect_ids.
Node configuration:
- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/contacts_information/bulk_enrich
- **Authentication**: Header Auth (using saved credentials)
- **Headers**: Content-Type: application/json, Accept: application/json
Returns enriched contact information, such as:
- **emails**: professional/personal email addresses
- **phone_numbers**: mobile and work numbers
- professions_email, professional_email_status, mobile_phone

Node 7: Explorium Enrich Profiles
This additional enrichment node provides supplementary contact data enhancement, running in parallel with the primary enrichment process.
Node 8: Merge
This node combines the multiple data streams from the parallel enrichment processes into a single output, letting you consolidate data from different Explorium enrichment endpoints. The "combine" setting merges the incoming data streams rather than overwriting them.

Node 9: Code - flatten
This custom Code node processes and transforms the merged enrichment data before the Salesforce lead is created (a minimal sketch appears after the flow summary below). It can be used to:
- Flatten nested data structures
- Format data according to Salesforce field requirements
- Apply business logic or data validation
- Map Explorium fields to Salesforce lead properties
- Handle data type conversions

Node 10: Salesforce
This final node creates new leads in Salesforce using the enriched data returned by Explorium.
- **Credential**: Salesforce OAuth2 or Username/Password
- **Resource**: Lead
- **Operation**: Create Lead
The node creates new lead records with enriched information, including contact details, company information, and professional data obtained through the Explorium enrichment process.

Workflow Flow Summary
1. Trigger: HubSpot webhook fires on new/updated contacts
2. Fetch: Retrieve contact details from HubSpot
3. Match: Find prospect matches using Explorium
4. Filter: Keep only successfully matched prospects (currently deactivated)
5. Extract: Compile prospect IDs for bulk enrichment
6. Enrich: Parallel enrichment of contact information through multiple Explorium endpoints
7. Merge: Combine enrichment results
8. Transform: Flatten and prepare data for Salesforce (Code node)
9. Create: Create new lead records in Salesforce

This workflow ensures comprehensive data enrichment while maintaining data quality, providing a seamless integration between HubSpot prospect data and Salesforce lead creation. The parallel enrichment structure maximizes data collection efficiency before creating high-quality leads in your CRM system.
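As referenced under Node 9, here is one possible body for the "Code - flatten" node. All field names are assumptions about Explorium's response shape, not a confirmed schema; adapt the mapping to the fields your enrichment actually returns.

```javascript
// One possible "Code - flatten" body: collapse nested enrichment data into
// flat, Salesforce-ready fields. All field names here are assumptions.
return $input.all().map((item) => {
  const d = item.json;
  const name = d.full_name || '';
  return {
    json: {
      FirstName: name.split(' ')[0] || '',
      LastName: name.split(' ').slice(1).join(' ') || '',
      // emails/phone_numbers shapes are assumed; adjust to the real response
      Email: Array.isArray(d.emails) && d.emails.length ? d.emails[0].address : '',
      Phone: Array.isArray(d.phone_numbers) && d.phone_numbers.length ? d.phone_numbers[0] : '',
      Company: d.company_name || '',
    },
  };
});
```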
by Miquel Colomer
🎯 Precision Prospecting: Automate LinkedIn Lead Gen with n8n & Bright Data

📝 Overview
This workflow turns n8n into an AI-powered prospector: it automatically searches Google for LinkedIn profiles, scrapes profile data via Bright Data, and summarizes the key details. Ideal for sales and recruitment teams seeking targeted lead lists without manual research.

🎥 Workflow in Action
Want to see this workflow in action? You have a chat window output below.

🔑 Key Features
- **AI Chat Trigger**: Start prospecting via conversational prompts.
- **Contextual Memory**: Retains the last 20 messages for coherent dialogue.
- **Automated Google Search**: Generates site-restricted queries and fetches the top result.
- **Bright Data Scraping**: Synchronously scrapes LinkedIn profile details by URL.
- **Intelligent Filtering**: Extracts only valid LinkedIn profile links (see the sketch after this section).
- **Limit Control**: Returns a single, most relevant profile per request.
- **LLM Summary**: Uses GPT-4o-mini to interpret and present scraped data.

🚀 How It Works (Step-by-Step)

Prerequisites:
- n8n ≥ v1.0 with community nodes: install n8n-nodes-brightdata (an unverified community node).
- API credentials: OpenAI, Bright Data (web unlocker zone "web_unlocker1").
- Webhook endpoint for the chat trigger.

Node Configuration:
1. When chat message received (chatTrigger): Fires on a user prompt.
2. Simple Memory1 (memoryBufferWindow): Stores the last 20 chat messages.
3. AI Prospector Agent (agent): Orchestrates the search logic.
4. Get 1 Google Result (brightData): Performs a Google search with site:linkedin.com/in.
5. Get Links from Body (html): Extracts all `<a>` hrefs from the search result page.
6. Extract Links (splitOut): Splits out individual link entries.
7. Filter only LinkedIn Profiles (filter): Ensures the URL contains "linkedin.com/" and starts with "https://".
8. Limit (limit): Restricts output to the first valid profile URL.
9. Search LinkedIn URI (toolWorkflow): Passes the URL to a secondary workflow to fetch the first link.
10. Get LinkedIn Profile Data (brightDataTool): Scrapes the profile JSON.
11. OpenAI Chat Model (lmChatOpenAi): Summarizes and formats the scraped data.

Workflow Logic:
- The user asks for a person by company & name, company & position, or LinkedIn URL.
- The Agent builds a Google query (e.g., site:linkedin.com/in bright data cmo) and calls "Get 1 Google Result."
- Extracted links are filtered and limited to the top valid profile.
- If the user provided a direct LinkedIn URL, the Agent skips the search and scrapes immediately.
- The scraped profile JSON is passed to GPT-4o-mini to generate a concise summary.

Testing & Optimization:
- Trigger via Execute Workflow for dry runs.
- Inspect intermediate node outputs in n8n’s Execution panel.
- Adjust maxIterations or the memory window length for performance.
- Tune the Bright Data zone or country settings to optimize scraping speed.

Deployment & Monitoring:
- Activate the workflow and expose its webhook URL.
- Use n8n’s built-in alerts or external monitoring (e.g., Slack notifications) on failures.
- Rotate credentials via n8n’s Credential Vault when needed.
- Version-control the workflow via duplicates or Git-backed n8n instances.

✅ Pre-requisites
- **OpenAI Account**: API key for GPT-4o-mini.
- **Bright Data Account**: Zone "web_unlocker1" and dataset gd_l1viktl72bvl7bjuj0.
- **n8n Version**: v1.0+ with community nodes installed.
- **Permissions**: Webhook access, Credential Vault read/write.

👤 Who Is This For?
- Sales teams automating outbound LinkedIn prospecting.
- Recruiters sourcing candidates without manual scraping.
- Marketing ops teams looking to enrich their CRM with accurate profile data.
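As referenced under Key Features, here is a hedged sketch of the "Filter only LinkedIn Profiles" condition (node 7) as code. In the workflow this is a declarative Filter node; the link field name `href` is an assumption about the output of the HTML extraction step.

```javascript
// Hedged sketch of the "Filter only LinkedIn Profiles" condition as code.
// Assumes the extracted link field is named "href".
return $input.all().filter((item) => {
  const url = item.json.href || '';
  return url.startsWith('https://') && url.includes('linkedin.com/');
});
```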
📈 Benefits & Use Cases
- **Efficiency**: Reduces hours of manual search and data entry to seconds.
- **Accuracy**: Filters out non-LinkedIn links and ensures high-quality results.
- **Scalability**: Handles multiple prospect requests concurrently via chat or API.
- **Integration**: Easily hooks into CRMs or email sequencers downstream.

Workflow created and verified by Miquel Colomer https://www.linkedin.com/in/miquelcolomersalas/ and N8nHackers https://n8nhackers.com
by WeblineIndia
This workflow is created by AI developers at WeblineIndia. It streamlines content management by automatically identifying and fetching the most recently added Google Doc from your Google Drive, extracting the document's content, and using an AI model to generate a concise, meaningful summary of the extracted text. The summary is then stored in a designated Google Sheet alongside relevant details such as the document name and the date it was added, providing an organized, easily accessible reference for future use. This automation simplifies document handling, enhances productivity, and ensures seamless data management.

Steps:

1. Fetch the Most Recent Document from Google Drive
   - **Action**: Use the Google Drive node.
   - **Details**: List files, filter by date to fetch the most recently added .doc file, and retrieve its file ID and metadata (a minimal sketch appears at the end of this description).

2. Extract Content from the Document
   - **Action**: Use the Google Docs node.
   - **Details**: Set the operation to "Get Content," pass the file ID, and extract the document's text content.

3. Summarize the Document Using an AI Model
   - **Action**: Use an AI model node (e.g., OpenAI, ChatGPT).
   - **Details**: Provide the extracted text to the AI model, use a prompt to generate a summary, and capture the result.

4. Store the Summarized Content in Google Sheets
   - **Action**: Use the Google Sheets node.
   - **Details**: Append a new row to the target sheet with details such as the original document name, summary, and date added.

About WeblineIndia
WeblineIndia specializes in delivering innovative and custom AI solutions to simplify and automate business processes. If you need any help, please reach out to us.
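As referenced in step 1, here is a minimal sketch of the "most recent file" selection as an n8n Code node body. It assumes the Google Drive node returned one item per file with a `createdTime` field, which is an assumption about the node's output rather than the template's exact configuration.

```javascript
// Hedged sketch of step 1: pick the most recently created document from a file list.
// Assumes one item per file with json.createdTime and json.id.
const files = $input.all().map((i) => i.json);
files.sort((a, b) => new Date(b.createdTime) - new Date(a.createdTime));
return [{ json: files[0] }]; // the newest document; its id feeds the Google Docs node
```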
by Miquel Colomer
📝 Overview
This workflow transforms n8n into a smart real-estate concierge by combining an AI chat interface with Bright Data’s marketplace datasets. Users interact via chat to specify city, price, bedrooms, and bathrooms—and receive a curated list of three homes for sale, complete with images and briefings.

🎥 Workflow in Action
Want to see this workflow in action? Play the video.

🔑 Key Features
- **AI-Powered Chat Trigger**: Instantly start conversations using LangChain’s Chat Trigger node.
- **Contextual Memory**: Retains up to 30 recent messages for coherent back-and-forth.
- **Bright Data Integration**: Dynamically filters "FOR_SALE" properties by city, price, bedrooms, and bathrooms (limit = 3).
- **Automated Snapshot Retrieval**: Polls for dataset readiness and fetches the full snapshot content.
- **HTML-Formatted Output**: Presents results as a `<ul>` of `<li>` items, embedding property images (see the sketch at the end of this description).

🚀 How It Works (Step-by-Step)

Prerequisites:
- n8n ≥ v1.0
- Community nodes: install n8n-nodes-brightdata (an unverified community node)
- API credentials: OpenAI, Bright Data
- Webhook endpoint to receive chat messages

Node Configuration:
1. Chat Trigger: Listens for incoming chat messages; shows a welcome screen.
2. Memory Buffer: Stores the last 30 messages for context.
3. OpenAI Chat Model: Uses GPT-4o-mini to interpret user intent.
4. Real Estate AI Agent: Orchestrates filtering logic, calls tools, and formats responses.
5. Bright Data "Filter Dataset" Tool: Applies user-defined filters plus homeStatus = FOR_SALE.
6. Wait & Recover Snapshot: Polls until the snapshot is ready, then fetches the content.
7. Get Snapshot Content: Converts raw JSON into a structured list.

Workflow Logic:
- The user sends search criteria → the Agent validates inputs.
- The Agent invokes "Filter Dataset" once all filters are present.
- Upon dataset readiness, the snapshot is retrieved and parsed.
- The final output is rendered as a bullet list with property images.

Testing & Optimization:
- Use the built-in Execute Workflow trigger for rapid dry runs.
- Inspect node outputs in n8n’s UI; adjust filter defaults or snapshot limits.
- Tune OpenAI model parameters (e.g., maxIterations) for faster responses.

Deployment & Monitoring:
- Activate the main workflow and expose its webhook URL.
- Monitor executions in the "Executions" panel; set up alerts for errors.
- Archive or duplicate workflows as needed; update credentials via the credential manager.

✅ Pre-requisites
- **Bright Data Account**: API key for marketplaceDataset.
- **OpenAI Account**: Access to the GPT-4o-mini model.
- **n8n Version**: v1.0 or later with community node support.
- **Permissions**: Webhook access, credential vault read/write.

👤 Who Is This For?
- Real-estate agencies and brokers seeking to automate client queries.
- PropTech startups building conversational search tools.
- Data analysts who want on-demand property snapshots without manual scraping.

📈 Benefits & Use Cases
- **Time Savings**: Replaces manual MLS searches with an AI-driven chat.
- **Scalability**: Serves multiple clients simultaneously via webchat or an embedded widget.
- **Consistency**: Always reports exactly three properties, ensuring concise results.
- **Engagement**: Visual listings with images boost user satisfaction and conversion.

Workflow created and verified by Miquel Colomer https://www.linkedin.com/in/miquelcolomersalas/ and N8nHackers https://n8nhackers.com
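As referenced under Key Features, here is an illustrative sketch of the HTML-formatted output: rendering the three listings as a `<ul>` of `<li>` items with embedded images. The property field names (`imgSrc`, `address`, `price`) are assumptions about the Bright Data snapshot schema.

```javascript
// Illustrative sketch of the HTML output step; field names are assumed.
const listings = $input.all().map((i) => i.json);
const html =
  '<ul>' +
  listings
    .map(
      (p) =>
        `<li><img src="${p.imgSrc}" alt="${p.address}" width="200"/><br/>` +
        `${p.address} - $${p.price}</li>`
    )
    .join('') +
  '</ul>';
return [{ json: { html } }];
```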
by Matteo
This n8n workflow automates the handling of incoming emails. It detects and filters out spam, searches a knowledge base (FAQ) stored in a Pinecone vector database, and sends a reply using Gmail — all powered by an AI model (GPT-4o mini).

How It Works

1. Receiving Emails: The Gmail Trigger node checks a Gmail inbox every hour. When a new email arrives, it starts the workflow.
2. Fetching Full Email Content: The get_message node retrieves all the details of the message: sender, subject, text, message ID, etc.
3. Spam Filtering: The Spam checker node uses GPT-4o mini to classify the email as either "spam" or "no spam". It detects not only classic spam but also automated messages (e.g. from Google or Microsoft). If marked as "spam", the workflow ends and nothing is processed.
4. Conditional Filter: The If node checks the spam result. Only "no spam" emails proceed to the AI Agent (a minimal sketch of this gate appears below).
5. AI-Based Reply: The AI Agent node generates a response based on the email content, a system prompt defining the assistant’s behavior (polite, professional, under the name "Total AI Solutions"), and information retrieved from the Pinecone Vector Store, which contains FAQs. The AI is instructed to always check the vector store before replying, and it prepares both the subject and the body of the reply.
6. Sending the Reply: The Gmail node sends the reply to the original sender, using the original email's ID to keep the thread intact.

Supporting nodes:
- Language Model: The OpenAI Chat Model node provides GPT-4o mini as the language engine for generating responses.
- Memory Support: The Simple Memory node maintains short-term context, helpful in multi-turn conversations.
- Knowledge Base (FAQ): The Pinecone Vector Store node connects to a Pinecone index (faqmattabott) containing vectorized FAQ content. Vectors are created using the Embeddings OpenAI node.
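As referenced in step 4, here is a minimal sketch of the conditional spam gate expressed as code. In the workflow this is an If node; the field name `classification` is an assumption about where the Spam checker stores its verdict.

```javascript
// Minimal sketch of the conditional filter: only "no spam" emails continue.
// Assumes the Spam checker stores its verdict in json.classification.
return $input.all().filter(
  (item) => (item.json.classification || '').toLowerCase() === 'no spam'
);
```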
by Jacob
Unlock the full potential of your YouTube channel with our powerful integration connecting Google Sheets and DeepSeek AI — designed to skyrocket your video visibility and engagement without manual hassle.

What this integration does for you:
- Automates video data management by pulling your YouTube URLs straight from Google Sheets — no more copy-pasting or manual tracking.
- Extracts your current titles and descriptions directly from YouTube, giving you a clear starting point.
- Generates 3 high-impact, SEO-optimized titles plus 1 compelling, conversion-focused description — crafted by DeepSeek’s AI to grab attention and rank higher (a sketch of the prompt assembly appears below).
- Updates your Google Sheet automatically with the new optimized titles and descriptions — keeping all your video info in one place, ready to publish.

Why it matters:
In the crowded world of YouTube, the right title and description can make the difference between millions of views and being lost in the noise. This integration takes the guesswork out of optimization, saving you time and boosting your channel’s growth with proven AI-driven content.

You need a Sheet with the following columns:
- Url
- Keyword
- Status
- Old Title
- New Title
- Old Description
- New Description

My contact: jacobmarketingservice@gmail.com
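Here is a hedged sketch of how the optimization prompt could be assembled from one sheet row. The column names match the sheet described above; the prompt wording itself is illustrative, not the template's actual prompt.

```javascript
// Hedged sketch: build the DeepSeek prompt from one sheet row.
// Column names match the sheet above; the prompt wording is illustrative.
const row = $input.first().json;
const prompt =
  `You are a YouTube SEO expert. For the video at ${row.Url}, ` +
  `targeting the keyword "${row.Keyword}":\n` +
  `Current title: ${row['Old Title']}\n` +
  `Current description: ${row['Old Description']}\n` +
  `Return 3 SEO-optimized titles and 1 conversion-focused description.`;
return [{ json: { prompt } }];
```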
by Gopal Debnath
💡 How It Works:
- ⏰ Triggers daily at 6:00 AM
- 📊 Fetches one random question from your Google Sheet
- 🧠 Formats the question, options, correct answer, and explanation (a sketch appears below)
- 📤 Sends it to:
  - 📧 Email
  - 💬 Telegram (via Bot)
  - 📱 WhatsApp/SMS (via Twilio)

🔧 What You Need to Configure:
- YOUR_GOOGLE_SHEET_ID → your sheet with columns: question, optionA, optionB, optionC, optionD, correctAnswer, explanation
- Email credentials (SMTP)
- Telegram Bot Token & Chat ID
- Twilio phone numbers and credentials
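As referenced above, here is one way the formatting step could look as an n8n Code node body. The column names match the sheet configuration listed above; the message layout is an assumption.

```javascript
// Sketch of the formatting step: one sheet row becomes the daily quiz message.
// Column names match the sheet configuration above.
const q = $input.first().json;
const message = [
  '🧠 Question of the day:',
  q.question,
  `A) ${q.optionA}`,
  `B) ${q.optionB}`,
  `C) ${q.optionC}`,
  `D) ${q.optionD}`,
  '',
  `✅ Answer: ${q.correctAnswer}`,
  `💡 ${q.explanation}`,
].join('\n');
return [{ json: { message } }];
```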
by Automate With Marc
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🧠 Market Research & Business Case Study Generator
Category: AI + Research | GPT + Perplexity | Business Strategy
Skill Level: Intermediate
Use Case: Market Research, Business Planning, Strategic Analysis

📌 Description:
This template automates the creation of comprehensive, data-backed business case studies—perfect for entrepreneurs, analysts, consultants, and market researchers. For more builds with step-by-step video tutorials, check out: https://www.youtube.com/@Automatewithmarc

Just send a simple message like: “Give me a market opportunity analysis of a bicycle rental business in North Africa.” The workflow does the rest: it scopes your research topic, performs live web research, and crafts a well-structured 1500-word business case study—all automatically saved to Google Docs.

🔧 How It Works:
1. 🟢 Chat Trigger: Start the workflow by sending a prompt via the built-in chat interface (Langchain Chat Trigger).
2. 🧭 Research Scope Definer (GPT-4o): Breaks the user input down into structured components like industry, geography, trends, and challenges (an illustrative output shape appears at the end of this description).
3. 🌐 Deep Research (Perplexity Sonar): Performs live research to retrieve relevant industry data, consumer trends, competitive insights, and more.
4. 📘 Business Case Writer (Claude Sonnet): Synthesizes the findings into a detailed case study with sections including:
   - Executive Summary
   - Market Overview
   - Opportunity Analysis
   - Competitive Landscape
   - Risks & Challenges
   - Strategic Recommendations
   - Conclusion
5. 📄 Google Docs Integration: The final output is appended to a connected Google Doc, so all your insights are neatly stored and ready to share.

🧰 Tools Used:
- OpenAI GPT-4o
- Perplexity Sonar Deep Research
- Anthropic Claude Sonnet
- Google Docs
- Chat Trigger

✅ Ideal For:
- Business consultants & strategy teams
- Market researchers & analysts
- Startup founders & product managers
- Educators & MBA students
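As referenced in step 2, here is an illustrative output shape for the Research Scope Definer, using the bicycle rental example above. Every field name and value is an assumption for illustration; the actual agent's schema depends on its system prompt.

```javascript
// Illustrative output shape for the Research Scope Definer (assumed, not confirmed).
return [{
  json: {
    topic: 'Bicycle rental business',
    industry: 'Micromobility / tourism services',
    geography: 'North Africa',
    trends: ['urban cycling growth', 'eco-tourism demand'],
    challenges: ['seasonality', 'infrastructure gaps'],
  },
}];
```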
by Darsheel
This n8n workflow acts as an AI-powered inbox assistant that automatically summarizes and classifies Gmail emails, prioritizes important messages, and sends a daily digest to Slack. It’s ideal for startup founders and small teams juggling investor intros, customer leads, and support queries — all from a busy Gmail inbox.

Each email is processed using ChatGPT to generate a concise summary, classify the message (e.g., Support, Investor, Spam), and determine its urgency. High- and medium-priority messages are forwarded to Slack instantly; lower-priority emails are logged to Google Sheets for review. A daily 7 PM digest summarizes the day’s most important messages.

💡 Use Cases
- Preventing missed investor or lead emails
- Lightweight CRM alternative using Google Sheets
- Slack summaries of critical Gmail activity

🔧 How It Works
1. The Gmail node fetches new messages
2. ChatGPT summarizes each one and extracts its urgency + type
3. High/medium urgency → sent to Slack + labeled in Gmail (a sketch of this routing appears below)
4. Low urgency → logged in Google Sheets
5. A Cron node triggers the daily 7 PM Slack summary

✅ Requirements
- OpenAI API key (GPT-4 or GPT-4o recommended)
- Gmail access with read and label permission
- Slack Bot Token or Webhook URL
- Google Sheets integration (optional)

🛠 Customization Ideas
- Replace Slack with Telegram or WhatsApp
- Route investor leads to Airtable or Notion
- Add multi-language support in the ChatGPT prompt
- Create weekly summaries via email
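As referenced in step 3, here is a hedged sketch of the urgency routing as an n8n Code node body. It assumes the ChatGPT step returns `json.urgency` as "high", "medium", or "low"; in the workflow itself an If/Switch node typically performs this split declaratively.

```javascript
// Hedged sketch of the urgency routing: flag which items go to Slack.
// Assumes the ChatGPT step returns json.urgency as "high" | "medium" | "low".
return $input.all().map((item) => {
  const urgency = (item.json.urgency || 'low').toLowerCase();
  return {
    json: { ...item.json, sendToSlack: urgency === 'high' || urgency === 'medium' },
  };
});
```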