by Yang
## Who is this for?

This workflow is perfect for content marketers, bloggers, SEO professionals, and virtual assistants who need to transform keyword research into complete blog posts without spending hours writing and formatting.

## What problem is this workflow solving?

Writing a blog post from scratch requires research, summarizing content, and structuring it into a polished article. This workflow automates that process by taking a single keyword, fetching related news articles, cleaning the data, and generating a professional blog draft automatically in Google Docs.

## What this workflow does

The workflow begins when a keyword is submitted through a form. It expands the keyword into trending suggestions using Dumpling AI Autocomplete, then fetches recent news articles with Dumpling AI Google News. Articles are filtered to only include those published within the last 1–2 days, then scraped and cleaned for quality text. The aggregated content is sent to OpenAI, which generates a polished blog draft with a clear title. Finally, the draft is saved directly into Google Docs for easy editing and publishing.

## Nodes Overview

- **Form Trigger – Form Submission (Keywords):** Starts the workflow when a keyword is submitted through a form.
- **HTTP Request – Dumpling AI Autocomplete:** Expands the keyword into multiple trending search suggestions.
- **Split Out – Split Autocomplete Suggestions:** Breaks the list of autocomplete suggestions into individual items for processing.
- **Loop – Loop Suggestions:** Iterates through each suggestion to process articles separately.
- **Wait – Delay Between Requests:** Adds a pause to avoid sending too many requests at once.
- **HTTP Request – Dumpling AI Google News:** Fetches recent news articles for each suggestion.
- **Split Out – Split News Articles:** Splits the returned news results into individual articles.
- **Code – Filter Articles (1–2 Days Old):** Keeps only articles that are between 1 and 2 days old for fresh content.
- **Limit – Limit Articles:** Restricts the workflow to the top 2 articles for each suggestion.
- **HTTP Request – Dumpling AI Scraper:** Scrapes and cleans the full text content from the article URLs.
- **Code – Clean & Prepare Article Content:** Removes clutter like links, images, and unrelated sections to ensure clean input.
- **Aggregate – Aggregate Articles:** Combines the cleaned article content into one dataset.
- **OpenAI – Generate Blog Draft:** Uses OpenAI to create a polished blog post draft and title in Markdown format.
- **Google Docs – Create Blog File:** Creates a new Google Doc with the generated blog title.
- **Google Docs – Insert Blog Content:** Inserts the full blog draft into the created document.

## 📝 Notes

- Set up Dumpling AI and generate your API key from: Dumpling AI
- OpenAI must be connected with an active API key for blog generation.
- Google Docs must be connected with write permissions to create and update blog posts.
- You can adjust the article filter (currently set to 1–2 days old) in the code node depending on your needs.
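The "Filter Articles (1–2 Days Old)" Code node can be sketched as below. The `publishedAt` field name is an assumption about the Dumpling AI Google News response, not its documented schema:

```javascript
// Keep only articles whose publication timestamp falls between 1 and 2 days
// before "now". Field name `publishedAt` is assumed, not confirmed.
function filterFreshArticles(articles, now = new Date()) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return articles.filter((a) => {
    const ageMs = now - new Date(a.publishedAt); // age in milliseconds
    return ageMs >= 1 * DAY_MS && ageMs <= 2 * DAY_MS;
  });
}
```

Widening or narrowing the window is a one-line change to the two bounds, which is what the note about adjusting the filter refers to.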
by Baris Cem Ant
## Workflow Objective

This n8n workflow automates the entire content creation process by monitoring specified RSS feeds for new articles. It then leverages Google Gemini AI to generate comprehensive, SEO-optimized blog posts inspired by these articles, creates unique cover images, and distributes the final content as a JSON file to stakeholders via Telegram. The primary goal is to automate the end-to-end content pipeline, saving significant time and ensuring a consistent output of high-quality content.

## Step-by-Step Breakdown

1. **Monitor News Sources (RSS Triggers):** The workflow is triggered periodically (e.g., hourly, weekly) by multiple RSS Feed nodes that monitor sources like "Search Engine Journal" and "Tech Crunch" for new publications.
2. **Prevent Duplicate Content (Deduplication):** For each new article fetched from the RSS feeds, the workflow checks an AWS DynamoDB database to see if the article's URL has been processed before. If the link already exists in the database, the process for that item is halted, and a debug notification is sent to Telegram via the "Telegram Debugger" node. This prevents the generation of duplicate content.
3. **AI-Powered Content Generation (Gemini Content Generation):** If the article is new, its link is passed to a Google Gemini node. Using a highly detailed and structured prompt, Gemini generates a complete blog post in a specific JSON format. This output includes a title, meta description, SEO-friendly slug, a descriptive prompt for generating a cover image, and the full markdown body of the article (including an introduction, subheadings, conclusion, FAQ section, etc.).
4. **Data Cleaning and Parsing (JSON Parser):** The raw text response from the AI is processed by a "Code" node. This custom script cleans the output—removing markdown code blocks, fixing potential syntax errors—and reliably parses it into a valid JSON object, ensuring the data is clean for subsequent steps.
5. **Image Generation and Cloud Storage:** The image_generation_prompt from the parsed JSON is sent to another Google Gemini node configured for image generation, creating a 1200x630 cover image for the blog post. The newly created image is renamed using the slug. Finally, the image is uploaded to a cloud storage service like Cloudflare R2. If the upload fails, an error message is sent to Telegram.
6. **Final Data Assembly and Distribution:** The generated text content is merged with the URL of the uploaded image to create the final, complete blog post data object. This entire data structure is converted into a JSON file, named using the format [slug].json. In the final step, this JSON file is sent as a document to the designated recipients via the Telegram nodes.

## Technologies and Services Used

- **Trigger:** RSS Feed Reader
- **Artificial Intelligence:** Google Gemini (for both text and image generation)
- **Database:** AWS DynamoDB (for content deduplication)
- **Cloud Storage:** Cloudflare R2 (S3-compatible)
- **Notification & Distribution:** Telegram
- **Data Processing:** n8n's native nodes (Merge, If, Set, Code)
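The "JSON Parser" Code node described in step 4 can be sketched as follows, assuming the model wraps its JSON in markdown code fences as stated above. This minimal version only strips the fences; the template's script also repairs syntax errors:

```javascript
// Strip markdown code fences (e.g. a leading "```json" and trailing "```")
// from the raw Gemini text response, then parse the remainder as JSON.
function parseGeminiJson(raw) {
  const cleaned = raw
    .replace(/`{3}json/gi, '') // drop opening fence with language tag
    .replace(/`{3}/g, '')      // drop any remaining bare fences
    .trim();
  return JSON.parse(cleaned);  // throws if the payload is still invalid
}
```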
by Lakindu Siriwardana
# 📚 Chat with Internal Documents (RAG AI Agent)

## ✅ Features

- Answers are grounded only in the provided documents
- Chat interface powered by LLM (Ollama)
- Retrieval-Augmented Generation (RAG) using Supabase Vector DB
- Multi-format file support (PDF, Excel, Google Docs, text files)
- Automated file ingestion from Google Drive
- Real-time document update handling
- Embedding generation via Ollama for semantic search
- Memory-enabled agent using PostgreSQL
- Custom tools for document lookup with context-aware chat

## ⚙️ How It Works

### 📥 Document Ingestion & Vectorization

1. Watches a Google Drive folder for new or updated files.
2. Deletes old vector entries for the file.
3. Uses conditional logic to extract content from PDFs, Excel, Docs, or text.
4. Summarizes and preprocesses content (if needed).
5. Splits and embeds the text via Ollama.
6. Stores embeddings in Supabase Vector DB.

### 💬 RAG Chat Agent

1. Chat is initiated via Webhook or the built-in chat interface.
2. User input is passed to the RAG Agent.
3. The agent queries the User_documents tool (Supabase vector store) using the Ollama model to fetch relevant content.
4. If context is found, it answers directly. Otherwise, it can call tools or request clarification.
5. Responses are returned to the user, with memory stored in PostgreSQL for continuity.

## 🛠 Supabase Database Configuration

1. Create a Supabase project at https://supabase.com and go to the SQL editor.
2. Create a documents table with the following schema:
   - id – int8
   - content – text
   - metadata – jsonb
   - embedding – vector
3. Generate an API Key
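The "splits the text" step in the ingestion pipeline can be sketched as a simple overlapping chunker. The chunk size and overlap below are illustrative defaults, not the template's actual splitter settings:

```javascript
// Break document text into fixed-size chunks with a small overlap so that
// sentences spanning a boundary still appear intact in at least one chunk.
// Each chunk is then embedded separately and stored in the vector DB.
function splitText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
    start += chunkSize - overlap;                // step forward, keep overlap
  }
  return chunks;
}
```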
by Basil Irfan
# Streamline restaurant reservations on WhatsApp

## Overview

This n8n template automates table bookings via WhatsApp, letting users request, confirm, and manage reservations without manual intervention. It leverages AI to parse messages, apply group discounts, check availability, and send natural confirmations—all within a single, reusable workflow.

## Key Features

- **AI‑powered parsing & responses:** Extracts guest name, date, time, and party size from free‑form WhatsApp messages and generates friendly confirmations.
- **Availability lookup:** Integrates with Google Sheets, Airtable, or MySQL to verify slot availability in real time.
- **Automated reminders:** Optionally schedules follow‑up messages 24 hours before the booking.
- **Modular design:** Swap triggers, storage, or messaging nodes to fit your infrastructure.

## How It Works

1. **Trigger:** Incoming WhatsApp message via WhatsApp Business Cloud API.
2. **Parse & Validate:** AI Function node extracts intent and guest details.
3. **Calculate Discount:** Custom Function node computes the group discount.
4. **Compose Confirmation:** OpenAI text model generates a personalized response.
5. **Send Message:** HTTP Request node posts back to WhatsApp.
6. **Optional Reminder:** Wait node + HTTP Request for pre‑booking follow‑up.

## Requirements

- WhatsApp Business Cloud API access
- n8n Cloud or self‑hosted instance
- Reservation datastore (Google Sheets, Airtable, MySQL)
- OpenAI key for AI text generation

## Customization Tips

- **Menu Attachments:** Add media nodes to send PDFs or images.
- **Alternate Slot Suggestions:** Use AI to propose new times if a slot is full.
- **Upsell Offers:** Follow up with add‑on suggestions (e.g., wine pairings).
- **Localization:** Extend prompts for multilingual support.
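The "Calculate Discount" Function node could look like the sketch below. The tier thresholds and percentages are illustrative assumptions only; the template leaves the actual discount policy up to you:

```javascript
// Hypothetical group-discount policy: larger parties get a bigger discount.
// These tiers are example values, not part of the template.
function groupDiscount(partySize) {
  if (partySize >= 10) return 0.15; // 15% off for large groups
  if (partySize >= 6) return 0.10;  // 10% off for medium groups
  return 0;                         // no discount for small parties
}
```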
by Max Mitcham
An intelligent automation workflow that processes website demo requests, qualifies leads using AI-powered analysis, and automatically nurtures prospects through personalized follow-up sequences to maximize conversion rates.

## Overview

This workflow transforms raw website leads into qualified prospects through intelligent filtering, enrichment, and personalized nurturing. It combines AI-powered qualification with automated follow-up to ensure high-quality leads receive immediate attention while nurturing those needing additional touchpoints.

## 🔄 Workflow Process

1. **Entry Point – Webhook:** Website form submission capture. Receives demo requests from website forms in real time, capturing lead data including LinkedIn URL, email, use case, and referral source.
2. **Initial Routing Filter:** Source-based lead classification. Filters out low-quality leads from "lead_capture_box" sources and routes qualified submissions to the enrichment process.
3. **Lead Enrichment:** Comprehensive data enhancement. Enriches LinkedIn profile data via the Trigify API and gathers additional company and professional intelligence.
4. **AI Qualification Engine:** Intelligent prospect evaluation. Uses Claude AI to assess lead quality across multiple criteria: B2B company validation, geographic filtering (US, UK, Europe, Australia), senior-level job titles or strategic keywords, and current employment verification.
5. **Booking Verification Check:** Conversion status validation. Checks the Cal.com API to verify demo scheduling, routing booked leads to completion and non-booked leads to nurturing.
6. **AI-Powered Follow-up Research:** Personalized nurturing preparation. Researches the prospect's company using AI and web search, then generates personalized follow-up messaging based on use case and company context.
7. **Email Campaign Integration:** Automated nurturing execution. Adds qualified, non-booking leads to Instantly.ai email campaigns, including personalized research for tailored outreach.

## 🛠️ Technology Stack

- **n8n:** Workflow orchestration
- **Trigify API:** Lead enrichment
- **Claude AI:** Qualification and personalized research
- **Clay:** CRM integration
- **Cal.com API:** Booking verification
- **Instantly.ai:** Email campaign automation

## ✨ Key Features

- Real-time lead processing and AI-powered qualification
- Geographic and demographic filtering for market focus
- Automated booking verification and conversion tracking
- Personalized follow-up research and content generation
- Multi-platform integration for seamless lead management

## 🎯 Ideal Use Cases

Perfect for B2B companies with demo-driven sales processes:

- SaaS companies requiring product demonstrations
- B2B service providers needing qualified prospect identification
- Sales teams managing high-volume inbound lead qualification
- Organizations with international markets requiring geographic focus

## 📈 Business Impact

Transform website visitors into qualified sales opportunities:

- **Lead Quality Enhancement:** AI filtering ensures only qualified prospects reach sales
- **Conversion Optimization:** Systematic follow-up increases demo booking rates
- **Sales Efficiency:** Automated qualification frees teams for high-value activities
- **Personalized Engagement:** Research-driven follow-up increases response rates

## 💡 Strategic Advantage

This workflow creates a sophisticated qualification funnel that combines automation with personalization. By using AI-powered assessment and research-driven follow-up, it ensures qualified prospects receive appropriate attention while preventing resource waste on unqualified leads.

The system maximizes the value extracted from every website visitor by focusing sales efforts on the highest-probability opportunities while automatically nurturing prospects who need additional touchpoints to convert.
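The template delegates qualification to Claude AI; the sketch below is only a deterministic approximation of the stated criteria (allowed regions, senior titles, B2B, current employment) for illustration. All field names and the keyword list are assumptions about the enriched payload:

```javascript
// Illustrative stand-in for the AI Qualification Engine's criteria.
// The real workflow uses a Claude prompt, not a hard-coded rule set.
const ALLOWED_REGIONS = ['US', 'UK', 'Europe', 'Australia'];
const SENIOR_KEYWORDS = ['head', 'director', 'vp', 'chief', 'founder'];

function isQualified(lead) {
  const regionOk = ALLOWED_REGIONS.includes(lead.region);
  const title = (lead.jobTitle || '').toLowerCase();
  const titleOk = SENIOR_KEYWORDS.some((k) => title.includes(k));
  return Boolean(lead.isB2B && lead.currentlyEmployed && regionOk && titleOk);
}
```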
by Abdullah Alshiekh
## 🧩 What Problem Does It Solve?

Manually reviewing CVs from Telegram job applicants is slow, error-prone, and often inconsistent. This workflow automates the collection, analysis, and storage of CVs — saving HR teams hours while ensuring structured, high-quality candidate data for fast decision-making.

## 📝 Description

This workflow is built to help HR teams collect and qualify CVs sent over Telegram. It verifies that a candidate submits a valid PDF, stores the file securely, extracts key information using AI, and logs everything neatly in Google Sheets.

## 🎯 Key Advantages for HR Teams

- ✅ Automatically filters out non-PDF and invalid messages
- ✅ Uses AI to extract clean, structured candidate data
- ✅ Links CV files to Google Sheets for easy HR access
- ✅ Eliminates manual data entry from physical CVs
- ✅ Provides a scalable CV pipeline via Telegram

## 🛠️ Features

- Telegram bot for CV collection
- MIME-type PDF validation
- Google Drive integration for secure storage
- Text extraction from PDFs
- Gemini AI-powered CV parsing
- Google Sheets integration for candidate logging
- Merge logic to synchronize multiple streams
- JSON-safe parsing for AI output
- Automatic job title and experience categorization
- Duplicate prevention through name-based matching

## 🔧 Requirements

- A Telegram bot token
- Google Drive API credentials
- Google Sheets API credentials
- Gemini API key (or another LLM)
- n8n instance with relevant credentials configured
- Candidates sending CVs in PDF format

## 🧠 Use Case Examples

- Recruitment Agencies: Automate pre-screening and reduce manual effort
- Small Startups: Collect high-quality CVs without paying for an ATS
- Internship Programs: Quickly categorize applicants by experience
- Remote Hiring: Accept global CVs via Telegram from mobile users
- Freelancer Portals: Auto-log contractor profiles from incoming resumes

## ⚙️ Configuration Tips

1. Set up Telegram Bot API credentials
2. Configure Google Drive API access
3. Configure Google Sheets API access
4. Configure Google Gemini/PaLM API access
5. Replace all placeholder IDs with your actual values

If you need any help, Get in Touch.
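The MIME-type PDF validation step can be sketched like this, using the Telegram Bot API's message shape (documents arrive as a `document` object with an optional `mime_type` field):

```javascript
// Accept only Telegram messages that carry a PDF document attachment.
// Messages with text only, photos, or non-PDF documents are rejected.
function isValidCv(message) {
  const doc = message && message.document;
  return Boolean(doc && doc.mime_type === 'application/pdf');
}
```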
by Alex Pekler
## What this template does

Automatically sends WhatsApp reminders for upcoming appointments from your Google Calendar using MoltFlow (https://molt.waiflow.app). Perfect for clinics, salons, consultants, and any appointment-based business.

## How it works

1. Runs every hour and checks your Google Calendar for the next 24 hours
2. Finds appointments with client phone numbers in the event description
3. Extracts client name and appointment details
4. Sends a personalized WhatsApp reminder asking for confirmation
5. Logs the results for tracking

## Set up steps

1. Create a MoltFlow account (https://molt.waiflow.app) and connect your WhatsApp number
2. Generate an API key in MoltFlow (Sessions page, API Keys tab)
3. Connect your Google Calendar OAuth2 credential
4. Set your calendar ID (default: primary)
5. Set YOUR_SESSION_ID in the Format Reminders code node
6. Add your MoltFlow API key as an HTTP Header Auth credential (Header Name: X-API-Key)
7. Add client phone numbers to calendar event descriptions in the format: Phone: 1234567890

## Prerequisites

- MoltFlow account with an active WhatsApp session
- Google Calendar with client phone numbers in event descriptions (add "Phone: 1234567890" to the event description field)
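The phone-extraction step in the Format Reminders code node could look like the sketch below, matching the documented "Phone: 1234567890" convention. A real node would also assemble the message payload; this shows the extraction only:

```javascript
// Pull a phone number out of a calendar event description that follows the
// "Phone: 1234567890" convention. Returns null when no number is found.
function extractPhone(description) {
  const match = /Phone:\s*(\+?\d{7,15})/.exec(description || '');
  return match ? match[1] : null;
}
```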
by Stephen Anindo
## How it works

- Retrieves workflows directly from an n8n instance using the n8n API
- Dynamically generates a form to select which workflows to import
- Supports both fixed instance configuration and dynamic source/target selection
- Formats workflows safely and creates them in the target instance

You stay in control at every step: nothing is imported unless you explicitly select it.

## Set up steps

Estimated setup time: 2–5 minutes

- **Simple mode:** Add API credentials for the source and target n8n instances and run the workflow.
- **Dynamic mode (optional):** Connect a database (e.g. Notion or Supabase) containing your n8n instance URLs and API keys, then select the source and target instances via the form.

No manual exports, no bulk imports, and no additional configuration required.
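Listing workflows from the source instance goes through n8n's public REST API (`GET /api/v1/workflows`, authenticated with the `X-N8N-API-KEY` header). A minimal sketch of building that request; the actual HTTP call, error handling, and pagination are omitted:

```javascript
// Build the request needed to list workflows from an n8n instance.
// Trailing slashes on the base URL are normalized away.
function buildListRequest(baseUrl, apiKey) {
  return {
    url: `${baseUrl.replace(/\/$/, '')}/api/v1/workflows`,
    options: {
      method: 'GET',
      headers: { 'X-N8N-API-KEY': apiKey, Accept: 'application/json' },
    },
  };
}
```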
by ben daamer
## How it works

- Config node for channel, keyword, limit, and empty-results message
- Fetches tenders from the BOAMP API (public data, no API key)
- Formats a message with title, date, buyer, and link, or sends a friendly "no results" message
- Posts to your Slack channel

## Set up steps

- Add Slack credential (~2 min)
- Edit the Config node (channel, keyword, limit, EMPTY_MESSAGE)
- Activate the workflow
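The formatting step can be sketched as below. The tender field names (`title`, `date`, `buyer`, `link`) are assumptions about the shape after parsing the BOAMP response, and `emptyMessage` stands in for the Config node's EMPTY_MESSAGE value:

```javascript
// Turn a list of tenders into one Slack-ready text block, or fall back to
// the configured empty-results message when nothing matched the keyword.
function formatTenderMessage(tenders, emptyMessage) {
  if (!tenders.length) return emptyMessage;
  return tenders
    .map((t) => `• ${t.title} (${t.date}) - ${t.buyer}\n${t.link}`)
    .join('\n\n');
}
```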
by Thibaut
## What this workflow does

This workflow automatically exports your Binance spot portfolio positions to Google Sheets on a daily basis, enabling you to build comprehensive tracking dashboards and analyze your crypto holdings over time.

## Who is this for?

- Crypto investors wanting to track portfolio evolution
- Traders needing historical data for analysis
- Anyone wanting automated portfolio snapshots without manual exports

## How it works

1. Schedule Trigger – Runs daily at a configurable time
2. Binance API – Fetches current spot wallet balances
3. Data Transformation – Filters out zero balances, keeping only held assets
4. Google Sheets – Clears the sheet and appends current assets

## Setup Instructions

### Prerequisites

- Binance account with API access enabled
- Google account with Google Sheets
- n8n instance (cloud or self-hosted)

### Step 1: Binance API Configuration

1. Go to https://www.binance.com/en/my/settings/api-management
2. Create a new API key
3. IMPORTANT: Enable ONLY "Read" permissions – DO NOT enable trading or withdrawals
4. Whitelist your server IP (recommended for security)
5. Save your API Key and Secret Key securely

### Step 2: Google Sheets Setup

1. Create a new Google Sheet with headers matching your exported fields (e.g., Asset | Balance)
2. In n8n, connect your Google account via OAuth2
3. Note your spreadsheet ID from the URL

### Step 3: Import & Configure

1. Import this workflow into n8n
2. Add your Binance credentials (API Key + Secret)
3. Configure the Google Sheets node with your spreadsheet ID and sheet name
4. Adjust the schedule trigger to your preferred time (default: daily)

## Customization

**Track historical positions:** By default, this workflow clears the sheet before each export, giving you a real-time snapshot. To keep position history:

- Remove the "Clear Sheet" action
- Add a timestamp column to track when each export occurred
- Use Google Sheets or Looker Studio to visualize daily evolution

**Add price data:** Extend the workflow by adding an HTTP Request node to fetch current prices from the Binance API (/api/v3/ticker/price) and calculate portfolio value.
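The "Data Transformation" step can be sketched as below. The `{ asset, free, locked }` shape matches the `balances` array returned by Binance's spot account endpoint, where amounts arrive as strings:

```javascript
// Keep only assets that are actually held: sum free + locked amounts and
// drop anything with a zero total.
function nonZeroBalances(balances) {
  return balances
    .map((b) => ({
      asset: b.asset,
      total: parseFloat(b.free) + parseFloat(b.locked),
    }))
    .filter((b) => b.total > 0);
}
```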
by Hashir Bin Waseem
# AI-powered Meeting Summaries and Action Items to Slack and ClickUp

## How it Works

1. **Webhook Trigger:** The workflow starts when Fireflies notifies that a transcription has finished.
2. **Transcript Retrieval:** The transcript is pulled from Fireflies based on the meeting ID.
3. **Pre-processing:** The transcript is split into sentences and then aggregated into a raw text block.
4. **AI Summarization:** The aggregated transcript is sent to Google Gemini, which generates a short summary and a structured list of action items.
5. **Post-processing:** The AI response is cleaned and formatted into JSON. Action items are mapped to titles and descriptions.
6. **Distribution:** The meeting summary is posted to Slack. Action items are created as tasks in ClickUp.

## Use Case

This workflow is designed for teams that want to reduce the manual effort of writing meeting notes and extracting action items.

- Automatically generate a clear and concise meeting summary
- Share the summary instantly with your team on Slack
- Ensure action items are not lost by automatically creating tasks in ClickUp
- Ideal for distributed teams, project managers, and product teams managing recurring meetings

## Requirements

- **n8n instance** set up and running
- **Fireflies.ai account** with API access to meeting transcripts
- **Google Gemini API (via PaLM credentials)** for AI-powered summarization
- **Slack account** with OAuth2 credentials connected in n8n
- **ClickUp account** with OAuth2 credentials connected in n8n
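The mapping in the post-processing step can be sketched as below. The `{ action, owner }` item shape is an assumption about the JSON format requested from Gemini, not something the template guarantees:

```javascript
// Map parsed AI action items to ClickUp-ready task fields: the action text
// becomes the task name, and the owner (when present) goes into the
// description.
function toClickUpTasks(actionItems) {
  return actionItems.map((item) => ({
    name: item.action,
    description: item.owner ? `Owner: ${item.owner}` : 'Unassigned',
  }));
}
```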
by David Olusola
# 🌤️ Weather Alerts via SMS (OpenWeather + Twilio)

This workflow checks the current weather and forecast every 6 hours using the OpenWeather API, and automatically sends an SMS alert via Twilio if severe conditions are detected. It's great for keeping teams, family, or field workers updated about extreme heat, storms, or snow.

## ⚙️ How It Works

1. **Check Every 6 Hours:** A Cron node triggers the workflow every 6 hours. The frequency can be adjusted based on your needs.
2. **Fetch Current Weather & Forecast:** Calls the OpenWeather API for both current conditions and the 24-hour forecast. Retrieves temperature, precipitation, wind speed, and weather descriptions.
3. **Analyze Weather Data:** A Code node normalizes the weather data and detects alert conditions such as extreme heat (≥95°F), extreme cold (≤20°F), severe storms (thunderstorm, tornado), rain or snow, and high winds (≥25 mph). It also checks the upcoming forecast for severe weather in the next 24 hours.
4. **Alert Needed?** If no severe conditions are found, the workflow stops. If alerts exist, it proceeds to SMS formatting.
5. **Format SMS Alert:** Prepares a compact, clear SMS message with the current conditions, detected alerts, and a forecast preview for the next 3 hours. Example:

        🌤️ WEATHER ALERT - New York, US
        NOW: 98°F, clear sky
        🚨 ALERTS (1):
        🔥 EXTREME HEAT: 98°F (feels like 103°F)
        📅 NEXT 3 HOURS:
        1 PM: 99°F, sunny
        2 PM: 100°F, sunny
        3 PM: 100°F, partly cloudy

6. **Send Weather SMS:** Twilio sends the SMS to the configured phone numbers. Multiple recipients are supported.
7. **Log Alert Sent:** Logs the alert type, urgency, and timestamp. Useful for auditing and troubleshooting.

## 🛠️ Setup Steps

### 1. OpenWeather API

- Sign up at openweathermap.org.
- Get a free API key (1000 calls/day).
- Update the API key and location (city or lat/long) in the HTTP Request nodes.

### 2. Twilio Setup

- Sign up at twilio.com.
- Get your Account SID & Auth Token.
- Buy a Twilio phone number (≈ $1/month).
- Add Twilio credentials in n8n.

### 3. Recipients

- In the Send Weather SMS node, update phone numbers (format: +1234567890).
- You can add multiple recipients.

### 4. Customize Alert Conditions

- Default alerts: rain, snow, storms, extreme temps, high winds.
- Modify the Analyze Weather Data node to fine-tune conditions.

## 📲 Example SMS Output

    🌤️ WEATHER ALERT - New York, US
    NOW: 35°F, light snow
    🚨 ALERTS (2):
    ❄️ SNOW ALERT: light snow
    💨 HIGH WINDS: 28 mph
    📅 NEXT 3 HOURS:
    10 AM: 34°F, snow
    11 AM: 33°F, snow
    12 PM: 32°F, overcast
    ⏰ Alert sent: 08/29/2025, 09:00 AM

⚡ With this workflow, you'll always know when bad weather is on the way — keeping you, your team, or your customers safe and informed.
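The alert-detection logic in the Analyze Weather Data node can be sketched with the thresholds stated above (°F and mph, matching the template's defaults). The input shape is a simplified stand-in for the normalized OpenWeather data:

```javascript
// Check normalized weather data against the template's default alert
// conditions and return the list of triggered alert labels.
function detectAlerts(w) {
  const alerts = [];
  if (w.tempF >= 95) alerts.push('EXTREME HEAT');
  if (w.tempF <= 20) alerts.push('EXTREME COLD');
  if (['thunderstorm', 'tornado'].includes(w.condition)) alerts.push('SEVERE STORM');
  if (['rain', 'snow'].includes(w.condition)) alerts.push('PRECIPITATION');
  if (w.windMph >= 25) alerts.push('HIGH WINDS');
  return alerts;
}
```

Tuning the alerts, as suggested in setup step 4, means editing these thresholds and condition lists.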