by Mutasem
**Use case**

n8n workflows can get out of hand when you automate as much as we do at n8n. We needed a place to document them and keep track of who owns and maintains each one. To facilitate this, we use this n8n workflow to automatically sync any workflow tagged `sync-to-notion` directly to a Notion database.

**How to set up**

1. Add your n8n API credentials.
2. Add your Notion credentials.
3. Create a Notion database with the fields `env id` (text), `isActive (dev)` (boolean), `URL (dev)` (URL), `Workflow created at` (date), `Workflow updated at` (date), and `Error workflow setup` (boolean). Make sure the page is connected to your Notion integration.
4. Add the tag `sync-to-notion` to the workflows you want to sync.
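For reference, here is a minimal sketch (not the template itself) of how the tag filtering could be done against the n8n public API. The instance URL is a placeholder and the exact response shape may vary by n8n version, so treat both as assumptions to verify.

```javascript
// Sketch only: list workflows via the n8n public API and keep the tagged ones.
const N8N_URL = 'https://your-n8n-instance.example.com'; // hypothetical URL
const API_KEY = process.env.N8N_API_KEY;

async function getWorkflowsToSync() {
  const res = await fetch(`${N8N_URL}/api/v1/workflows`, {
    headers: { 'X-N8N-API-KEY': API_KEY },
  });
  const { data } = await res.json();

  // Keep only workflows carrying the sync-to-notion tag
  return data.filter((wf) =>
    (wf.tags ?? []).some((tag) => tag.name === 'sync-to-notion'),
  );
}
```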
by Viktor Klepikovskyi
**Base64 Encode Multiple Binary Files with a Code Node**

This template demonstrates how to handle multiple binary files in n8n by using a Code node to convert them into Base64-encoded strings. It's particularly useful when an API requires file uploads in this format and the standard 'Extract From File' node is not sufficient for batch processing. The workflow starts by downloading a ZIP file, unzipping it to get multiple binary files, and then uses a Code node with custom JavaScript to encode each file individually.

**Instructions**

1. Download and import this template into your n8n instance.
2. Run the workflow once to see how it downloads, unzips, and then encodes multiple files.
3. Modify the 'HTTP Request' node to download your own binary file or a ZIP file containing multiple files.
4. Update the 'Code' node if you need to adjust the output format or file paths.
5. Use the output of the 'Code' node in a subsequent node, such as another 'HTTP Request', to send the Base64-encoded files to your desired API.

A link to the full blog post is available here.
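A minimal sketch of what such a Code node might look like, assuming the default "Run Once for All Items" mode and that each unzipped file arrives as a separate binary property on the incoming item; the property and output field names are illustrative, not the template's exact code.

```javascript
// Base64-encode every binary property on each incoming item.
const results = [];

for (let i = 0; i < items.length; i++) {
  const binaryKeys = Object.keys(items[i].binary ?? {});
  const files = [];

  for (const key of binaryKeys) {
    // getBinaryDataBuffer resolves the file even when n8n stores binaries on disk
    const buffer = await this.helpers.getBinaryDataBuffer(i, key);
    files.push({
      fileName: items[i].binary[key].fileName,
      base64: buffer.toString('base64'),
    });
  }

  results.push({ json: { files } });
}

return results;
```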
by Thomas
🧠 Writes original, thought-provoking blog posts using AI
🕓 Runs every 12 hours automatically
✍️ Publishes directly to Ghost blog with title, tags, and SEO meta

**🔧 Features**

- Scheduled every 12 hours
- OpenAI generates a multi-part blog post with metadata
- Markdown-compatible output (no HTML)
- Automatically published to Ghost CMS using the authenticated Admin API (🔐 no hardcoded keys)
- Fully modular and general-purpose — edit the prompt for any blog theme!

**⚙️ Nodes Overview**

| Step | Node Type | Purpose |
| --- | --- | --- |
| 1️⃣ | Schedule Trigger | Runs every 12 hours |
| 2️⃣ | OpenAI | Generates blog post + meta info |
| 3️⃣ | Code | Extracts content, title, meta, and tags |
| 4️⃣ | Code | Formats content as Ghost mobiledoc payload |
| 5️⃣ | HTTP Request | Publishes post to Ghost via Admin API |

**📝 OpenAI Prompt (Generalized)**

Write a high-quality blog post on a creative or thought-provoking topic. The tone should be engaging and immersive. Length: 2–4 paragraphs. Then add a brief paragraph offering an alternative perspective or logical counterpoint. Finally, generate: a blog post title, a meta description, and 5 tags.

**🔐 Notes**

- ✅ No hardcoded API keys
- 🛠️ Ghost Admin API credentials must be set using the Credential Manager
- 📌 Prompt and Ghost URL are both easily customizable
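As an illustration of step 4️⃣, here is a rough sketch of a Code node that wraps the generated Markdown in a Ghost mobiledoc payload. The incoming field names are assumptions and should be matched to whatever your extraction node actually outputs.

```javascript
// Assumes the previous node outputs { title, content, meta_description, tags }.
const { title, content, meta_description, tags } = $input.first().json;

// Ghost accepts Markdown wrapped in a mobiledoc "markdown" card
const mobiledoc = JSON.stringify({
  version: '0.3.1',
  atoms: [],
  cards: [['markdown', { markdown: content }]],
  markups: [],
  sections: [[10, 0]],
});

// Shape expected by the Ghost Admin API posts endpoint
return [
  {
    json: {
      posts: [
        { title, mobiledoc, meta_description, tags, status: 'published' },
      ],
    },
  },
];
```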
by Daniel Lianes
**Automated Daily AI Summaries from WhatsApp Groups using a Custom AI Agent**

Transform your WhatsApp group conversations into actionable business intelligence through automated AI analysis and daily reporting. This workflow eliminates manual conversation monitoring by capturing messages in real time, processing voice notes, and delivering structured insights directly to your team.

**Overview**

This workflow provides complete conversation-intelligence automation, from message capture to insight delivery. It eliminates manual monitoring, analysis, and reporting by combining Evolution API integration, OpenAI transcription, and advanced LLM analysis for hands-free business intelligence that scales your team's awareness of important discussions.

Core function: autonomous conversation analysis that transforms WhatsApp group chatter into structured business insights with zero manual intervention, maintaining consistent daily reporting while capturing emerging opportunities and trends before your competition.

**Key Capabilities**

- **Real-time message capture**: monitors multiple WhatsApp groups simultaneously with instant processing and smart filtering
- **Voice message transcription**: automatic conversion of audio messages to searchable text using OpenAI Whisper
- **AI-powered insight extraction**: advanced LLM analysis identifies trends, opportunities, and actionable information while filtering noise
- **Automated daily reporting**: scheduled intelligence summaries delivered directly to your team via WhatsApp
- **Multi-group organization**: separate tracking and analysis for different communities with unified reporting
- **Smart content filtering**: AI agent trained to focus on business-relevant discussions (AI, automation, tech trends, opportunities)

**Tools Used**

- **n8n**: workflow orchestration managing the entire intelligence pipeline from capture to delivery
- **Evolution API**: WhatsApp Business API integration for real-time message monitoring and sending
- **OpenAI Whisper**: voice message transcription ensuring no important audio content is missed
- **OpenRouter / GPT-4.1**: advanced AI analysis for intelligent insight extraction and content filtering
- **Google Sheets**: organized message storage with timestamps and metadata for historical analysis
- **Custom AI Agent**: "WhatsOn", a specialized business-intelligence detective for tech and automation insights

**How to Install**

1. **Import the workflow**: download the JSON file and import it into your n8n instance.
2. **Configure Evolution API**: set up the WhatsApp integration and webhook endpoints for message capture.
3. **Set up API credentials**: add OpenAI, OpenRouter, and Google Sheets credentials in n8n.
4. **Configure groups**: update the group IDs in the "Set Info" node with your monitored groups.
5. **Set up Google Sheets**: create an organized spreadsheet with a separate tab for each group.
6. **Configure the schedule**: set your preferred daily summary delivery time.
7. **Test execution**: run a manual test to verify that message capture and AI analysis work correctly.

**Use Cases**

- **Business intelligence automation**: stay informed about industry discussions without manual monitoring
- **Opportunity detection**: identify emerging trends, tools, and business opportunities in real time
- **Team knowledge sharing**: automated distribution of relevant insights from multiple communities
- **Competitive intelligence**: monitor industry discussions to stay ahead of market developments
- **Community management**: track engagement patterns and important conversations across groups
- **Voice message processing**: ensure audio-based insights aren't lost in team communications
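As a companion to the multi-fragment delivery feature described under Advanced Features below, here is a minimal sketch of how a Code node could split a long summary into WhatsApp-sized chunks. The 3000-character limit and the `summary` field name are assumptions, not values taken from the original workflow.

```javascript
// Split a long AI summary into fragments on paragraph boundaries.
const summary = $input.first().json.summary ?? '';
const MAX_CHARS = 3000;

const fragments = [];
let current = '';

for (const paragraph of summary.split('\n\n')) {
  if (current && (current + '\n\n' + paragraph).length > MAX_CHARS) {
    fragments.push(current.trim());
    current = paragraph;
  } else {
    current = current ? `${current}\n\n${paragraph}` : paragraph;
  }
}
if (current) fragments.push(current.trim());

// One item per fragment; a downstream Evolution API node sends each in order
return fragments.map((text, index) => ({ json: { text, index } }));
```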
**Setup Requirements**

- **Evolution API account**: WhatsApp Business integration with webhook capabilities
- **OpenAI API**: voice transcription access through the Whisper API
- **OpenRouter account**: access to GPT-4.1 for advanced conversation analysis
- **Google Sheets**: message storage and organization with proper permissions configured
- **WhatsApp groups**: access to business or professional groups with relevant discussions

Total setup time: 15-20 minutes once all API accounts are properly configured.

**How to Customize**

- **Analysis focus**: modify the AI agent's system prompt to target your industry or specific topics. Adjust keyword priorities, conversation themes, or insight categories based on your business needs.
- **Group management**: add additional groups by extending the Switch node logic, creating new Google Sheets tabs, and updating the group ID variables. Scale from 3 groups to unlimited group monitoring.
- **Delivery schedule**: change summary frequency from daily to weekly, multiple times per day, or custom schedules. Add multiple delivery destinations for different team segments.
- **AI intelligence**: customize the "WhatsOn" agent personality, adjust insight priorities, modify filtering criteria, or add sentiment analysis for deeper conversation understanding.
- **Storage & organization**: modify the Google Sheets structure, add custom metadata fields, integrate with other databases, or connect to business-intelligence dashboards for advanced analytics.

**Advanced Features**

- **Smart voice processing**: automatically transcribes voice messages to text using OpenAI's Whisper API, ensuring critical audio-based discussions are captured and analyzed alongside text conversations.
- **Intelligent content filtering**: the AI agent is specifically trained to identify valuable business insights while filtering out casual conversation, so your daily summaries focus on actionable information that drives decisions.
- **Multi-fragment delivery system**: large intelligence summaries are automatically broken into properly formatted WhatsApp messages with natural pacing to avoid delivery issues and improve readability (see the splitting sketch above).
- **Historical analysis capability**: all conversations are stored with full metadata in Google Sheets, enabling historical trend analysis, keyword tracking, and long-term pattern recognition for strategic planning.

Ready to transform group conversations into competitive intelligence? This template converts casual WhatsApp discussions into structured business insights delivered automatically to your team, ensuring you never miss important industry developments or opportunities.

**Google Sheets Template**

The workflow includes a pre-configured structure for tracking:

- Message timestamps and sender information
- Full conversation content with voice transcriptions
- Group-specific organization and categorization
- Daily summary delivery logs and performance metrics

**Was this helpful? Let me know!**

I truly hope this WhatsApp intelligence system helps streamline your team's awareness of important conversations. Your feedback helps me create better automation resources for the n8n community.

**Ready to Build Something Great?**

If you're looking to take your n8n skills or business automation to the next level, I can help.

🎓 **n8n Coaching**: Want to become an n8n pro? I offer one-on-one coaching sessions to help you master workflows, tackle specific problems, and build with confidence.
➡️ Book a Coaching Session

💼 **n8n Consulting**: Have a complex project, an integration challenge, or need a custom workflow built for your business? Let's work together to create a powerful automation solution.
➡️ Inquire About Consulting Services

**Stay Updated on Automation**

For more content automation strategies, AI workflow tips, and business automation insights, follow me on LinkedIn.

Happy Automating!
Daniel Lianes
by scrapeless official
**Brief Overview**

This automation template helps you track the latest real estate listings from the LoopNet platform. By using Scrapeless to scrape property listings, n8n to orchestrate the workflow, and Google Sheets to store the results, you can build a real estate data pipeline that runs automatically on a weekly schedule.

**How It Works**

- **Trigger on a schedule**: the workflow runs automatically every week (can be adjusted to every 6 hours, daily, etc.).
- **Scrape property listings**: Scrapeless crawls the LoopNet real estate website and returns structured Markdown data.
- **Extract & parse content**: JavaScript nodes use regex to parse property titles, links, sizes, and year built from the Markdown (see the parsing sketch at the end of this description).
- **Flatten data**: each property listing becomes a single row with structured fields.
- **Save to Google Sheets**: property data is appended to your Google Sheet for easy analysis, sharing, and reporting.

**Features**

- No-code, automated real estate listing scraper.
- Scrapes and structures the latest commercial property listings (for sale or lease).
- Saves structured listing data directly to Google Sheets.
- Fully automated, scheduled scraping; no manual scraping is required.
- Extensible: add filters, deduplication, Slack/email notifications, or multi-city scraping.

**Requirements**

- **Scrapeless API key**: sign up on the Scrapeless Dashboard, go to Settings → API Key Management → Create API Key, then copy the generated key.
- **n8n instance**: self-hosted or an n8n.cloud account.
- **Google account**: for Google Sheets API access.
- **Target site**: this template is configured for LoopNet real estate listings but can be adapted for other property platforms such as Crexi.

**Installation**

1. Deploy n8n on your preferred platform.
2. Install the Scrapeless node from the community marketplace.
3. Import this workflow JSON file into your n8n workspace.
4. Create and add your Scrapeless API key in n8n's credential manager.
5. Connect your Google Sheets account in n8n.
6. Update the target LoopNet URL and Google Sheet details.

**Usage**

This automated real estate scraper is ideal for:

| Industry / Role | Use Case |
| --- | --- |
| Real Estate Agencies | Monitor new commercial properties and streamline lead generation. |
| Market Research Teams | Track market dynamics and property availability in real time. |
| BI/Data Analysts | Automate data collection for dashboards and market insights. |
| Investors | Keep tabs on the latest commercial property opportunities. |
| Automation Enthusiasts | Example use case for learning web scraping + automation. |

**Output Example**
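To illustrate the "Extract & parse content" step above, here is a minimal sketch of a Code node that pulls listing titles and links out of the returned Markdown. The `markdown` field name and the regex pattern are assumptions, since the exact output depends on your Scrapeless crawl settings.

```javascript
// Parse "[Title](https://www.loopnet.com/Listing/...)" links out of the Markdown.
const markdown = $input.first().json.markdown ?? '';

const linkPattern = /\[([^\]]+)\]\((https:\/\/www\.loopnet\.com\/Listing\/[^)]+)\)/g;

const listings = [];
let match;
while ((match = linkPattern.exec(markdown)) !== null) {
  listings.push({
    json: {
      title: match[1].trim(),
      url: match[2],
    },
  });
}

// One n8n item per listing, ready to be appended to Google Sheets
return listings;
```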
by Yaron Been
**Google Veo 3 Fast Video Generator**

**Description**

A faster and cheaper version of Google's Veo 3 video model, with audio.

**Overview**

This n8n workflow integrates with the Replicate API to use the google/veo-3-fast model. This powerful AI model can generate high-quality video content based on your inputs.

**Features**

- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**

Required:

- **prompt** (string): text prompt for video generation

Optional:

- **seed** (integer, default: none): random seed; omit for random generations
- **resolution** (string, default: 720p): resolution of the generated video
- **negative_prompt** (string, default: none): description of what to discourage in the generated video

**How to Use**

1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate video content.
4. Access the generated output from the final node.

**API Reference**

- Model: google/veo-3-fast
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**

- Replicate API key
- n8n instance
- Basic understanding of video generation parameters
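For orientation, here is a rough sketch of the create-and-poll pattern that workflows like this implement against Replicate. The model-scoped endpoint and the polling interval are assumptions; verify them against Replicate's current documentation before relying on this.

```javascript
// Create a prediction for google/veo-3-fast, then poll until it finishes.
const REPLICATE_API_TOKEN = process.env.REPLICATE_API_TOKEN;

async function generateVideo(prompt) {
  const createRes = await fetch(
    'https://api.replicate.com/v1/models/google/veo-3-fast/predictions',
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${REPLICATE_API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ input: { prompt, resolution: '720p' } }),
    },
  );
  let prediction = await createRes.json();

  // Poll the prediction's status URL (the workflow does this in a loop)
  while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
    await new Promise((r) => setTimeout(r, 5000));
    const pollRes = await fetch(prediction.urls.get, {
      headers: { Authorization: `Bearer ${REPLICATE_API_TOKEN}` },
    });
    prediction = await pollRes.json();
  }

  return prediction.output; // URL of the generated video on success
}
```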
by Shiva
This workflow enables users to submit food images to a Telegram bot, which uses OpenAI's GPT-4 Vision to identify the item and estimate its caloric value. The results are stored in Google Sheets and sent back to the user.

**What it does**

1. Triggers on a photo sent via Telegram.
2. Acknowledges the user with a confirmation message.
3. Downloads the image file securely using Telegram's API.
4. Sends the image to GPT-4 Vision with the prompt: "Describe this food and estimate its calories."
5. Logs the GPT response to a Google Sheet (with timestamp).
6. Replies to the user with the result (e.g., food name and estimated calories).

**Use cases**

- Personal food tracking
- Nutrition logging via chat
- Meal journaling for fitness or health

**Requirements**

- Telegram bot token (via credentials)
- OpenAI GPT-4 Vision access
- Google Sheets credential with access to the target sheet

**Notes**

You can extend this template to calculate daily totals, categorize meals (breakfast/lunch/dinner), or even integrate with calorie goals. The acknowledgement message confirms receipt to enhance UX. Ideal for wellness apps, chat-based food journals, or AI-powered health bots.
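A minimal sketch of the vision call at the heart of this workflow, assuming the downloaded photo is available as a base64 binary property named `data` and using `gpt-4o` as a stand-in for whichever vision-capable model you configure; the API key is read from an environment variable here purely for illustration.

```javascript
// Send the food photo plus the prompt to a vision-capable chat model.
const imageBase64 = $input.first().binary.data.data; // base64 in n8n's default binary mode

const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Describe this food and estimate its calories.' },
          {
            type: 'image_url',
            image_url: { url: `data:image/jpeg;base64,${imageBase64}` },
          },
        ],
      },
    ],
  }),
});

const result = await response.json();
return [{ json: { answer: result.choices[0].message.content } }];
```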
by Davide
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This n8n workflow integrates the powerful Pipedream MCP server with AI capabilities to create a smart, extensible assistant that can interact with over 2,700 APIs and 10,000+ tools, all within a secure and modular structure. This setup seamlessly integrates Pipedream's MCP server with n8n, enabling your AI assistant to leverage thousands of APIs and tools securely.

**Benefits**

- **Massive tool access**: instantly connect 2,700+ APIs using Pipedream MCP tools, from productivity apps to custom APIs, with zero-code integration.
- **Dynamic AI agent**: a LangChain agent allows flexible tool execution and contextual conversations, powered by GPT.
- **Easy customization**: simply copy your MCP tool URL into the respective sseEndpoint field to extend the agent's capabilities.
- **Scalable and modular**: add or remove tools (like Slack, Notion, Stripe, etc.) without altering the core logic.
- **Secure and revocable**: credentials and API access can be managed directly via Pipedream's MCP dashboard.

**How It Works**

1. **Chat trigger**: the workflow begins when a chat message is received via the "When chat message received" node, which acts as the entry point.
2. **AI agent processing**: the message is passed to the AI Agent node, which orchestrates the interaction using the connected tools and memory.
3. **Language model**: the OpenAI Chat Model (GPT-4.1-mini) processes the user's input and generates responses or actions.
4. **Memory**: the Simple Memory node retains context from the conversation to enable coherent multi-turn interactions.
5. **Tool integration**: the Calendly and Gmail nodes (connected via Pipedream's MCP server) allow the AI to perform actions like scheduling events or sending emails. These tools use SSE (Server-Sent Events) endpoints provided by Pipedream.
6. **Response**: the AI Agent combines the model's output and tool responses to deliver a final reply to the user.

**Set Up Steps**

1. **Sign up for Pipedream**: create an account on Pipedream and set up your MCP server.
2. **Configure MCP tools**: connect your accounts (e.g., Calendly, Gmail) in Pipedream and obtain the SSE endpoint for each tool (e.g., https://mcp.pipedream.net/xxx/calendly_v2).
3. **Update n8n nodes**: replace the placeholder SSE endpoints in the Calendly and Gmail nodes with your Pipedream MCP URLs.
4. **OpenAI credentials**: ensure the OpenAI Chat Model node has valid API credentials (configured under "OpenAi account").
5. **Activate the workflow**: enable the "When chat message received" node (disabled by default) and deploy the workflow.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
by Ange Russell
This workflow fetches real-time air quality and pollen data using Ambee's APIs and sends a friendly, personalized daily summary by email. It uses a scheduler to automate data collection, AI-generated health tips, and clear, actionable messages, making it perfect for sensitive users (e.g. kids with asthma, allergy sufferers).

**Use case**: ideal for individuals with respiratory conditions or allergies, or anyone who wants to stay informed about environmental conditions affecting their health.

**Set up steps**

Estimated time: 10–15 minutes. You'll need:

- Ambee API key (free registration)
- OpenAI API key
- Email credentials (Gmail)
- User profile

💡 Keep in mind: you'll need to input your location coordinates (we've pre-filled Braunschweig as an example). The AI Agent node uses a ready-made prompt tailored for email, but feel free to adapt it to other messaging platforms.
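For orientation, here is a rough Code-node-style sketch of the two Ambee requests this kind of workflow relies on. The endpoint paths, header name, and coordinates are assumptions based on Ambee's public documentation, so verify them against your account's API reference.

```javascript
// Fetch latest air quality and pollen readings for one location from Ambee.
const AMBEE_KEY = process.env.AMBEE_API_KEY;
const lat = 52.2689; // Braunschweig, as in the template's example
const lng = 10.5268;

async function ambeeGet(path) {
  const res = await fetch(`https://api.ambeedata.com${path}?lat=${lat}&lng=${lng}`, {
    headers: { 'x-api-key': AMBEE_KEY, 'Content-Type': 'application/json' },
  });
  return res.json();
}

const airQuality = await ambeeGet('/latest/by-lat-lng');
const pollen = await ambeeGet('/latest/pollen/by-lat-lng');

// Both payloads are then handed to the AI Agent node to draft the email summary
return [{ json: { airQuality, pollen } }];
```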
by Yaron Been
**Zsxkib Canary Qwen 2.5b Text Generator**

**Description**

🎤 The best open-source speech-to-text model as of July 2025, transcribing audio with a record 5.63% WER and enabling AI tasks like summarization directly from speech ✨

**Overview**

This n8n workflow integrates with the Replicate API to use the zsxkib/canary-qwen-2.5b model. This powerful AI model can transcribe audio and generate high-quality text content based on your inputs.

**Features**

- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**

Required:

- **audio** (string): audio file to transcribe

Optional:

- **llm_prompt** (string, default: none): optional LLM analysis prompt
- **show_confidence** (boolean, default: false): show AI reasoning in analysis
- **include_timestamps** (boolean, default: true): include timestamps in transcript

**How to Use**

1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate text content.
4. Access the generated output from the final node.

**API Reference**

- Model: zsxkib/canary-qwen-2.5b
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**

- Replicate API key
- n8n instance
- Basic understanding of text generation parameters
by Yaron Been
**Wan Video Wan 2.2 I2v A14b Video Generator**

**Description**

Image-to-video at 720p and 480p with Wan 2.2 A14B.

**Overview**

This n8n workflow integrates with the Replicate API to use the wan-video/wan-2.2-i2v-a14b model. This powerful AI model can generate high-quality video content based on your inputs.

**Features**

- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

**Parameters**

Required:

- **prompt** (string): prompt for video generation
- **image** (string): input image to generate video from

Optional:

- **seed** (integer, default: none): random seed; leave blank for random
- **num_frames** (integer, default: 81): number of video frames; 81 frames give the best results
- **resolution** (string, default: 480p): resolution of the video; 832x480px corresponds to a 16:9 aspect ratio, and 480x832px to 9:16
- **sample_shift** (number, default: 5): sample shift factor
- **sample_steps** (integer, default: 30): number of generation steps; fewer steps means faster generation at the expense of output quality, and 30 steps is sufficient for most prompts
- **frames_per_second** (integer, default: 16): frames per second; note that the pricing of this model is based on the video duration at 16 fps

**How to Use**

1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate video content.
4. Access the generated output from the final node.

**API Reference**

- Model: wan-video/wan-2.2-i2v-a14b
- API Endpoint: https://api.replicate.com/v1/predictions

**Requirements**

- Replicate API key
- n8n instance
- Basic understanding of video generation parameters
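The same create-and-poll pattern sketched for the Veo template above applies here; only the input object changes. The values below are purely illustrative, including the hypothetical source image URL.

```javascript
// Example input for wan-video/wan-2.2-i2v-a14b (illustrative values only)
const input = {
  prompt: 'A slow cinematic pan across a misty mountain lake',
  image: 'https://example.com/source-frame.jpg', // hypothetical source image URL
  resolution: '480p',
  num_frames: 81,
  frames_per_second: 16,
};
```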
by Robert Breen
A step-by-step demo that shows how to pull your Outlook calendar events for the week and ask GPT-4o to write a short summary. Along the way you'll practice basic data-transform nodes (Code, Filter, Aggregate) and see where to attach the required API credentials.

**1️⃣ Manual Trigger — Run Workflow**

Why: lets you click "Execute" in the n8n editor so you can test each change.

**2️⃣ Get Outlook Events — Get many events**

- Node type: Microsoft Outlook → Event → Get All
- Fields selected: subject, start
- API setup (inside this node):
  1. Click Credentials ▸ Microsoft Outlook OAuth2 API.
  2. If you haven't connected before: choose "Microsoft Outlook OAuth2 API" → "Create New", sign in, and grant the Calendars.Read permission.
  3. Save the credential (e.g., "Microsoft Outlook account").
- Output: a list of events with the raw ISO start time.

> Teaching moment: Outlook returns a full dateTime string. We'll normalize it next so it's easy to filter.

**3️⃣ Normalize Dates — Convert to Date Format**

```javascript
// Code node contents
return $input.all().map(item => {
  const startDateTime = new Date(item.json.start.dateTime);
  const formattedDate = startDateTime.toISOString().split('T')[0]; // YYYY-MM-DD
  return {
    json: {
      ...item.json,
      startDateFormatted: formattedDate
    }
  };
});
```

**4️⃣ Filter the Events Down to This Week**

After we've normalised the start date-time into a simple YYYY-MM-DD string, we drop in a Filter node. Add one rule for every day you want to keep, for example 2025-08-07 or 2025-08-08. Rows that match any of those dates will continue through the workflow; everything else is quietly discarded.

Why we're doing this: we only want to summarise tomorrow's and the following day's meetings, not the entire calendar.

**5️⃣ Roll All Subjects Into a Single Item**

Next comes an Aggregate node. Tell it to aggregate the subject field and choose the option "Only aggregated fields." The result is one clean item whose subject property is now a tidy list of every meeting title. It's far easier (and cheaper) to pass one prompt to GPT than dozens of small ones.

**6️⃣ Turn That List Into Plain Text**

Insert a small Code node right after the aggregation:

```javascript
return [{
  json: {
    text: items
      .map(item => JSON.stringify(item.json))
      .join('\n')
  }
}];
```

**Need a Hand?**

I'm always happy to chat automation, n8n, or Outlook API quirks.

Robert Breen – Automation Consultant & n8n Instructor
📧 robert@ynteractive.com | LinkedIn