by Paul Taylor
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

📄 Post New Articles from Feeds to Slack Channel

🧠 What This Workflow Does
This workflow automates the discovery and sharing of fresh articles from a curated list of RSS feeds. It performs the following steps:
- Reads a list of RSS feed URLs from a Google Sheet (Feeds tab).
- Fetches the latest articles from each feed.
- Checks for duplicates against previously published links stored in another sheet (Posted Articles tab).
- Filters out already shared articles.
- Posts the new articles to a designated Slack channel with formatted titles and links.
- Logs the newly shared articles back into the Google Sheet to prevent duplicates.

🛠️ Prerequisites
To use this workflow, you must have:
- ✅ Google Sheets OAuth2 credentials set up in n8n (used to access and update the RSS feed and post history sheets)
- ✅ Slack OAuth2 credentials (used to post messages to a specific Slack channel)
- ✅ A Google Spreadsheet with:
  - Feeds tab – columns: title, link
  - Posted Articles tab – columns: title, link, pubDate

🔧 Environment Variables or Custom Values
You will need to set the following n8n variable or replace it with a direct value:
- {{$vars.Daily_Industry_News_Automation_Google_Sheet}}: reference to the Google Sheet document ID (you can use a static ID if preferred)

Also update:
- Slack channelId: replace with your actual Slack channel ID if not dynamically referenced

⏰ Trigger & Scheduling
- **Trigger type**: Cron node
- **Default schedule**: every day at 7:00 AM
You can modify this in the “Trigger Workflow” node to suit your own schedule.

🎯 Intended Use Case
This workflow is ideal for:
- Marketing teams curating daily or weekly news digests
- Founders or industry professionals monitoring sector updates
- Automating internal Slack news updates
- Avoiding duplicate content when sourcing from multiple feeds
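For readers who want to see how the duplicate check could look in practice, here is a minimal sketch of an n8n Code node (Run Once for All Items mode). The node names "RSS Read" and "Get Posted Articles" are assumptions for illustration; adjust them to match the actual nodes in your copy of the workflow.

```javascript
// Collect the links that were already posted (from the "Posted Articles" sheet).
const postedLinks = new Set(
  $('Get Posted Articles').all().map(item => item.json.link)
);

// Keep only the freshly fetched articles whose link has not been posted yet.
const newArticles = $('RSS Read').all().filter(
  item => !postedLinks.has(item.json.link)
);

// Pass through title, link and pubDate so Slack and the logging step can use them.
return newArticles.map(item => ({
  json: {
    title: item.json.title,
    link: item.json.link,
    pubDate: item.json.pubDate,
  },
}));
```

Everything this node emits is both posted to Slack and appended to the Posted Articles tab, which is what prevents duplicates on the next run.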
by AmirHossein MnasouriZade
📦 Send Telegram Notifications for New WooCommerce Orders

This workflow automatically sends a Telegram notification when an order status in WooCommerce changes to "Processing." Perfect for online store owners who want instant updates on order fulfillment.

⚙️ Set Up Telegram Alerts for WooCommerce Orders
1. Configure the WooCommerce webhook to trigger on order updates.
2. Create a Telegram bot and obtain the API token.
3. Set up Telegram credentials in n8n.
4. Configure the Telegram node with your chat ID.
5. Activate and test the workflow by placing a new order.

💡 Notes
You can customize the message format in the 🖋️ Design Message Template node to include additional order details.

The message structure includes the following details:
🆔 Order Number: 11234
👦🏻 Customer Name: John Doe
💵 Amount: 299.99 USD
📅 Order Date: 25th November 2024 at 14:42
🏙 City: New York
📞 Phone: +1 555-1234
✍🏻 Order Note: Fast delivery requested
📦 Ordered Products:
🔹 Wireless Earbuds (2 items)
📝 Type: Premium Sound Edition

Contact me on [Telegram](https://t.me/amir676080) if you have any questions.
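As a rough illustration of what the 🖋️ Design Message Template node can do, here is a minimal Code-node sketch that builds the message text from a WooCommerce order payload. The field names (billing, line_items, etc.) follow the standard WooCommerce REST order schema, but treat the exact paths as assumptions and verify them against your webhook payload.

```javascript
// Build a human-readable Telegram message from the incoming WooCommerce order.
const order = $input.first().json;

const products = (order.line_items || [])
  .map(li => `🔹 ${li.name} (${li.quantity} items)`)
  .join('\n');

const message = [
  `🆔 Order Number: ${order.id}`,
  `👦🏻 Customer Name: ${order.billing.first_name} ${order.billing.last_name}`,
  `💵 Amount: ${order.total} ${order.currency}`,
  `📅 Order Date: ${order.date_created}`,
  `🏙 City: ${order.billing.city}`,
  `📞 Phone: ${order.billing.phone}`,
  `✍🏻 Order Note: ${order.customer_note || '-'}`,
  '📦 Ordered Products:',
  products,
].join('\n');

return [{ json: { message } }];
```

The Telegram node then simply sends {{$json.message}} to your chat ID.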
by Abbas Ali
This automation fetches the latest article from a WordPress blog, summarizes it using OpenAI, and sends the summary to a list of subscribers via email. Ideal for content creators and bloggers who want to distribute digestible content without manual effort.

Use Case
Perfect for:
• Newsletter creators
• Content marketers
• Bloggers
• Knowledge managers

Nodes Used
• Schedule Trigger
• HTTP Request
• Set
• OpenAI
• Google Sheets
• Email (Gmail/SMTP)
• IF
• SplitInBatches

Workflow Steps
1. Trigger: Starts on a schedule (e.g., daily at 9:00 AM).
2. Fetch Blog Post: Retrieves the most recent post from a WordPress blog via HTTP Request.
3. Extract Fields: A Set node extracts the title, link, and content.
4. Summarize Article: OpenAI processes the article and returns a 3-point summary.
5. Fetch Subscribers: Google Sheets reads email addresses from a subscriber list.
6. Loop Emails: SplitInBatches and Send Email nodes loop through the subscribers.
7. Conditional Logic: An IF node skips articles shorter than 300 words.

Credentials Required
• OpenAI API key (for content summarization)
• Google Sheets OAuth2 (to read subscriber emails)
• Gmail or SMTP (for sending emails)

Test Instructions
1. Replace the blog URL in the HTTP Request node.
2. Connect your OpenAI API key.
3. Link your Google Sheet with a column named Email.
4. Set up Gmail or SMTP credentials.
5. Run manually for testing, then activate the schedule.
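If you want to see how two of the more fiddly steps could be wired, here is a minimal sketch: the HTTP Request node can call the standard WordPress REST API, and a small Code node can implement the 300-word check that feeds the IF node. The blog URL and output field names here are placeholders; adapt them to your site.

```javascript
// HTTP Request node URL (latest post, newest first):
//   https://yourblog.example/wp-json/wp/v2/posts?per_page=1&orderby=date&order=desc

// Code node: count words in the rendered content and flag short articles.
const post = $input.first().json;

// Strip HTML tags before counting words.
const plainText = (post.content?.rendered || '').replace(/<[^>]+>/g, ' ');
const wordCount = plainText.split(/\s+/).filter(Boolean).length;

return [{
  json: {
    title: post.title?.rendered,
    link: post.link,
    wordCount,
    longEnough: wordCount >= 300, // the IF node can branch on this flag
  },
}];
```

The IF node then only lets items with longEnough = true continue to the OpenAI summarization step.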
by Jonathan | NEX
Stop manually checking suspicious links. This free n8n workflow provides the foundation for a powerful, automated URL analysis pipeline. Using the NixGuard AI engine, you can instantly analyze suspicious URLs from emails, logs, or tickets to uncover phishing attempts, malware hosting sites, and malicious redirects.

What You Will Automate:
🤖 Instant Threat Triage: Get an immediate AI-powered summary of why a URL is malicious, saving you critical investigation time.
🎯 Actionable IOC Extraction: Automatically extract the final redirected URL, malicious domains, and IPs to fuel your threat hunting and blocking rules.
🚀 SOAR-Ready Foundation: This workflow is the perfect starting point for your security playbooks. Use the output to:
- Alert: Send instant notifications to Slack or Teams.
- Respond: Create tickets in Jira or TheHive.
- Block: Add malicious domains to your firewall or DNS filter.

Download this free template and automate your first line of defense against web-based threats in minutes! Don't have the main workflow yet? Get it HERE!

🔗 Learn more about NixGuard: thenex.world
🔗 Get started with a free security subscription: thenex.world/security/subscribe

For search: URL Scanning, Phishing, Threat Intelligence, SOAR, SOC Automation, NixGuard, Free, AI, Incident Response, Cybersecurity, Automation, Link Analysis, MTTR, Malware, VirusTotal
by David w/ SimpleGrow
Receive Webhook Notification
The workflow starts when a webhook receives a POST request from Whapi, notifying that a new participant has joined a WhatsApp group.

Filter the Event
The workflow checks two conditions:
- The event is for the correct WhatsApp group (matching the specific group ID).
- The action type is "add" (meaning a user was added to the group).

Send Welcome Message
If both conditions are met, the workflow sends a personalized welcome message to the new participant via Whapi. The message explains the group rules and how the user can earn points and participate in weekly raffles.

Create Airtable Record
After sending the welcome message, the workflow creates a new record in the Airtable database for the new participant. The record includes:
- The participant’s WhatsApp ID
- An initial engagement count of 100 points
- The date of the last interaction (set to today)

Result
Every new group member is automatically welcomed and registered in your engagement database with starter points, ready to participate in your group’s activities and rewards. This workflow ensures new users are greeted, informed, and instantly included in your engagement tracking system.
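For orientation, here is a minimal Code-node sketch of the filtering and record-preparation logic described above. The Whapi payload fields (group_id, action, participant) and the output field names are assumptions for illustration; check them against your actual webhook body and Airtable base.

```javascript
// Incoming Whapi webhook body.
const event = $input.first().json.body || $input.first().json;

const TARGET_GROUP_ID = 'your-group-id@g.us'; // replace with your WhatsApp group ID

// Only continue for "participant added" events in the right group.
if (event.group_id !== TARGET_GROUP_ID || event.action !== 'add') {
  return []; // returning no items stops this branch of the workflow
}

// Prepare the fields for the new Airtable record.
return [{
  json: {
    whatsappId: event.participant,                           // the new member's WhatsApp ID
    engagementPoints: 100,                                   // starter points
    lastInteraction: new Date().toISOString().slice(0, 10),  // today's date
  },
}];
```

In the template itself these checks are done with a filter step rather than code, but the logic is the same: match the group ID, match the "add" action, then hand the participant on to the welcome message and Airtable steps.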
by Friedemann Schuetz
Welcome to my Automated Image Metadata Tagging Workflow!

DISCLAIMER: This workflow only works with self-hosted n8n instances! You have to install the n8n-nodes-exif-data Community Node!

This workflow automatically analyzes the image content with the help of AI and writes the result directly back into the image file as keywords (https://n8n.io/workflows/2995).

This workflow has the following steps:
- Google Drive trigger (scan for new files added in a specific folder)
- Download the added image file
- Analyse the content of the image
- Merge metadata and image file
- Write the keywords into the metadata (dc:subject/keywords) and create a new image file
- Update the original file in the Google Drive folder

The following accesses are required for the workflow:
- The n8n-nodes-exif-data Community Node (must be installed)
- Google Drive: Documentation
- AI API access (e.g. via OpenAI, Anthropic, Google or Ollama)

You can contact me via LinkedIn if you have any questions: https://www.linkedin.com/in/friedemann-schuetz
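To make the merge step a bit more concrete, here is a minimal Code-node sketch that turns the AI analysis output into a clean keyword list before it is written into dc:subject/keywords. The field name aiKeywords and the exact format expected by the n8n-nodes-exif-data node are assumptions; check the community node's documentation for the precise parameter shape.

```javascript
// The AI step is assumed to return something like:
//   { aiKeywords: "sunset, beach, palm trees, golden hour" }
const raw = $input.first().json.aiKeywords || '';

// Normalize: split on commas, trim whitespace, drop empties and duplicates.
const keywords = [...new Set(
  raw.split(',').map(k => k.trim().toLowerCase()).filter(Boolean)
)];

// Hand the keyword list to the EXIF-writing step.
return [{
  json: {
    keywords,                           // e.g. ["sunset", "beach", "palm trees", "golden hour"]
    keywordsString: keywords.join(', '),
  },
}];
```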
by Yaron Been
Fire Flux Image Generator

Description
The image generation model tailored for local development and personal use.

Overview
This n8n workflow integrates with the Replicate API to use the fire/flux model. This powerful AI model can generate high-quality image content based on your inputs.

Features
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

Parameters
Required Parameters
- **prompt** (string): Prompt for the generated image

Optional Parameters
- **seed** (integer, default: 0): Random seed. Set for reproducible generation.
- **go_fast** (boolean, default: True): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.
- **num_outputs** (integer, default: 1): Number of outputs to generate.
- **aspect_ratio** (string, default: 2:1): Aspect ratio for the generated image.
- **output_format** (string, default: png): Format of the output images.
- **output_quality** (integer, default: 80): Quality when saving the output images, from 0 to 100 (100 is best quality, 0 is lowest). Not relevant for .png outputs.
- **num_inference_steps** (integer, default: 4): Number of denoising steps. 4 is recommended; fewer steps produce lower-quality outputs, faster.
- **disable_safety_checker** (boolean, default: False): Disable the safety checker for generated images.

How to Use
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node

API Reference
- Model: fire/flux
- API Endpoint: https://api.replicate.com/v1/predictions

Requirements
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
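As a concrete illustration of the request the workflow sends to Replicate, here is a minimal sketch of the JSON body for creating a prediction with the parameters listed above. Whether you address the model by name or by a version hash depends on how Replicate exposes fire/flux, so treat the addressing shown here as an assumption and check the model page.

```javascript
// Body for POST https://api.replicate.com/v1/predictions
// (the HTTP Request node also needs an Authorization header with your Replicate API key)
const predictionRequest = {
  // Some models are addressed via a version hash instead of a name; check the model page.
  model: 'fire/flux',
  input: {
    prompt: 'a lighthouse on a cliff at sunrise, soft morning light',
    go_fast: true,
    megapixels: '1',
    num_outputs: 1,
    aspect_ratio: '2:1',
    output_format: 'png',
    output_quality: 80,
    num_inference_steps: 4,
    disable_safety_checker: false,
  },
};

return [{ json: predictionRequest }];
```

The response contains a prediction id and a status; the workflow's status-checking step polls that prediction until it reports "succeeded" and then reads the output image URLs.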
by Kyle Morse
Takes your raw, unpolished voice transcripts and transforms them into well-structured LinkedIn posts using AI. Perfect for when you have good ideas but they come out as rambling thoughts.

The Problem: You record voice memos with great ideas, but when you read the transcript, it's full of "ums," incomplete sentences, and scattered thoughts. Turning that into a professional LinkedIn post takes forever.

The Solution: Email your raw transcript to this workflow. It combines your unpolished content with examples from your inspiration document (posts you've saved that match your desired style), then uses AI to create a clean, engaging LinkedIn post.

What actually happens:
1. You email a raw voice transcript to your workflow email address
2. The workflow pulls style examples from your Google Doc
3. AI reformats your scattered thoughts into a coherent 150-300 word LinkedIn post
4. You get an email back with the polished content + a suggested image description
5. Copy, paste, and post to LinkedIn

You provide: the raw transcript (from your phone's voice recorder or any transcription tool) and a Google Doc with LinkedIn posts you admire for style reference.

You get: professional LinkedIn content that sounds like you, but organized and polished.

Technical requirements: Anthropic API, email account, Google Doc with example posts.

This is basically having an AI writing assistant that knows your voice and preferred style, turning your brain dumps into professional content.
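If you are curious how the "combine transcript with style examples" step can be expressed, here is a minimal Code-node sketch that assembles the prompt sent to the AI. The node names and field names are assumptions for illustration only.

```javascript
// Raw transcript arrives from the email trigger; style examples come from the Google Doc.
const transcript = $('Email Trigger').first().json.text || '';
const styleExamples = $('Get Style Doc').first().json.content || '';

const prompt = [
  'Rewrite the following raw voice transcript as a polished LinkedIn post of 150-300 words.',
  'Match the tone and structure of these example posts:',
  styleExamples,
  '--- RAW TRANSCRIPT ---',
  transcript,
  'Also suggest a short image description to accompany the post.',
].join('\n\n');

return [{ json: { prompt } }];
```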
by Dataki
Workflow updated on 17/06/2024: Added a 'Summarize' node to avoid creating a row for each Notion content block in the Supabase table.

Store Notion's Pages as Vector Documents into Supabase

This workflow assumes you have a Supabase project with a table that has a vector column. If you don't have one, follow the instructions here: Supabase Langchain Guide

Workflow Description
This workflow automates the process of storing Notion pages as vector documents in a Supabase database with a vector column. The steps are as follows:

1. Notion Page Added Trigger: Monitors a specified Notion database for newly added pages. You can create a specific Notion database where you copy the pages you want to store in Supabase.
   Node: Page Added in Notion Database
2. Retrieve Page Content: Fetches all block content from the newly added Notion page.
   Node: Get Blocks Content
3. Filter Non-Text Content: Excludes blocks of type "image" and "video" to focus on textual content.
   Node: Filter - Exclude Media Content
4. Summarize Content: Concatenates the Notion blocks' content to create a single text for embedding.
   Node: Summarize - Concatenate Notion's blocks content
5. Store in Supabase: Stores the processed documents and their embeddings in a Supabase table with a vector column.
   Node: Store Documents in Supabase
6. Generate Embeddings: Uses OpenAI's API to generate embeddings for the textual content.
   Node: Generate Text Embeddings
7. Create Metadata and Load Content: Loads the block content and creates associated metadata, such as page ID and block ID.
   Node: Load Block Content & Create Metadata
8. Split Content into Chunks: Divides the text into smaller chunks for easier processing and embedding generation.
   Node: Token Splitter
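To make the "Summarize - Concatenate Notion's blocks content" and metadata steps more tangible, here is a minimal Code-node sketch. The exact field names returned by the Notion "Get Blocks Content" step can differ by block type, so treat the content, id and page-ID paths below as assumptions to verify against your own data.

```javascript
// Each incoming item is one Notion block that survived the media filter.
const blocks = $input.all();

// Concatenate all block texts into a single document for embedding.
const pageText = blocks
  .map(item => item.json.content || '')
  .filter(Boolean)
  .join('\n');

// Metadata that will be stored alongside the vectors in Supabase.
const metadata = {
  pageId: blocks[0]?.json.root_id,          // ID of the Notion page (assumed field name)
  blockIds: blocks.map(item => item.json.id),
};

return [{ json: { pageText, metadata } }];
```

The Token Splitter then chops pageText into chunks, each chunk is embedded with OpenAI, and the chunk plus its metadata land as one row in the Supabase vector table.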
by Belmont Digital
This n8n workflow verifies the deliverability of mailing addresses stored in Groundhogg CRM by integrating with Lob's address verification service.

Who is this for?
This template is designed for Groundhogg CRM users who need to ensure the accuracy of mailing addresses stored in their CRM systems.

What problem is this workflow solving? / Use Case
This workflow addresses the challenge of maintaining accurate mailing addresses in CRM databases by verifying the deliverability of addresses.

What this workflow does
1. A new contact is created in Groundhogg CRM
2. A webhook is sent to n8n
3. n8n verifies whether the address is deliverable via Lob
4. The result is reported back to Groundhogg CRM

Set Up Steps
- Watch this setup video: https://www.youtube.com/watch?v=nrV0P0Yz8FI
- Takes 10-30 minutes to set up
- Accounts needed:
  - Groundhogg CRM
  - Lob account (https://www.lob.com; the $0.00/mo plan includes 300 US address verifications)
  - n8n
Before using this template, ensure you have API keys for your Groundhogg CRM app and Lob. Set up authentication for both services within n8n.

How to customize this workflow to your needs
You can customize this workflow by adjusting the trigger settings to match Groundhogg CRM’s workflow configuration. Additionally, you can modify the actions taken based on the deliverability outcome, such as updating custom fields or sending notifications.
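For reference, the verification step boils down to one call to Lob's US verification endpoint. Here is a minimal sketch of how the request body could be built in a Code node before the HTTP Request; the contact field names coming from Groundhogg are assumptions, so map them to your actual webhook payload.

```javascript
// Map the Groundhogg contact fields onto Lob's us_verifications request body.
// HTTP Request node: POST https://api.lob.com/v1/us_verifications (Basic auth, API key as username)
const contact = $input.first().json;

const lobRequestBody = {
  primary_line: contact.street_address,  // assumed field names from the Groundhogg webhook
  city: contact.city,
  state: contact.state,
  zip_code: contact.zip,
};

return [{ json: { lobRequestBody } }];
```

Lob answers with a deliverability value (for example "deliverable" or "undeliverable"), which the last step writes back to the contact in Groundhogg, e.g. into a custom field or a tag.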
by Belmont Digital
This n8n workflow verifies the deliverability of mailing addresses stored in HighLevel by integrating with Lob's address verification service.

Who is this for?
This template is designed for HighLevel users who need to ensure the accuracy of mailing addresses stored in their CRM systems.

What problem is this workflow solving? / Use Case
This workflow addresses the challenge of maintaining accurate mailing addresses in CRM databases by verifying the deliverability of addresses.

What this workflow does
1. A new contact is created in HighLevel
2. A webhook is sent to n8n
3. n8n verifies whether the address is deliverable via Lob
4. The result is reported back to HighLevel

Set Up Steps
- Watch this setup video: https://www.youtube.com/watch?v=T7Baopubc-0
- Takes 10-30 minutes to set up
- Accounts needed:
  - HighLevel
  - Lob account (https://www.lob.com; the $0.00/mo plan includes 300 US address verifications)
  - n8n
Before using this template, ensure you have API keys for your HighLevel app and Lob. Set up authentication for both services within n8n.

How to customize this workflow to your needs
You can customize this workflow by adjusting the trigger settings to match HighLevel's workflow configuration. Additionally, you can modify the actions taken based on the deliverability outcome, such as updating custom fields or sending notifications.
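Since this template mirrors the Groundhogg version above, here is the complementary piece: a minimal Code-node sketch of how the Lob response can be turned into something HighLevel can store. The custom-field name and tag values are assumptions for illustration.

```javascript
// Lob's us_verifications response includes a "deliverability" value.
const verification = $input.first().json;

const isDeliverable = verification.deliverability === 'deliverable';

// Shape the result for the "report back to HighLevel" step,
// e.g. updating a custom field and adding a tag on the contact.
return [{
  json: {
    addressStatus: verification.deliverability,   // e.g. "deliverable", "undeliverable"
    tagToAdd: isDeliverable ? 'address-verified' : 'address-invalid',
  },
}];
```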
by Yaron Been
IBM Granite Speech 3.3 8B Text Generator

Description
Granite-speech-3.3-8b is a compact and efficient speech-language model, specifically designed for automatic speech recognition (ASR) and automatic speech translation (AST).

Overview
This n8n workflow integrates with the Replicate API to use the ibm-granite/granite-speech-3.3-8b model. This powerful AI model can generate high-quality text content based on your inputs.

Features
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

Parameters
Optional Parameters
- **seed** (integer, default: None): Random seed. Leave blank to randomize the seed.
- **audio** (array, default: None): Audio inputs for the model.
- **top_k** (integer, default: 50): The number of highest-probability tokens to consider for generating the output. If > 0, only keep the top k tokens with the highest probability (top-k filtering).
- **top_p** (number, default: 0.9): A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- **prompt** (string, default: ""): User prompt to send to the model.
- **max_tokens** (integer, default: 512): The maximum number of tokens the model should generate as output.
- **min_tokens** (integer, default: 0): The minimum number of tokens the model should generate as output.
- **temperature** (number, default: 0.6): The value used to modulate the next-token probabilities.
- **chat_template** (string, default: None): A template to format the prompt with. If not provided, the default prompt template will be used.
- **system_prompt** (string, default: None): System prompt to send to the model. The chat template provides a good default.

How to Use
1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate text content
4. Access the generated output from the final node

API Reference
- Model: ibm-granite/granite-speech-3.3-8b
- API Endpoint: https://api.replicate.com/v1/predictions

Requirements
- Replicate API key
- n8n instance
- Basic understanding of text generation parameters
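The "automated status checking" mentioned above is the part that usually needs the most explanation, so here is a minimal Code-node sketch of how a polling step can decide whether to keep waiting. It assumes the preceding HTTP Request node has just fetched the prediction's status from Replicate; the flag names are placeholders.

```javascript
// A Replicate prediction moves through statuses such as
// "starting" -> "processing" -> "succeeded" (or "failed"/"canceled").
const prediction = $input.first().json;

const finished = ['succeeded', 'failed', 'canceled'].includes(prediction.status);

return [{
  json: {
    id: prediction.id,
    status: prediction.status,
    finished,                           // IF node: loop back through a Wait node while false
    output: prediction.output ?? null,  // transcription / translation text once succeeded
    error: prediction.error ?? null,
  },
}];
```

When finished is true and the status is "succeeded", the output field holds the generated text and the workflow can hand it to whatever comes next.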