by Inderjeet Bhambra
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**How it works**
This workflow is an intelligent SEO analysis pipeline that ethically scrapes blog content and performs a comprehensive SEO evaluation using AI. It receives blog URLs via webhook, validates permissions through robots.txt compliance, extracts the content, and generates detailed SEO insights across four strategic dimensions: Content Optimization, Keyword Strategy, Technical SEO, and Backlink Building potential. The system prioritizes ethical web scraping by checking robots.txt permissions before proceeding, ensuring compliance with website policies. Upon successful analysis, it returns a structured JSON report with actionable SEO recommendations, performance scores, and optimization strategies.

**Technical Specifications**
- Trigger: HTTP POST webhook
- Processing Time: 30-60 seconds depending on content size
- AI Model: GPT-4.1 or better, with a specialized SEO analysis prompt
- Output Format: Structured JSON
- Error Handling: Graceful failure with informative messages
- Compliance: Respects website robots.txt policies
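As a rough illustration, a client might trigger the analysis like this. The webhook path ("seo-analysis") and the request field ("blogUrl") are assumptions for the sketch; use whatever your webhook node actually defines:

```typescript
// Minimal sketch of triggering the SEO analysis webhook from a client.
// The webhook path and the "blogUrl" field name are illustrative assumptions.
async function requestSeoReport(blogUrl: string): Promise<unknown> {
  const response = await fetch("https://your-n8n-instance/webhook/seo-analysis", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ blogUrl }),
  });
  if (!response.ok) {
    // The workflow fails gracefully, so the body should contain an informative message.
    throw new Error(`Analysis failed: ${response.status} ${await response.text()}`);
  }
  return response.json(); // structured JSON report with scores and recommendations
}

requestSeoReport("https://example.com/blog/my-post").then(console.log);
```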
by Parag Javale
Social Media Auto-Poster (Google Sheets → Twitter & Instagram)

This workflow automatically:
- Pulls rows marked as Pending from a Google Sheet.
- Generates a formatted Instagram caption and an HTML preview.
- Converts the HTML into an image via HCTI.io.
- Posts the content: as a tweet (text only) to Twitter (X), and as a post (image + caption) to Instagram via the Facebook Graph API.
- Marks the row in Google Sheets as Posted with a timestamp.

It runs every 5 hours (configurable via the Schedule Trigger).

**Requirements**
- **Google Sheets API credentials** connected in n8n.
- **HCTI.io account** (HTML → Image API).
- **Twitter (X) OAuth1 credentials.**
- **Facebook/Instagram Graph API** access token (for the business account/page).
- A Google Sheet with at least these columns: RowID, Caption, Desc, Hashtags, Status. Set Status to Pending for any row you want posted.

**Setup**
1. Import the JSON workflow (My_workflow.json) into your n8n instance.
2. Link all credentials (replace placeholders with your own API keys and tokens).
3. Update the Google Sheet ID and Sheet Name inside the Get row(s) in sheet and Update Status Posted nodes.
4. (Optional) Adjust the posting interval in the Schedule Trigger node.

**How It Works**
- Trigger: Runs every 5 hours.
- Fetch Rows: Reads Google Sheets for rows with Status = Pending.
- Caption Generation: Combines Desc + Hashtags into final_caption (see the sketch below).
- HTML → Image: Converts the caption into a styled 1080x1080 post.
- Social Posting: Posts the caption to Twitter (text only) and uploads the image + caption to Instagram.
- Update Status: Marks the row as Posted on [timestamp].

**Notes**
- Facebook/Instagram tokens expire; refresh them or use long-lived tokens.
- HCTI.io may require a paid plan for high volumes.
- Works best with a business Instagram account linked to a Facebook Page.

**License**
This workflow can be reused and adapted freely under the MIT license.
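A minimal sketch of the caption-generation step, in the style of an n8n Code node. Column names match the sheet layout above; the template's actual expression may differ:

```typescript
// Sketch of combining Desc + Hashtags into final_caption.
// n8n Code nodes run JavaScript; this TypeScript version shows the same idea.
interface SheetRow {
  RowID: string;
  Caption: string;
  Desc: string;
  Hashtags: string;
  Status: "Pending" | "Posted";
}

function buildFinalCaption(row: SheetRow): string {
  // The combined caption is tweeted as-is and rendered into the 1080x1080 Instagram image.
  return `${row.Desc.trim()}\n\n${row.Hashtags.trim()}`;
}

const example: SheetRow = {
  RowID: "1",
  Caption: "",
  Desc: "New blog post: automating social media with n8n",
  Hashtags: "#n8n #automation #nocode",
  Status: "Pending",
};

console.log(buildFinalCaption(example));
```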
by Rosh Ragel
This workflow processes emails received in Gmail and saves detailed information about each email to a MySQL database.

Before using, you need to have:
- Gmail credentials
- MySQL database credentials
- A table in your database with the following columns: messageId (Gmail message ID), threadId, snippet, sender_name (nullable), sender_email, recipient_name (nullable), recipient_email, subject (nullable)

**How it works:**
- The Gmail Trigger listens for new emails (checked every minute).
- A Code node extracts the sender's name and email and the recipient's name and email from each message (a sketch of this step is shown below).
- The MySQL node inserts the extracted data into your database. If an entry with the same sender email already exists, it updates the record with the new details.

**How to use:**
- Make sure your database table has all the required columns listed above.
- Select the appropriate table and configure the matching column (e.g., id) to avoid duplicates.

**Customizing this workflow:**
You can further modify the workflow to store attachments, timestamps, labels, or any other Gmail metadata as needed.
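For orientation, the extraction step might look roughly like this. The template's Code node is JavaScript and may be structured differently; the output keys here simply mirror the table columns above:

```typescript
// Sketch of extracting sender/recipient name and address from Gmail headers.
// Gmail's "From"/"To" headers typically look like: Jane Doe <jane@example.com>
function parseAddress(header: string): { name: string | null; email: string } {
  const match = header.match(/^\s*(?:"?([^"<]*)"?\s*)?<([^>]+)>\s*$/);
  if (match) {
    const name = match[1]?.trim() || null;
    return { name, email: match[2].trim() };
  }
  // No display name, just a bare address.
  return { name: null, email: header.trim() };
}

const from = parseAddress("Jane Doe <jane@example.com>");
const to = parseAddress("bob@example.com");

const row = {
  sender_name: from.name,       // "Jane Doe"
  sender_email: from.email,     // "jane@example.com"
  recipient_name: to.name,      // null
  recipient_email: to.email,    // "bob@example.com"
};
console.log(row);
```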
by Julian Kaiser
**How it works**
Many users have asked in the support forum about different methods to analyze images and PDF documents with Google Gemini AI in n8n. This workflow answers that question by demonstrating five different approaches:
1. Single image with auto binary passthrough - the simplest approach, using the AI Agent's automatic binary handling
2. Multiple images with predefined prompts - for customized analysis with different instructions per image
3. Native n8n item-by-item processing - for handling multiple items using n8n's standard workflow paradigm
4. PDF analysis via direct API - for document analysis and text extraction
5. Image analysis via direct API - for direct control over API parameters

Each method has advantages depending on your specific use case, data volume, and customization needs.

**Set up steps**
Setup time: ~5-10 minutes

You'll need:
- A Google Gemini API key
- n8n with HTTP Request and AI Agent nodes

Important: For the HTTP Request nodes making direct API calls to Gemini (Methods 3, 4, and 5), you'll need to set up Query Authentication with your Gemini API key. Add a parameter named "key" with your API key value in the Query Auth section of these nodes.

I'll update this if I find better ways. Also let me know if you know other ways. Eager to learn :)
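For orientation, a direct image-analysis call (the idea behind the "via direct API" methods) looks roughly like this outside of n8n. The endpoint, model name, and body shape follow the public Gemini REST API as I understand it; verify against the current documentation, since inside n8n the same request is built in an HTTP Request node with the key supplied via Query Auth:

```typescript
import { readFileSync } from "node:fs";

// Rough sketch of a direct Gemini API call for image analysis.
const apiKey = process.env.GEMINI_API_KEY; // same value used in the node's Query Auth "key" parameter
const model = "gemini-2.0-flash";          // illustrative model choice

async function analyzeImage(path: string, prompt: string) {
  const imageBase64 = readFileSync(path).toString("base64");
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [
          {
            parts: [
              { text: prompt },
              { inline_data: { mime_type: "image/jpeg", data: imageBase64 } },
            ],
          },
        ],
      }),
    }
  );
  return res.json();
}

analyzeImage("photo.jpg", "Describe this image in detail.").then(console.log);
```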
by Audun
A reusable and production-ready n8n workflow that secures public webhooks using Bearer Token authentication and dynamic request validation.

✨ What It Does
- **Verifies Bearer Token** - compares the Authorization header with a configured secret token.
- **Validates Required Fields** - checks that all expected fields are present in the incoming request body.
- **Returns Standardized JSON Responses**
  - 401 Unauthorized if the token is missing or invalid
  - 400 Bad Request if required fields are missing
  - 200 OK with a custom success payload

👤 Who It’s For
- Developers exposing n8n workflows as APIs
- No-code/low-code builders integrating with external forms or tools
- Anyone needing simple authentication and validation on incoming webhooks

💡 Why Use It
- 🔒 Secure: Prevents unauthorized access to your public workflows
- 🧼 Clean: Centralized configuration for token and required fields
- ⚙️ Flexible: Easy to extend and customize for any use case

🛠 Setup Instructions
1. Configure values in the Configuration node
   - Set your secret token: config.bearerToken = YOUR_TOKEN
   - Define required request fields by key, for example: config.requiredFields.message = true; config.requiredFields.email = true;
   - ✅ Only the keys matter – values can be anything.
2. Plug in your business logic
   - Replace the "Add workflow nodes here" placeholder with your own logic.
3. Customize the success response
   - Edit the Create Response node to shape your success payload.

🧪 Use Cases
- Securing public form submissions
- Creating internal API endpoints
- Validating data from external services

📌 Use this as a base for building secure, API-style workflows in n8n.

👋 Hello! I'm Audun / xqus. If my n8n workflows saved you time or sparked ideas, consider sending a little support my way. It helps me keep building cool stuff, and maybe grab a coffee ☕ along the way!
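For a sense of the logic involved, here is a minimal sketch of the token check and field validation as plain TypeScript. The names bearerToken and requiredFields mirror the Configuration node described above; the template's actual node code may differ:

```typescript
// Minimal sketch of the validation the workflow performs inside n8n.
interface Config {
  bearerToken: string;
  requiredFields: Record<string, boolean>;
}

interface WebhookRequest {
  headers: Record<string, string | undefined>;
  body: Record<string, unknown>;
}

function validate(req: WebhookRequest, config: Config): { status: number; body: unknown } {
  // 1. Verify the Bearer token from the Authorization header.
  const auth = req.headers["authorization"] ?? "";
  if (auth !== `Bearer ${config.bearerToken}`) {
    return { status: 401, body: { error: "Unauthorized" } };
  }

  // 2. Check that every required field is present in the request body.
  const missing = Object.keys(config.requiredFields).filter((key) => req.body[key] === undefined);
  if (missing.length > 0) {
    return { status: 400, body: { error: "Bad Request", missing } };
  }

  // 3. Success - the workflow's own business logic runs after this point.
  return { status: 200, body: { success: true } };
}
```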
by Jimleuk
This n8n template allows you to use AI to generate logos or images which mimic the visual styles of other logos or images. The model used to generate the images is Google's Imagen 3.0. With this template, users will be able to automate design and marketing tasks such as creating variants of existing designs, remixing existing assets to validate different styles, and exploring a range of designs which would previously have been too expensive and time-consuming.

**How it works**
- A form trigger is used to capture the source image to reference styles from and a prompt for the target image to generate.
- The source image is passed to Gemini 2.0 to be analysed and its visual style and tone extracted as a detailed description.
- This visual style description is then combined with the user's initial target image prompt (see the sketch below).
- This final prompt is given to Imagen 3.0 to generate the images.
- A quick webpage is put together with the generated images to present back to the user. If the user provided an email address, a copy of this HTML page will be sent.

**How to use**
- Ensure the workflow is live to share the form publicly.
- The source image must be accessible to your n8n instance - either a public image on the internet or one within your network.
- For best results, select a source image which has a strong visual identity, as this will allow the LLM to describe it better.
- For your prompt, refer to the Imagen prompt guide found here: https://ai.google.dev/gemini-api/docs/image-generation#imagen-prompt-guide

**Requirements**
- Gemini for the LLM and Imagen model.
- Cloudinary for the image CDN.
- Gmail for email sending.

**Customising this workflow**
Feel free to swap any of these out for tools and services you prefer. Want to fully automate? Switch the form trigger for a webhook trigger!
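A minimal sketch of the prompt-combination step, where the style description extracted by Gemini is merged with the user's target-image prompt before being sent to Imagen 3.0. The wording here is an illustrative assumption, not the template's exact prompt:

```typescript
// Sketch of combining the extracted style description with the user's prompt.
function buildImagenPrompt(userPrompt: string, styleDescription: string): string {
  return [
    userPrompt.trim(),
    "Match the following visual style as closely as possible:",
    styleDescription.trim(),
  ].join("\n\n");
}

const styleFromGemini =
  "Flat vector illustration, bold geometric shapes, warm orange and teal palette, minimal shading.";
const prompt = buildImagenPrompt("A logo for a coffee subscription service", styleFromGemini);
console.log(prompt); // final prompt handed to Imagen 3.0
```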
by Adam Janes
This workflow gives you the ability to reply to a long email with a voice note, rather than having to type everything out. ChatGPT will format your audio response and create an email draft for you.

**How it works**
When a new email arrives in your inbox, the workflow checks if it needs a response, and if it does, it sends a message to you on Telegram via a VoiceEmailer bot. When you reply to that message with an audio message, the second part of this workflow is triggered. It checks if the message is in the right format, transcribes the audio, and creates a draft response that shows up in the same email thread.

**Set up steps**
1. Add your credentials for Gmail and OpenAI.
2. Create a Telegram bot following the instructions here.
3. Connect your Telegram credentials so the workflow will use your bot.
4. Turn on the workflow, and message the bot from your Telegram account.
5. Find the Chat ID from the Executions tab of your workflow, and enter it in as a variable.
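As a rough illustration of the "right format" check, a Code-node-style snippet might inspect the incoming Telegram update like this. The field names follow the Telegram Bot API's message object; the template's actual check may differ:

```typescript
// Rough sketch of checking that the incoming Telegram reply is a voice note.
interface TelegramUpdate {
  message?: {
    chat: { id: number };
    voice?: { file_id: string; duration: number; mime_type?: string };
    text?: string;
  };
}

function extractVoiceFileId(update: TelegramUpdate): string | null {
  const voice = update.message?.voice;
  if (!voice) {
    // Not a voice note - the workflow can reply asking for an audio message instead.
    return null;
  }
  // The file_id is later used to download the audio for transcription.
  return voice.file_id;
}
```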
by Yatharth Chauhan
**How it works**
This workflow automates the process of handling incoming emails by:
1. Receiving emails via IMAP.
2. Converting the email to Markdown for better AI understanding.
3. Summarizing the email using an AI model.
4. Drafting a professional reply with AI, based on the summary.
5. Requesting human approval for the AI-generated response.
6. Sending the approved reply back to the original sender.

**Set up steps**
Estimated time: 10–20 minutes (excluding credential setup)

What you’ll need:
- IMAP credentials for your email inbox
- SMTP credentials for sending emails
- OpenAI (or compatible) API key for the AI steps

Setup outline:
1. Add your IMAP and SMTP credentials to the workflow.
2. Connect your OpenAI (or compatible) account for AI summarization and reply generation.
3. Deploy the workflow in n8n and activate it.
4. Test by sending an email to your connected inbox.

Note: Detailed configuration tips and explanations are included as sticky notes inside the workflow for each step.
by David Ashby
Complete MCP server exposing all AWS Transcribe Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every AWS Transcribe Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n AWS Transcribe Tool node with full error handling

📋 Available Operations (4 total)
Every possible AWS Transcribe Tool operation is included:

🔧 Transcriptionjob (4 operations)
• Create a transcription job
• Delete a transcription job
• Get a transcription job
• Get many transcription jobs

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native AWS Transcribe Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every AWS Transcribe Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Jean-Marie Rizkallah
🧩 Jamf Smart Group Membership to Slack

Automatically export Jamf smart group membership to Slack in CSV format. Perfect for IT and security teams who need fast visibility into device grouping without manually logging into Jamf. Slack automatically parses the CSV, making it viewable directly in the chat, with no download required.

✅ Prerequisites
• A Jamf Pro API key with permissions to read smart groups and computer details
• A Slack app or incoming webhook URL with permission to post messages to your desired channel

🔍 How it works
• Manually trigger the flow or connect it to a webhook
• Fetch the list of smart group IDs (set manually in the workflow)
• Loop over each group to get its members
• Use a sub-workflow to fetch detailed info for each device
• Convert the member list to CSV (see the sketch below)
• Post the CSV file to a Slack channel

⚙️ Set up steps
• Takes ~5–10 minutes to configure
• Set your Jamf BaseURL and group IDs in the Set nodes
• Add your Jamf Pro API credentials to the HTTP Request nodes
• Provide your Slack webhook token or channel ID in the Slack node
• Optional: Customize CSV fields or formatting as needed
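A minimal sketch of the CSV-conversion step. The device field names (id, name, serialNumber) are illustrative assumptions; adjust them to match the fields returned by your Jamf Pro API calls and the columns you want in Slack:

```typescript
// Sketch of converting a smart group's member list to CSV.
interface GroupMember {
  id: number;
  name: string;
  serialNumber: string;
}

function toCsv(members: GroupMember[]): string {
  const escape = (value: string | number) => `"${String(value).replace(/"/g, '""')}"`;
  const header = ["id", "name", "serialNumber"].join(",");
  const rows = members.map((m) => [m.id, m.name, m.serialNumber].map(escape).join(","));
  return [header, ...rows].join("\n");
}

const csv = toCsv([
  { id: 101, name: "MacBook-Pro-Alice", serialNumber: "C02XYZ123" },
  { id: 102, name: "MacBook-Air-Bob", serialNumber: "C02ABC456" },
]);
console.log(csv); // this string becomes the file posted to Slack
```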
by Lucas Peyrin
**How it works**
Ever wonder how to make your workflows smarter? How to handle different types of data in different ways? This template is a hands-on tutorial that teaches you the three most fundamental nodes for controlling the flow of your automations: Merge, IF, and Switch.

To make it easy to understand, we use a simple package sorting center analogy:
- **Data Items** are packages on a conveyor belt.
- The Merge Node is where multiple conveyor belts combine into one.
- The IF Node is a simple sorting gate with two paths (e.g., "Fragile" or "Not Fragile").
- The Switch Node is an advanced sorting machine that routes packages to many different destinations.

This workflow takes you on a step-by-step journey through the sorting center:
1. Creating Packages: Three different "packages" (two letters and one parcel) are created using Set nodes.
2. Merging: The first Merge node combines all three packages onto a single conveyor belt so they can be processed together.
3. Simple Sorting: An IF node checks if a package is fragile. If true, it's sent down one path; if false, it's sent down another.
4. Re-Grouping: After being processed separately, another Merge node brings the packages back together. This "Split > Process > Merge" pattern is a critical concept in n8n!
5. Advanced Sorting: A Switch node inspects each package's destination and routes it to the correct output (London, New York, Tokyo, or a Default bin).

By the end, you'll see how all packages have been correctly sorted, and you'll have a solid understanding of how to build intelligent, branching logic in your own workflows.

**Set up steps**
Setup time: 0 minutes! This template is a self-contained tutorial and requires zero setup. There are no credentials or external services to configure.

Simply click the "Execute Workflow" button. Follow the flow from left to right, clicking on each node to see its output and reading the detailed sticky notes to understand what's happening at each stage.
by n8n Team
This workflow digests mentions of n8n on Reddit that can be sent as a single email or Slack summary each week. We use OpenAI to classify whether a specific Reddit post is really about n8n or not, and then summarise it into a bullet-point sentence.

**How it works**
1. Get posts from Reddit that might be about n8n;
2. Filter for the most relevant posts (posted in the last 7 days, more than 5 upvotes, and original content);
3. Check if the post is actually about n8n;
4. If it is, categorise it with OpenAI.

Bear in mind: the workflow only considers the first 500 characters of each Reddit post. So if n8n is mentioned after this point, the post won't register as being about n8n.io.

**Next steps**
- Improve the OpenAI Summary node prompt to return cleaner summaries;
- Extend to more platforms/sources - e.g. it would be really cool to monitor larger Slack communities in this way;
- Do some classification on the type of user to highlight users likely to be in our ICP;
- Separate out a list of data sources (Reddit, Twitter, Slack, Discord etc.), extract messages from there and have them go to a sub-workflow for classification and summarisation.
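To illustrate the 500-character limitation mentioned above, the classification step works roughly like this. The prompt wording is an illustrative assumption, not the workflow's exact prompt:

```typescript
// Rough sketch of the truncation and classification step described above.
function buildClassificationPrompt(postTitle: string, postBody: string): string {
  // Only the first 500 characters of the post body are considered, so a
  // mention of n8n beyond that point will not be picked up.
  const excerpt = postBody.slice(0, 500);
  return [
    "Is the following Reddit post about the workflow automation tool n8n (n8n.io)?",
    "Answer 'yes' or 'no', then summarise the post in one bullet-point sentence.",
    "",
    `Title: ${postTitle}`,
    `Post (first 500 characters): ${excerpt}`,
  ].join("\n");
}
```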