by Jean-Marie Rizkallah
🧩 Jamf Policies Export to Slack
Quickly export and review your entire Jamf policy configuration—including triggers, frequencies, and scope—directly in Slack. This enables IT and security teams to audit policy setups without logging into Jamf or generating reports manually.
❗The Problem
Jamf Pro lacks a straightforward way to quickly review or share a list of all configured policies, including key attributes like frequency, scope, or triggers. Security teams often need this for audit or compliance reviews, but navigating Jamf’s UI or exporting via the API is time-consuming.
🔧 This Fixes It
This workflow fetches all policies, extracts the most relevant fields, compiles them into a CSV file, and posts that readable file into a designated Slack channel—automatically or on demand.
✅ Prerequisites
• A Jamf Pro API key (OAuth2) with read access to policies
• A Slack app with permission to post files into your chosen channel
🔍 How it works (a Python sketch of the underlying API calls follows this entry)
• Manually trigger or use the webhook to initiate the flow
• Retrieve all policies from Jamf via the XML API
• Convert the XML response into JSON
• Split and loop through each policy ID
• Retrieve detailed data for each policy
• Format relevant fields (ID, name, trigger, scope, etc.)
• Convert the final data set into a CSV file
• Upload the file to your Slack channel
⚙️ Set up steps
• Takes ~10 minutes to configure
• Set the Jamf BaseURL in the “Jamf Server” node
• Configure Jamf OAuth2 credentials in the HTTP Request nodes
• Adjust the fields for export in the “Set-fields” node
• Set your Slack credentials and target channel in the “Post to Slack” node
• Optional: Customize the exported fields or filename
🔄 Automation Ready
Schedule this flow daily/weekly, or tie it to change events to keep your team informed.
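For readers who want to see what the HTTP Request nodes are doing outside n8n, here is a minimal Python sketch of the same sequence. It assumes the Jamf Classic API with an OAuth2 client-credentials token; the instance URL, client ID/secret, and exact response field names are placeholders and may differ on your Jamf Pro version.

```python
import csv
import requests

BASE_URL = "https://yourorg.jamfcloud.com"   # placeholder Jamf Pro instance
CLIENT_ID = "your-client-id"                 # placeholder API client
CLIENT_SECRET = "your-client-secret"

# 1. OAuth2 client-credentials token (what the workflow's credential does).
token = requests.post(
    f"{BASE_URL}/api/oauth/token",
    data={"grant_type": "client_credentials",
          "client_id": CLIENT_ID,
          "client_secret": CLIENT_SECRET},
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}

# 2. List all policies, then fetch the detail record for each one.
policies = requests.get(f"{BASE_URL}/JSSResource/policies", headers=headers).json()["policies"]
rows = []
for p in policies:
    general = requests.get(
        f"{BASE_URL}/JSSResource/policies/id/{p['id']}", headers=headers
    ).json()["policy"]["general"]
    rows.append({
        "id": general["id"],
        "name": general["name"],
        "trigger": general.get("trigger", ""),
        "frequency": general.get("frequency", ""),
    })

# 3. Write the CSV that the workflow posts to Slack.
with open("jamf_policies.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "trigger", "frequency"])
    writer.writeheader()
    writer.writerows(rows)
```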
by Lucas Peyrin
How it works
This template is a hands-on tutorial for one of n8n's most powerful data tools: the Compare Datasets node. It's the perfect next step after learning basic logic, showing you how to build robust data synchronization workflows.
We use a simple Warehouse Audit analogy to make the concept crystal clear:
**Warehouse A:** Our main, "source of truth" database. This is the master list of what our inventory *should* be.
**Warehouse B:** A second, remote database (like a Notion page or Google Sheet) that we need to keep in sync.
**The Compare Datasets Node:** This is our **Auditor**. It takes both inventory lists and meticulously compares them to find any discrepancies.
The Auditor then sorts every item into one of four categories, which correspond to the node's four outputs:
In A only: New products found in our main warehouse that need to be added to Warehouse B.
Same: Products that match perfectly in both warehouses. No action needed!
Different: Products that exist in both places but have different details (e.g., stock count). These need to be updated in Warehouse B.
In B only: Extra products found in Warehouse B that aren't in our master list. These need to be deleted.
This pattern is the foundation for any two-way data sync you'll ever need to build (the same four-way split is sketched in plain code after this entry).
Set up steps
Setup time: 0 minutes! This workflow is a self-contained tutorial and requires no setup or credentials.
Click "Execute Workflow" to start the audit.
Explore the two Set nodes ("Warehouse A" and "Warehouse B") to see the initial data we are comparing.
Click on "The Auditor" (Compare Datasets node) to see how it's configured to use product_id as the matching key.
Follow the outputs to the four NoOp nodes to see which products were sorted into each category.
Read the sticky notes next to each output—they explain exactly why each item ended up there.
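If it helps to see the audit outside n8n, here is a small Python sketch of the four-way split the Compare Datasets node performs when product_id is the matching key. The sample records are invented for illustration.

```python
# Invented inventory records standing in for the two Set nodes.
warehouse_a = [
    {"product_id": 1, "name": "Widget", "stock": 10},
    {"product_id": 2, "name": "Gadget", "stock": 5},
    {"product_id": 3, "name": "Gizmo", "stock": 7},
]
warehouse_b = [
    {"product_id": 2, "name": "Gadget", "stock": 5},    # matches A exactly
    {"product_id": 3, "name": "Gizmo", "stock": 2},     # stock differs from A
    {"product_id": 4, "name": "Doohickey", "stock": 1}, # not in A at all
]

a = {item["product_id"]: item for item in warehouse_a}
b = {item["product_id"]: item for item in warehouse_b}

in_a_only = [a[k] for k in a.keys() - b.keys()]                          # add to Warehouse B
in_b_only = [b[k] for k in b.keys() - a.keys()]                          # delete from Warehouse B
same      = [a[k] for k in a.keys() & b.keys() if a[k] == b[k]]          # no action needed
different = [(a[k], b[k]) for k in a.keys() & b.keys() if a[k] != b[k]]  # update Warehouse B

print(len(in_a_only), len(same), len(different), len(in_b_only))  # 1 1 1 1
```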
by inderjeet Bhambra
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
How it works
This workflow is an intelligent SEO analysis pipeline that ethically scrapes blog content and performs comprehensive SEO evaluation using AI. It receives blog URLs via webhook, validates permissions through robots.txt compliance, extracts content, and generates detailed SEO insights across four strategic dimensions: Content Optimization, Keyword Strategy, Technical SEO, and Backlink Building potential.
The system prioritizes ethical web scraping by checking robots.txt permissions before proceeding, ensuring compliance with website policies (a sketch of this check follows below). Upon successful analysis, it returns a structured JSON report with actionable SEO recommendations, performance scores, and optimization strategies.
Technical Specifications
Trigger: HTTP POST webhook
Processing Time: 30-60 seconds depending on content size
AI Model: GPT-4.1 (minimum), with a specialized SEO analysis prompt
Output Format: Structured JSON
Error Handling: Graceful failure with informative messages
Compliance: Respects website robots.txt policies
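As a rough illustration of the robots.txt compliance step, the check amounts to something like the following in Python. The user-agent string and URL are placeholders, not values taken from the workflow.

```python
from urllib import robotparser
from urllib.parse import urlparse

def allowed_to_scrape(url: str, user_agent: str = "seo-analyzer-bot") -> bool:
    """Consult the site's robots.txt before fetching the blog post."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, url)

print(allowed_to_scrape("https://example.com/blog/my-post"))
```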
by Parag Javale
Social Media Auto-Poster (Google Sheets → Twitter & Instagram)
This workflow automatically:
Pulls rows marked as Pending from a Google Sheet.
Generates a formatted Instagram caption and HTML preview.
Converts the HTML into an image via HCTI.io.
Posts the content: as a tweet (text only) to Twitter (X), and as a post (image + caption) to Instagram via the Facebook Graph API.
Marks the row in Google Sheets as Posted with a timestamp.
It runs every 5 hours (configurable via the Schedule Trigger).
Requirements
**Google Sheets API credentials** connected in n8n.
**HCTI.io account** (HTML → Image API).
**Twitter (X) OAuth1 credentials**.
**Facebook/Instagram Graph API** access token (for the business account/page).
A Google Sheet with at least these columns: RowID, Caption, Desc, Hashtags, Status.
Set Status to Pending for any row you want posted.
Setup
Import the JSON workflow (My_workflow.json) into your n8n instance.
Link all credentials (replace placeholders with your own API keys and tokens).
Update the Google Sheet ID and Sheet Name inside the Get row(s) in sheet and Update Status Posted nodes.
(Optional) Adjust the posting interval in the Schedule Trigger node.
How It Works
Trigger: Runs every 5 hours.
Fetch Rows: Reads Google Sheets for rows with Status = Pending.
Caption Generation: Combines Desc + Hashtags into final_caption.
HTML → Image: Converts the caption into a styled 1080x1080 post (sketched in code below).
Social Posting: Posts the caption to Twitter (text only) and uploads the image + caption to Instagram.
Update Status: Marks the row as Posted on [timestamp].
Notes
Facebook/Instagram tokens expire; refresh or use long-lived tokens.
HCTI.io may require a paid plan for high volumes.
Works best with a business Instagram account linked to a Facebook Page.
License
This workflow can be reused and adapted freely under the MIT license.
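The caption-generation and HTML-to-image steps reduce to a couple of lines. Here is a hedged Python sketch: the sample row is invented, and the HCTI.io request follows their documented html-to-image endpoint, so check their docs for current parameters and authentication.

```python
import requests

# Invented row, shaped like the Google Sheet columns.
row = {"Desc": "Launching our new automation toolkit today!", "Hashtags": "#automation #n8n"}

# Caption Generation: Desc + Hashtags -> final_caption.
final_caption = f"{row['Desc']}\n\n{row['Hashtags']}"

# HTML -> Image: render a 1080x1080 card via HCTI.io (credentials are placeholders).
html = (
    "<div style='width:1080px;height:1080px;display:flex;align-items:center;"
    f"justify-content:center;font-size:48px;text-align:center;'>{final_caption}</div>"
)
resp = requests.post(
    "https://hcti.io/v1/image",
    auth=("HCTI_USER_ID", "HCTI_API_KEY"),
    data={"html": html},
)
print(final_caption)
print(resp.json().get("url"))  # image URL to pass to the Instagram Graph API post
```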
by Rosh Ragel
This workflow processes emails received in Gmail and saves detailed information about each email to a MySQL database.
Before using, you need to have:
Gmail credentials
MySQL database credentials
A table in your database with the following columns: messageId (Gmail message ID), threadId, snippet, sender_name (nullable), sender_email, recipient_name (nullable), recipient_email, subject (nullable)
How it works:
The Gmail Trigger listens for new emails (checked every minute).
A Code Node extracts the following fields from each email: sender's name and email, and recipient's name and email (see the sketch below).
The MySQL Node inserts the extracted data into your database. If an entry with the same sender email already exists, it updates the record with the new details.
How to use:
Make sure your database table has all required columns listed above.
Select the appropriate table and configure the matching column (e.g., id) to avoid duplicates.
Customizing this Workflow:
You can further modify the workflow to store attachments, timestamps, labels, or any other Gmail metadata as needed.
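The extraction in the Code node boils down to splitting the From/To headers into a display name and an address. A standalone Python equivalent looks like this (the header values are invented examples):

```python
from email.utils import parseaddr

# Example header values as they might appear on a Gmail message.
from_header = "Jane Doe <jane.doe@example.com>"
to_header = "support@example.com"

sender_name, sender_email = parseaddr(from_header)
recipient_name, recipient_email = parseaddr(to_header)

print(sender_name or None, sender_email)        # Jane Doe jane.doe@example.com
print(recipient_name or None, recipient_email)  # None support@example.com
```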
by Julian Kaiser
How it works
Many users have asked in the support forum about different methods to analyze images and PDF documents with Google Gemini AI in n8n. This workflow answers that question by demonstrating five different approaches:
Single image with auto binary passthrough - the simplest approach, using the AI Agent's automatic binary handling
Multiple images with predefined prompts - for customized analysis with different instructions per image
Native n8n item-by-item processing - for handling multiple items using n8n's standard workflow paradigm
PDF analysis via direct API - for document analysis and text extraction
Image analysis via direct API - for direct control over API parameters
Each method has advantages depending on your specific use case, data volume, and customization needs.
Set up steps
Setup time: ~5-10 minutes
You'll need:
A Google Gemini API key
n8n with HTTP Request and AI Agent nodes
Important: For the HTTP Request nodes making direct API calls to Gemini (Methods 3, 4, and 5), you'll need to set up Query Authentication with your Gemini API key. Add a parameter named "key" with your API key value in the Query Auth section of these nodes (a standalone sketch of such a call follows below).
I'll update this if I find better ways. Also let me know if you know other ways. Eager to learn :)
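For reference, a direct Gemini API call with the key passed as a query parameter looks roughly like this in Python. The model name and request/response shape are assumptions based on Google's public REST API and may differ from what the template's HTTP Request nodes use.

```python
import base64
import requests

API_KEY = "YOUR_GEMINI_API_KEY"   # sent as the "key" query parameter, like Query Auth in n8n
MODEL = "gemini-1.5-flash"        # placeholder; use any Gemini model you have access to

with open("example.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

body = {
    "contents": [{
        "parts": [
            {"text": "Describe this image and list any text it contains."},
            {"inline_data": {"mime_type": "image/png", "data": image_b64}},
        ]
    }]
}

resp = requests.post(
    f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent",
    params={"key": API_KEY},
    json=body,
)
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```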
by Audun
A reusable and production-ready n8n workflow that secures public webhooks using Bearer Token authentication and dynamic request validation.
✨ What It Does
**Verifies Bearer Token** - compares the Authorization header with a configured secret token.
**Validates Required Fields** - checks that all expected fields are present in the incoming request body.
**Returns Standardized JSON Responses**
401 Unauthorized if the token is missing or invalid
400 Bad Request if required fields are missing
200 OK with a custom success payload
👤 Who It’s For
Developers exposing n8n workflows as APIs
No-code/low-code builders integrating with external forms or tools
Anyone needing simple authentication and validation on incoming webhooks
💡 Why Use It
🔒 Secure: Prevents unauthorized access to your public workflows
🧼 Clean: Centralized configuration for token and required fields
⚙️ Flexible: Easy to extend and customize for any use case
🛠 Setup Instructions
Configure values in the Configuration node:
Set your secret token: config.bearerToken = YOUR_TOKEN
Define required request fields by key, for example: config.requiredFields.message = true; config.requiredFields.email = true;
✅ Only the keys matter – values can be anything.
Plug in your business logic: replace the "Add workflow nodes here" placeholder with your own logic (the validation it sits behind is sketched below).
Customize the success response: edit the Create Response node to shape your success payload.
🧪 Use Cases
Securing public form submissions
Creating internal API endpoints
Validating data from external services
📌 Use this as a base for building secure, API-style workflows in n8n.
👋 Hello! I'm Audun / xqus
If my n8n workflows saved you time or sparked ideas, consider sending a little support my way. It helps me keep building cool stuff — and maybe grab a coffee ☕ along the way!
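The validation the workflow performs can be summarized in a few lines. Here is a minimal Python sketch of the same checks and status codes; the token and required-field names are placeholders mirroring the Configuration node.

```python
# Stand-in for the workflow's Configuration node (placeholder values).
CONFIG = {
    "bearerToken": "YOUR_TOKEN",
    "requiredFields": {"message": True, "email": True},  # only the keys matter
}

def handle_webhook(headers: dict, body: dict) -> tuple[int, dict]:
    """Return (status_code, response_body), mirroring the workflow's three branches."""
    if headers.get("authorization", "") != f"Bearer {CONFIG['bearerToken']}":
        return 401, {"error": "Unauthorized"}

    missing = [field for field in CONFIG["requiredFields"] if field not in body]
    if missing:
        return 400, {"error": "Bad Request", "missing_fields": missing}

    return 200, {"status": "ok"}  # shape this the way your Create Response node does

print(handle_webhook({"authorization": "Bearer YOUR_TOKEN"},
                     {"message": "hi", "email": "a@b.example"}))
```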
by Jimleuk
This n8n template allows you to use AI to generate logos or images which mimic the visual styles of other logos or images. The model used to generate the images is Google's Imagen 3.0.
With this template, users can automate design and marketing tasks such as creating variants of existing designs, remixing existing assets to validate different styles, and exploring a range of designs which would previously have been too expensive and time-consuming.
How it works
A form trigger is used to capture the source image to reference styles from and a prompt for the target image to generate.
The source image is passed to Gemini 2.0 to be analysed and its visual style and tone extracted as a detailed description.
This visual style description is then combined with the user's initial target image prompt.
This final prompt is given to Imagen 3.0 to generate the images (a rough sketch of this step follows below).
A quick webpage is put together with the generated images to present back to the user.
If the user provided an email address, a copy of this HTML page will be sent.
How to use
Ensure the workflow is live to share the form publicly.
The source image must be accessible to your n8n instance - either a public image on the internet or one within your network.
For best results, select a source image with a strong visual identity, as these allow the LLM to describe it better.
For your prompt, refer to the Imagen prompt guide found here: https://ai.google.dev/gemini-api/docs/image-generation#imagen-prompt-guide
Requirements
Gemini for the LLM and Imagen model.
Cloudinary for image CDN.
Gmail for email sending.
Customising this workflow
Feel free to swap any of these out for tools and services you prefer.
Want to fully automate? Switch the form trigger for a webhook trigger!
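As a rough sketch of the final generation step, the prompt combination plus an Imagen call could look like the Python below. The style description and prompt are invented, and the model name and request shape follow Google's public docs at the time of writing; treat this as an assumption to verify against the Gemini API reference, not a definitive implementation of the template's node.

```python
import requests

API_KEY = "YOUR_GEMINI_API_KEY"

# Invented output of the Gemini style-analysis step, plus the user's target prompt.
style_description = "Flat vector style, bold geometric shapes, two-tone navy and coral palette."
user_prompt = "A minimalist fox logo for a software startup."
final_prompt = f"{user_prompt} Rendered in the following visual style: {style_description}"

# Imagen generation call (endpoint and parameters assumed from Google's docs; may change).
resp = requests.post(
    "https://generativelanguage.googleapis.com/v1beta/models/imagen-3.0-generate-002:predict",
    params={"key": API_KEY},
    json={"instances": [{"prompt": final_prompt}], "parameters": {"sampleCount": 2}},
)
predictions = resp.json().get("predictions", [])
print(f"Received {len(predictions)} base64-encoded images")
```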
by David Ashby
Complete MCP server exposing all AWS Transcribe Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.
⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL
🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every AWS Transcribe Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n AWS Transcribe Tool node with full error handling
📋 Available Operations (4 total)
Every possible AWS Transcribe Tool operation is included:
🔧 Transcriptionjob (4 operations)
• Create a transcription job
• Delete a transcription job
• Get a transcription job
• Get many transcription jobs
🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native AWS Transcribe Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic
💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints
✨ Benefits
• Complete Coverage: Every AWS Transcribe Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic
> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Jean-Marie Rizkallah
🧩 Jamf Smart Group Membership to Slack
Automatically export Jamf smart group membership to Slack in CSV format. Perfect for IT and security teams who need fast visibility into device grouping—without manually logging into Jamf. Slack automatically parses the CSV, making it viewable directly in the chat—no download required.
✅ Prerequisites
• A Jamf Pro API key with permissions to read smart groups and computer details
• A Slack app or incoming webhook URL with permission to post messages to your desired channel
🔍 How it works (a Python sketch of the group-membership calls follows this entry)
• Manually trigger the flow or connect it to a webhook
• Fetch the list of smart group IDs (set manually in the workflow)
• Loop over each group to get its members
• Use a sub-workflow to fetch detailed info for each device
• Convert the member list to CSV
• Post the CSV file to a Slack channel
⚙️ Set up steps
• Takes ~5–10 minutes to configure
• Set your Jamf BaseURL and group IDs in the Set nodes
• Add your Jamf Pro API credentials to the HTTP Request nodes
• Provide your Slack webhook token or channel ID in the Slack node
• Optional: Customize CSV fields or formatting as needed
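For orientation, here is a minimal Python sketch of the loop over smart groups using the Jamf Classic API. The base URL, token, group IDs, and response field names are placeholders and may differ on your instance; the CSV written at the end stands in for the file the Slack node posts.

```python
import csv
import requests

BASE_URL = "https://yourorg.jamfcloud.com"   # placeholder Jamf Pro instance
TOKEN = "YOUR_BEARER_TOKEN"                  # obtained via the Jamf API credential flow
GROUP_IDS = [12, 34]                         # the smart group IDs set in the workflow

headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"}
rows = []
for gid in GROUP_IDS:
    group = requests.get(
        f"{BASE_URL}/JSSResource/computergroups/id/{gid}", headers=headers
    ).json()["computer_group"]
    for computer in group.get("computers", []):
        rows.append({
            "group": group["name"],
            "computer_id": computer["id"],
            "computer_name": computer["name"],
        })

with open("smart_group_members.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["group", "computer_id", "computer_name"])
    writer.writeheader()
    writer.writerows(rows)
```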
by Lucas Peyrin
How it works
Ever wonder how to make your workflows smarter? How to handle different types of data in different ways? This template is a hands-on tutorial that teaches you the three most fundamental nodes for controlling the flow of your automations: Merge, IF, and Switch.
To make it easy to understand, we use a simple package sorting center analogy:
**Data Items** are packages on a conveyor belt.
The Merge Node is where multiple conveyor belts combine into one.
The IF Node is a simple sorting gate with two paths (e.g., "Fragile" or "Not Fragile").
The Switch Node is an advanced sorting machine that routes packages to many different destinations.
This workflow takes you on a step-by-step journey through the sorting center (the same journey is sketched in plain code after this entry):
Creating Packages: Three different "packages" (two letters and one parcel) are created using Set nodes.
Merging: The first Merge node combines all three packages onto a single conveyor belt so they can be processed together.
Simple Sorting: An IF node checks if a package is fragile. If true, it's sent down one path; if false, it's sent down another.
Re-Grouping: After being processed separately, another Merge node brings the packages back together. This "Split > Process > Merge" pattern is a critical concept in n8n!
Advanced Sorting: A Switch node inspects each package's destination and routes it to the correct output (London, New York, Tokyo, or a Default bin).
By the end, you'll see how all packages have been correctly sorted, and you'll have a solid understanding of how to build intelligent, branching logic in your own workflows.
Set up steps
Setup time: 0 minutes! This template is a self-contained tutorial and requires zero setup. There are no credentials or external services to configure.
Simply click the "Execute Workflow" button.
Follow the flow from left to right, clicking on each node to see its output and reading the detailed sticky notes to understand what's happening at each stage.
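For readers who think in code, here is the sorting-center journey as a small Python sketch. The package values are invented; the point is the Merge/IF/Switch pattern, not the data.

```python
# Creating Packages: three "packages", as the Set nodes would create them.
letters = [
    {"id": "L1", "fragile": False, "destination": "London"},
    {"id": "L2", "fragile": False, "destination": "Tokyo"},
]
parcel = [{"id": "P1", "fragile": True, "destination": "New York"}]

# Merging: combine the conveyor belts into one.
packages = letters + parcel

# Simple Sorting (IF): two paths, fragile or not.
fragile = [p for p in packages if p["fragile"]]
not_fragile = [p for p in packages if not p["fragile"]]

# Re-Grouping (Merge): the "Split > Process > Merge" pattern.
packages = fragile + not_fragile

# Advanced Sorting (Switch): route by destination, with a default bin.
bins = {"London": [], "New York": [], "Tokyo": [], "default": []}
for p in packages:
    bins.get(p["destination"], bins["default"]).append(p)

print({name: [p["id"] for p in contents] for name, contents in bins.items()})
```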
by n8n Team
This workflow digests mentions of n8n on Reddit that can be sent as a single email or Slack summary each week. We use OpenAI to classify whether a specific Reddit post is really about n8n or not, and then summarise it into a one-sentence bullet point.
How it works
Get posts from Reddit that might be about n8n;
Filter for the most relevant posts (posted in the last 7 days, more than 5 upvotes, and original content) — see the sketch below;
Check if the post is actually about n8n;
If it is, categorise it with OpenAI.
Bear in mind: the workflow only considers the first 500 characters of each Reddit post. If n8n is mentioned after this point, the post won't register as being about n8n.io.
Next steps
Improve the OpenAI Summary node prompt to return cleaner summaries;
Extend to more platforms/sources - e.g. it would be really cool to monitor larger Slack communities in this way;
Do some classification on the type of user to highlight users likely to be in our ICP;
Separate out a list of data sources (Reddit, Twitter, Slack, Discord etc.), extract messages from them and have them go to a sub-workflow for classification and summarisation.
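The relevance filter and the 500-character cut-off are simple to express in code. Here is a small Python sketch; the field names follow Reddit's API conventions (ups, is_self, created_utc), and the example post is invented.

```python
from datetime import datetime, timedelta, timezone

# Invented post, shaped like a Reddit API item.
post = {
    "title": "Automating my homelab with n8n",
    "selftext": "I built a workflow that watches my RSS feeds and ...",
    "ups": 12,
    "is_self": True,   # original (text) content, not a link post
    "created_utc": (datetime.now(timezone.utc) - timedelta(days=2)).timestamp(),
}

def is_relevant(p: dict) -> bool:
    """Posted in the last 7 days, more than 5 upvotes, and original content."""
    posted_at = datetime.fromtimestamp(p["created_utc"], tz=timezone.utc)
    return (
        posted_at > datetime.now(timezone.utc) - timedelta(days=7)
        and p["ups"] > 5
        and p["is_self"]
    )

if is_relevant(post):
    snippet = post["selftext"][:500]  # only this much is sent to OpenAI for classification
    print(snippet)
```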