by Jean-Marie Rizkallah
🧩 **Jamf Patch Summary to Slack**

Stay on top of software patch compliance by automatically posting Jamf patch summaries to Slack. This helps IT and security teams quickly identify outdated installs and take action without logging into Jamf.

✅ **Prerequisites**

- A Jamf Pro API key with permissions to read software titles and patch summaries
- A Slack app or incoming webhook URL with permission to post messages to your desired channel

🔍 **How it works**

- Manually trigger the flow or add a webhook
- Fetch a list of software titles from Jamf Pro
- Filter to select the software you're tracking (e.g. Chrome, Edge)
- Retrieve the patch summary for that software (latest version, up-to-date and out-of-date counts)
- Format the summary into Slack Block Kit
- Post the formatted summary to a Slack channel (a code sketch of these calls follows below)

⚙️ **Set up steps**

- Takes ~5–10 minutes to configure
- Set your server BaseURL variable in the Set node
- Add your Jamf Pro API credentials in the HTTP Request nodes (Get & Retrieve)
- Set the target software ID in the Filter node
- Add your Slack webhook URL or token in the final HTTP node
- Optional: adjust Slack formatting inside the Function node
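For illustration, here is a minimal Python sketch of the fetch-and-post steps, assuming a Slack incoming webhook. The Jamf patch-summary endpoint path and response field names are assumptions based on the Jamf Pro v2 API; verify them against your server's API documentation before relying on them.

```python
import requests

JAMF_BASE = "https://yourserver.jamfcloud.com"  # your Jamf Pro base URL (placeholder)
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # your incoming webhook (placeholder)

# Endpoint path and field names are assumptions -- check your Jamf Pro
# version's API docs for the exact patch-summary route and schema.
summary = requests.get(
    f"{JAMF_BASE}/api/v2/patch-software-title-configurations/1/patch-summary",
    headers={"Authorization": "Bearer <token>", "Accept": "application/json"},
).json()

# Shape the counts into Slack Block Kit and post to the channel.
blocks = [
    {"type": "header", "text": {"type": "plain_text", "text": f"Patch summary: {summary['title']}"}},
    {"type": "section", "fields": [
        {"type": "mrkdwn", "text": f"*Latest version:* {summary['latestVersion']}"},
        {"type": "mrkdwn", "text": f"*Up to date:* {summary['upToDate']}"},
        {"type": "mrkdwn", "text": f"*Out of date:* {summary['outOfDate']}"},
    ]},
]
requests.post(SLACK_WEBHOOK, json={"blocks": blocks})
```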
by Oneclick AI Squad
This n8n workflow automates the collection and processing of trip feedback data using Google Sheets as the backend. When new users are added to the system, they automatically receive feedback forms via email, and all responses are systematically processed and stored in Google Sheets for analysis and record-keeping.

**Good to know**

- The delay buffer prevents system overload and ensures data integrity before sending notifications.
- All feedback data is automatically organized and maintained in Google Sheets for easy access and analysis.
- The workflow handles both new user onboarding and trip feedback submission seamlessly.

**How it works**

- The Trigger - New User Entry node detects when a new user is added to the Google Sheets feedback form database.
- The Delay - Process Buffer node introduces a processing delay to ensure data is fully processed before sending notifications, avoiding premature actions.
- The Send Email To That New User node automatically sends a feedback form email to the newly registered user.
- When a user submits their trip feedback, the Trigger - Trip Form Submission node captures the submission.
- The Tack All Feedback Item node iterates over each form submission item to process multiple entries if present, ensuring all feedback data is handled.
- The Update - Trip Feedback Sheet node appends or updates the trip feedback data in Google Sheets, maintaining an organized record of all responses (see the sketch after this entry).

**How to use**

- Import the workflow into n8n and configure the nodes with your Google Sheets API credentials and email service settings.
- Set up your Google Sheets with the appropriate columns for user data and feedback responses.
- Test the workflow by adding a new user entry to verify email delivery and feedback processing.

**Requirements**

- Google Sheets API credentials with read/write permissions
- Email service configuration (SMTP or email API)
- Access to Google Sheets containing user data and feedback forms

**Customising this workflow**

- Modify the email template in the Send Email To That New User node to match your branding and feedback requirements.
- Adjust the delay timing in the Delay - Process Buffer node based on your system's processing needs.
- Customize the Google Sheets structure and update the Update - Trip Feedback Sheet node accordingly to match your data organization preferences.
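The sheet-update step reduces to appending one row per feedback item. A minimal sketch using the gspread library, with hypothetical sheet, tab, and column names that you would adjust to your own structure:

```python
import gspread

# Service-account credentials file and sheet/tab names are placeholders.
gc = gspread.service_account(filename="credentials.json")
ws = gc.open("Trip Feedback").worksheet("Responses")

feedback = {"email": "user@example.com", "trip": "Bali 2024",
            "rating": 5, "comments": "Great trip!"}

# Append one row per feedback item, matching the sheet's column order.
ws.append_row([feedback["email"], feedback["trip"],
               feedback["rating"], feedback["comments"]])
```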
by Henry
**Who is this for?**

This workflow is ideal for social media managers, content creators, marketing teams, and automation enthusiasts looking to streamline their Instagram Reels posting from Google Drive using n8n, Google Sheets, and Cloudinary.

**What problem is this workflow solving? / Use case**

Manually downloading video files, uploading to third-party platforms, and posting to Instagram Reels is time-consuming. This workflow automates the whole process, ensuring timely, consistent content delivery and reducing manual errors.

**What this workflow does**

- Automatically fetches scheduled Reel content from Google Sheets (Sample link)
- Downloads video files from Google Drive folders
- Uploads videos to Cloudinary for hosting
- Posts the videos as Instagram Reels with custom captions (see the sketch after this entry)
- Updates the Google Sheet to mark content as posted

**Setup**

- Prepare a Google Drive folder set to public sharing for your videos
- Create a Cloudinary account and configure upload presets
- Connect an Instagram Business account (linked to a Facebook Page)
- Set up a Google Sheet with video post details: Video Name, Type, Caption, Status
- Configure the workflow schedule in n8n

**How to customize this workflow to your needs**

- Adjust the schedule for desired posting frequency
- Add fields to your sheet for custom tags or content variations
- Change the Cloudinary or Instagram settings for different media types
- Integrate additional steps for error handling or approval workflows
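Under the hood, the Cloudinary and Instagram steps perform calls like the following. This is a rough Python sketch, not the workflow's exact requests: the account IDs, tokens, and Graph API version are placeholders, and Reels publishing requires an approved Instagram Business account.

```python
import time
import cloudinary.uploader
import requests

cloudinary.config(cloud_name="your-cloud", api_key="key", api_secret="secret")  # placeholders

# 1. Host the video on Cloudinary so Instagram can fetch it by URL.
upload = cloudinary.uploader.upload("reel.mp4", resource_type="video")
video_url = upload["secure_url"]

# 2. Create a Reels media container, then publish it (Instagram Graph API).
IG_USER = "1784xxxxxxxxxxx"   # Instagram Business account ID (placeholder)
TOKEN = "EAAB..."             # Page access token (placeholder)

container = requests.post(
    f"https://graph.facebook.com/v19.0/{IG_USER}/media",
    data={"media_type": "REELS", "video_url": video_url,
          "caption": "My caption #reels", "access_token": TOKEN},
).json()

# Production code should poll the container status; a fixed wait keeps the sketch short.
time.sleep(30)

requests.post(
    f"https://graph.facebook.com/v19.0/{IG_USER}/media_publish",
    data={"creation_id": container["id"], "access_token": TOKEN},
)
```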
by Rex Lui
Easily generate QR codes from any URL! This workflow lets users submit a URL via a simple form and instantly receive a downloadable QR code image, perfect for quick sharing or promotions. Setup is fast and user-friendly, so you'll be up and running in minutes! 🚀

**How it works**

- The end user submits a URL through a simple online form.
- The workflow automatically sends the submitted URL to a QR code generation API (a sketch of such a call follows below).
- The user receives a downloadable QR code image corresponding to their URL.

⚙️ **Setup instructions**

- Import workflow: click "Import from JSON" in your n8n environment and paste the provided workflow JSON.
- Click "Save" and activate the workflow.
- Double-click the "On form submission" node to obtain the production URL.
- You may now use this URL to generate QR codes.
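The generation step is a single HTTP request. A minimal Python sketch, assuming the freely available api.qrserver.com endpoint (the workflow may be configured with a different QR API):

```python
import requests

url_to_encode = "https://example.com"

# api.qrserver.com is one freely available QR generation API; treat this
# endpoint as an example rather than the workflow's fixed choice.
resp = requests.get(
    "https://api.qrserver.com/v1/create-qr-code/",
    params={"data": url_to_encode, "size": "300x300"},
)
with open("qr.png", "wb") as f:
    f.write(resp.content)  # downloadable QR code image
```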
by Aadarsh Jain
**Who is this for?**

This workflow is designed for DevOps engineers, platform engineers, and Kubernetes administrators who want to interact with their Kubernetes clusters through natural language queries in n8n. It's perfect for teams who need quick cluster insights without memorizing complex kubectl commands or switching between multiple cluster contexts manually.

**How it works?**

The workflow operates in three intelligent stages:

1. Cluster Discovery & Context Switching - Automatically lists available clusters from your kubeconfig and switches to the appropriate cluster based on your natural language query
2. Command Generation - Uses GPT-4o to analyze your request and generate the correct kubectl command with proper flags, selectors, and output formatting
3. Command Execution - Executes the generated kubectl command against your selected cluster and returns the results (a sketch of stages 1 and 3 follows below)

The workflow supports multi-cluster environments and can handle queries like:

- "Show me all pods in production cluster"
- "List failing deployments in production"
- "Get pod details in kube-system namespace"

**Setup**

1. Clone the MCP server:

   git clone https://github.com/aadarshjain/kubectl-mcp-server
   cd kubectl-mcp-server

2. Configure your kubeconfig - ensure your ~/.kube/config contains all the clusters you want to access
3. Set up MCP STDIO credentials in n8n
   - Command: /full/path/to/python-package
   - Arguments: /full/path/to/kubectl-mcp-server/server.py
4. Import the workflow into your n8n instance
5. Configure OpenAI credentials for the GPT-4o models
6. Test the workflow using the chat interface with queries like "show pods in [cluster-name]"
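Stages 1 and 3 reduce to plain kubectl invocations, which the MCP server runs on your behalf. A simplified Python sketch of those two stages; stage 2 (GPT-4o command generation) is omitted, and the context name is a placeholder:

```python
import subprocess

def kubectl(args: list[str]) -> str:
    """Run kubectl and return stdout, raising on a non-zero exit code."""
    return subprocess.run(["kubectl", *args], capture_output=True,
                          text=True, check=True).stdout

# Stage 1: discover clusters from the kubeconfig and switch context.
clusters = kubectl(["config", "get-contexts", "-o", "name"]).splitlines()
print("available clusters:", clusters)
kubectl(["config", "use-context", "production"])  # name taken from the user's query

# Stage 3: execute a generated command, e.g. for "show me all pods in production".
print(kubectl(["get", "pods", "--all-namespaces", "-o", "wide"]))
```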
by Angel Menendez
**Workflow Description**

**Who is this for?**

This workflow is designed for sales and revenue teams using Gong and Salesforce to track and analyze sales calls. It helps automate the extraction, filtering, and preprocessing of Gong call data for further AI analysis.

**What problem is this solving?**

Sales teams often generate large amounts of call data, but not all calls are relevant for deeper analysis. This workflow filters calls based on predefined criteria, extracts relevant metadata, and formats the data before passing it to an AI processing pipeline.

**What this workflow does**

- **Triggers on new Gong calls synced to Salesforce** every hour.
- **Filters calls based on opportunity stage** (Discovery or Meeting Booked).
- **Retrieves Gong call details** via API.
- **Formats call data into a structured JSON object** for AI processing (see the sketch after this entry).
- **Passes the structured data to a Gong Call Preprocessor workflow** for further insights.

**Setup**

1. Ensure that you have connected Salesforce and Gong APIs with valid credentials.
2. Modify the Salesforce query in Get all custom Salesforce Gong Objects to match your organization's requirements.
3. Set the schedule trigger interval in the Run Hourly node if needed.
4. Connect this workflow to an AI processing workflow to analyze call transcripts.

**Workflow Templates:**

- CallForge - 01 - Filter Gong Calls Synced to Salesforce by Opportunity Stage
- CallForge - 02 - Prep Gong Calls with Sheets & Notion for AI Summarization
- CallForge - 03 - Gong Transcript Processor and Salesforce Enricher
- CallForge - 04 - AI Workflow for Gong.io Sales Calls
- CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync
- CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI
- CallForge - 07 - AI Marketing Data Processing with Gong & Notion
- CallForge - 08 - AI Product Insights from Sales Calls with Notion

**How to customize**

- **Change filtering logic**: adjust the opportunity stage filter (Check if Opportunity Stage is Meeting Booked or Discovery) to match your sales process.
- **Modify data formatting**: add or remove fields in the Format call into correct JSON Object node to customize the output.
- **Adjust trigger frequency**: change the Run Hourly node to run at a different interval if required.
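Conceptually, the filter and format steps do something like the following. All field names here are assumptions standing in for your Salesforce custom object and Gong metadata fields:

```python
# Field names below mirror typical Salesforce/Gong metadata but are
# assumptions -- adjust them to your org's custom object schema.
TRACKED_STAGES = {"Discovery", "Meeting Booked"}

def filter_and_format(calls: list[dict]) -> list[dict]:
    """Keep only calls in tracked opportunity stages and shape them for AI processing."""
    relevant = [c for c in calls if c.get("opportunity_stage") in TRACKED_STAGES]
    return [
        {
            "callId": c["gong_call_id"],
            "opportunity": c["opportunity_id"],
            "stage": c["opportunity_stage"],
            "title": c.get("title", ""),
            "durationSeconds": c.get("duration", 0),
        }
        for c in relevant
    ]
```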
by Piotr Sobolewski
**How it works**

This advanced workflow transforms your long-form audio content (like podcast episodes or webinar recordings) into digestible, ready-to-use marketing assets. It's designed for podcasters, content creators, and marketers who want to maximize their content's reach.

It automatically:

- Takes a full transcript of your audio/video content as input.
- Generates a concise, comprehensive summary of the episode using advanced AI.
- Extracts a list of key topics and keywords from the transcript, perfect for SEO, tagging, and content categorization (see the sketch after this entry).
- Delivers the summary and keywords directly to your inbox or a connected tool for easy access.

Streamline your content repurposing pipeline and unlock new value from your audio and video assets with intelligent automation!

**Set up steps**

Setting up this powerful workflow typically takes around 20-30 minutes, as it involves multiple AI steps. You'll need to:

- Obtain API keys for your preferred AI service (e.g., OpenAI, Google AI).
- Have access to a method for generating transcripts from your audio/video (e.g., manually pasting, or using a separate transcription service like AssemblyAI, Whisper, etc.).
- Connect your preferred email service (e.g., Gmail) to receive the output.

All detailed setup instructions and specific configuration guidance are provided within the workflow itself using sticky notes.
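If you use OpenAI as the AI service, the two generation steps map to chat-completion calls like these. A minimal sketch; the model name and prompt wording are placeholders, not the template's exact configuration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
transcript = open("episode_transcript.txt").read()

# Step 1: concise episode summary.
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarize podcast transcripts concisely."},
        {"role": "user", "content": transcript},
    ],
).choices[0].message.content

# Step 2: key topics and keywords for SEO and tagging.
keywords = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract 10 SEO keywords as a comma-separated list."},
        {"role": "user", "content": transcript},
    ],
).choices[0].message.content

print(summary, keywords, sep="\n\n")
```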
by Danger
**How it Works**

This meta-workflow is designed to intelligently scan all your active workflows in n8n, identify those that contain Webhook nodes, and automatically generate a Swagger (OpenAPI) specification based on them. The output Swagger document reflects all accessible endpoints from your Webhook nodes, making it easier to:

- Visualize your API structure
- Share your endpoints
- Integrate with tools like Postman or Swagger UI

**Enhanced Parameter Support**

If you want the Swagger to reflect request parameters (e.g., query or body fields), you can annotate your Webhook nodes using the Note section. When configured properly, these annotations enrich your Swagger documentation with parameter names, types, and descriptions.

**Setup Steps**

1. Add the WebhookDocs to n8n: import the WebhookDocs JSON file into your n8n instance.
2. Activate the WebhookDocs (you can also use the test-endpoint).
3. Annotate Webhook nodes (optional but recommended). To enable parameter documentation, open the Note section of each Webhook node and add annotations in the following format (a parsing sketch follows below):

   //@body field_name string description
   //@query field_name string description

4. Open the page https://n8n.yourinstance.com/webhook/swagger
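For reference, the annotation format can be parsed into OpenAPI fragments with a few lines of code. A Python sketch of the idea; the actual workflow implements this inside n8n nodes, so treat this as illustrative:

```python
import re

ANNOTATION = re.compile(r"//@(body|query)\s+(\w+)\s+(\w+)\s+(.*)")

def parse_note(note: str) -> dict:
    """Turn Webhook-node note annotations into OpenAPI parameter fragments."""
    params, body_props = [], {}
    for line in note.splitlines():
        m = ANNOTATION.match(line.strip())
        if not m:
            continue
        kind, name, ftype, desc = m.groups()
        if kind == "query":
            params.append({"name": name, "in": "query",
                           "description": desc, "schema": {"type": ftype}})
        else:
            body_props[name] = {"type": ftype, "description": desc}
    return {"parameters": params,
            "requestBody": {"content": {"application/json":
                {"schema": {"type": "object", "properties": body_props}}}}}

print(parse_note("//@query limit integer max items to return\n"
                 "//@body name string user name"))
```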
by AI/ML API | D1m7asis
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**n8n Workflow Template: AI‑Powered Mental Health Support Bot**

Overview: This template enables you to build a Telegram bot that delivers real‑time, empathetic mental health support. Incoming messages tagged with #vent, #insight, or #cope are routed to GPT‑4o via the AI/ML API, which returns tailored, compassionate responses.

**How it works:**

1. Telegram Trigger listens for new chat messages or voice notes.
2. Show Typing Indicator immediately signals "typing…" in the chat.
3. Switch node examines the text prefix and routes to one of four branches (Vent, Insight, Cope, or default); a routing sketch follows below.
4. Set Prompt nodes build a JSON payload with a specific role‑play prompt for each branch.
5. AI/ML API node (model gpt-4o) generates the response.
6. Telegram node sends the AI's answer back to the user.

**Setup Steps:**

1. Connect your Telegram bot token in the Telegram credentials.
2. Add your AI/ML API key (GPT‑4o) in n8n's credential settings.
3. Activate the workflow and deploy your n8n instance webhook URL to BotFather.
4. Test by sending #vent I'm stressed, #insight Why do I feel…, or any tag in your Telegram chat.

This plug‑and‑play workflow brings AI‑driven emotional support directly into Telegram.
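The Switch and Set Prompt branches amount to mapping a hashtag prefix to a system prompt. A minimal Python sketch with illustrative prompt wording; the template's actual role-play prompts live in the Set nodes:

```python
# Role-play prompts here are illustrative; tune them to your own tone guidelines.
PROMPTS = {
    "#vent": "You are a compassionate listener. Validate the user's feelings.",
    "#insight": "You are a reflective guide. Help the user explore why they feel this way.",
    "#cope": "You are a supportive coach. Suggest gentle, practical coping strategies.",
}
DEFAULT_PROMPT = "You are an empathetic mental-health support companion."

def route(message: str) -> dict:
    """Pick a system prompt from the message's leading tag and build the API payload."""
    parts = message.split()
    tag = parts[0].lower() if parts else ""
    system = PROMPTS.get(tag, DEFAULT_PROMPT)  # default branch when no tag matches
    return {"model": "gpt-4o",
            "messages": [{"role": "system", "content": system},
                         {"role": "user", "content": message}]}

print(route("#vent I'm stressed about work"))
```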
by n8n Team
This workflow sends the contents of an email to a Notion database. The email must be labeled with a specific label for the workflow to trigger. The email subject will be the title of the Notion page, and a snippet of the email body will be the content of the Notion page. The email link will be added to the Notion page as a property.

**Prerequisites**

- Notion account and Notion credentials.
- Google account and Google credentials.

**How it works**

- On scheduled intervals, find all emails with a specific label.
- For each email, check if the email already exists in the Notion database. If it does not exist, create a new page in the Notion database; otherwise do nothing (see the sketch after this entry).
- When the task in the Notion database is checked off, the label will be removed from the email.

**Setup**

This workflow requires that you set up a Notion database or use an existing one with at least the following fields:

- Title (title)
- Thread ID (text)
- Email thread (URL)

Additionally, create a label that will be used to trigger the workflow in Gmail. In this workflow, the label is called "Notion".
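Using the official notion-client Python SDK, the check-then-create logic looks roughly like this. The token, database ID, and Gmail values are placeholders; the property names match the fields listed above:

```python
from notion_client import Client

notion = Client(auth="secret_xxx")  # Notion integration token (placeholder)
DB_ID = "your-database-id"          # placeholder

def upsert_email(thread_id: str, subject: str, snippet: str, link: str) -> None:
    # Check whether a page for this Gmail thread already exists.
    existing = notion.databases.query(
        database_id=DB_ID,
        filter={"property": "Thread ID", "rich_text": {"equals": thread_id}},
    )
    if existing["results"]:
        return  # already synced -- do nothing
    # Otherwise create a new page: subject as title, snippet as body, link as property.
    notion.pages.create(
        parent={"database_id": DB_ID},
        properties={
            "Title": {"title": [{"text": {"content": subject}}]},
            "Thread ID": {"rich_text": [{"text": {"content": thread_id}}]},
            "Email thread": {"url": link},
        },
        children=[{"object": "block", "type": "paragraph",
                   "paragraph": {"rich_text": [{"text": {"content": snippet}}]}}],
    )
```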
by bangank36
This workflow backs up your Squarespace website header and footer injections to GitHub.

**How It Works**

The Squarespace injections are fetched from the site URL you configure and committed to your GitHub repository (a code sketch follows at the end of this entry).

**Setup Instructions**

First, edit the HTTP Request node's URL to point at your Squarespace site.

Next, to configure GitHub, update the Globals node with the following values:

- repo.owner – Your GitHub username
- repo.name – The name of your GitHub repository storing the workflows
- repo.path – The folder path within the repository where workflows are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and injections are stored in a squarespace-backup/ folder, you would set:

- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → squarespace-backup/

Each site's injections will be added into a separate folder.

**Required Credentials**

- GitHub API – access to your repository

**Who Is This For?**

This template is made for Squarespace users who want to back up their header and footer injections on a schedule or on demand.

Check out my other templates: 👉 My n8n Templates
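The commit step uses GitHub's contents API, which n8n's GitHub node wraps. A minimal Python sketch with placeholder values matching the example above; how you fetch the injections from your Squarespace site depends on your setup:

```python
import base64
import requests

OWNER, REPO = "john-doe", "n8n-backups"
PATH = "squarespace-backup/example-site/header.html"  # per-site folder
TOKEN = "ghp_xxx"  # GitHub personal access token (placeholder)

injection_html = "<script>/* header injection fetched from your site */</script>"

api = f"https://api.github.com/repos/{OWNER}/{REPO}/contents/{PATH}"
headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

# If the file already exists, the contents API requires its blob SHA to update it.
existing = requests.get(api, headers=headers)
payload = {
    "message": "Backup Squarespace injections",
    "content": base64.b64encode(injection_html.encode()).decode(),
}
if existing.status_code == 200:
    payload["sha"] = existing.json()["sha"]

requests.put(api, headers=headers, json=payload)
```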
by Vadym Nahornyi
This workflow automatically transcribes audio files, translates the content between languages, and generates natural-sounding speech from the translated text - all in one seamless process.

**Who's it for**

Content creators, educators, and businesses needing to make their audio content accessible across language barriers. Perfect for translating podcasts, voice messages, lectures, or any audio content while preserving the spoken format.

**How it works**

The workflow receives an audio file through a webhook, transcribes it using OpenAI's Whisper, translates and structures the text with GPT-4, generates new audio in the target language, and stores it in S3 for easy access. The entire process takes seconds and returns both the transcribed/translated text and a URL to the translated audio file (a sketch of this pipeline follows below).

**How to set up**

1. Configure OpenAI credentials - add your OpenAI API key for Whisper transcription and GPT-4 translation
2. Set up AWS S3 - create a bucket with public read permissions for audio storage
3. Update configuration - replace 'YOUR-BUCKET-NAME' with your actual S3 bucket name
4. Activate webhook - deploy and copy your webhook URL for receiving audio files

Send a POST request with:

- Binary audio file (as 'audiofile')
- Languages parameter (e.g., "English, Spanish")

**Requirements**

- OpenAI API account with access to Whisper and GPT-4
- AWS account with S3 bucket configured
- Basic understanding of webhooks and API requests

**How to customize**

- **Add language detection** - automatically detect source language if not specified
- **Customize voice settings** - adjust speech speed, pitch, or select different voices
- **Add file validation** - implement size limits and format checks
- **Enhance security** - add webhook authentication and rate limiting
- **Extend functionality** - add subtitle generation or multiple output formats
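End to end, the pipeline maps to three OpenAI calls plus an S3 upload. A minimal Python sketch assuming the openai and boto3 SDKs; the model names, voice, and target language are illustrative, not the workflow's fixed settings:

```python
import boto3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
BUCKET = "YOUR-BUCKET-NAME"  # same placeholder as in the workflow

# 1. Transcribe the source audio with Whisper.
with open("input.mp3", "rb") as audio:
    text = client.audio.transcriptions.create(model="whisper-1", file=audio).text

# 2. Translate with GPT-4 (languages would come from the webhook parameter).
translated = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "system", "content": "Translate from English to Spanish."},
              {"role": "user", "content": text}],
).choices[0].message.content

# 3. Generate speech in the target language and upload the file to S3.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=translated)
with open("output.mp3", "wb") as f:
    f.write(speech.content)
boto3.client("s3").upload_file("output.mp3", BUCKET, "output.mp3",
                               ExtraArgs={"ContentType": "audio/mpeg"})
```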