by Giannis Kotsakiachidis
🏦 GoCardless ⇄ Maybe Finance — Automatic Multi-Bank Sync & Weekly Overview 💸

Who's it for 🤔
Freelancers, founders, households, and side-hustlers who work with several bank accounts but want one always-up-to-date budget inside Maybe Finance—no more CSV exports or copy-paste.

How it works / What it does ⚙️
1. Schedule Trigger (cron) fires every Monday 📅 (switch to a Manual Trigger while testing)
2. Get access token — fresh 24 h GoCardless token 🔑
3. Fetch transactions for each account: Revolut Pro, Revolut Personal, ABN AMRO (add extra HTTP Request nodes for any other GoCardless-supported banks)
4. Extract booked — keep only settled items 🗂️
5. Set transactions … — map every record to Maybe Finance's schema 📝 (see the mapping sketch below)
6. Merge all arrays into one payload 🔄
7. Create transactions to Maybe — POSTs each item via API 🚀
8. Resend Email — sends you a "Weekly transactions overview" 📧

All done in a single run — your Maybe dashboard is refreshed and you get an inbox alert.

How to set up 🛠️
1. Import the template into n8n (cloud or self-hosted).
2. Create credentials:
   - GoCardless secret_id & secret_key
   - Maybe Finance API key
   - (Optional) Resend API key for email notifications
3. One-time GoCardless config (run the blocks on the left):
   - /token/new/ → obtain token
   - /institutions → find institution IDs
   - /agreements/enduser/ → create agreements
   - /requisitions/ → get the consent URL & finish bank login
   - /requisitions/{id} → copy the GoCardless account_ids
4. Create the same accounts in Maybe Finance, run the HTTP GET request in the purple frame, and copy their account_ids.
5. Open each Set transactions … node and paste the correct Maybe account_id.
6. Adjust the Schedule Trigger (e.g. daily, monthly).
7. Save & activate 🎉

Requirements 📋
- n8n 1.33+
- GoCardless app (secret ID & key, live or sandbox)
- Maybe Finance account & API key
- (Optional) Resend account for email

How to customize ✨
- **Include pending transactions**: change the Item Lists filter.
- **Add more banks**: duplicate the "Get … transactions" → "Extract booked" → "Set transactions" path and plug its output into the Merge node.
- **Different interval**: edit the cron rule in the Schedule Trigger.
- **Disable emails**: just remove or deactivate the Resend node.
- **Send alerts to Slack / Teams**: branch after the Merge node and add a chat node.

Happy budgeting! 💰
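For orientation, here is what the mapping step can look like as a single Code node. This is a minimal sketch: the GoCardless field names follow the Bank Account Data API's booked-transaction shape, while the Maybe Finance fields (account_id, date, amount, currency, name) are assumptions to verify against your Maybe API version.

```javascript
// Hypothetical n8n Code node: map GoCardless "booked" transactions to a
// Maybe Finance payload. GoCardless fields follow the Bank Account Data API;
// the Maybe fields below are assumptions - adjust to your Maybe API docs.
const MAYBE_ACCOUNT_ID = 'paste-your-maybe-account-id-here';

return $input.all().map(({ json: tx }) => ({
  json: {
    account_id: MAYBE_ACCOUNT_ID,
    date: tx.bookingDate,                                 // e.g. "2024-05-13"
    amount: Number(tx.transactionAmount.amount),          // negative = outflow
    currency: tx.transactionAmount.currency,
    name: tx.remittanceInformationUnstructured ?? 'Bank transaction',
  },
}));
```

Each Set transactions … node in the template performs the equivalent mapping for its own bank.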
by n8n Team
This workflow digests mentions of n8n on Reddit and can send them as a single email or Slack summary each week. We use OpenAI to classify whether a specific Reddit post is really about n8n or not, and then summarise it into a one-sentence bullet point.

How it works
1. Get posts from Reddit that might be about n8n;
2. Filter for the most relevant posts (posted in the last 7 days, more than 5 upvotes, and original content — see the filter sketch below);
3. Check if the post is actually about n8n;
4. If it is, categorise it with OpenAI.

Bear in mind: the workflow only considers the first 500 characters of each Reddit post, so if n8n is mentioned after that point, the post won't register as being about n8n.io.

Next steps
- Improve the OpenAI Summary node prompt to return cleaner summaries;
- Extend to more platforms/sources - e.g. it would be really cool to monitor larger Slack communities in this way;
- Do some classification on the type of user to highlight users likely to be in our ICP;
- Separate out a list of data sources (Reddit, Twitter, Slack, Discord, etc.), extract messages from them, and have them go to a sub-workflow for classification and summarisation.
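A rough idea of the relevance filter as a Code node, assuming Reddit's standard listing fields (created_utc, ups, is_original_content); the template implements the same checks with built-in filter logic.

```javascript
// Hypothetical n8n Code node replicating the relevance filter:
// keep posts from the last 7 days, with more than 5 upvotes, flagged as OC.
// Field names follow Reddit's listing API.
const WEEK_AGO = Date.now() / 1000 - 7 * 24 * 60 * 60;

return $input.all().filter(({ json: post }) =>
  post.created_utc > WEEK_AGO &&
  post.ups > 5 &&
  post.is_original_content === true
);
```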
by Nadia Privalikhina
This n8n template offers a free and automated way to convert images from a Google Drive folder into a single PDF document. It uses Google Slides as an intermediary, allowing you to control the final PDF's page size and orientation.

If you're looking for a no-cost solution to batch convert images to PDF and need flexibility over the output dimensions (like A4, landscape, or portrait), this template is for you! It's especially handy for creating photo albums, visual reports, or simple portfolios directly from your Google Drive.

How it works
1. The workflow first copies a Google Slides template you specify. The page setup of this template (e.g., A4 Portrait) dictates your final PDF's dimensions.
2. It then retrieves all images from a designated Google Drive folder and sorts them by creation date (see the filter/sort sketch below).
3. Each image is added to a new slide in the copied presentation.
4. Finally, the entire Google Slides presentation is converted into a PDF and saved back to your Google Drive.

How to use
1. Connect your Google Drive and Google Slides accounts in the relevant nodes.
2. In the "Set Pdf File Name" node, define the name for your output PDF.
3. In the "CopyPdfTemplate" node:
   - Select your Google Slides template file (this sets the PDF page size/orientation).
   - Choose the Google Drive folder containing your source images.
4. Ensure your images are in the specified folder. For best results, images should have an aspect ratio similar to your chosen Slides template.
5. Run the workflow to generate your PDF by clicking 'Test Workflow'.

Requirements
- Google Drive account
- Google Slides account
- Google Slides template stored on your Google Drive

Customising this workflow
- Adjust the "Filter: Only Images" node if you use image formats other than PNG (e.g., image/jpeg for JPGs).
- Modify the image sorting logic in the "Sort by Created Date" node if needed.
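If you prefer a single Code node over the two built-in nodes, the filter-and-sort logic looks roughly like this (mimeType and createdTime are standard Google Drive file fields):

```javascript
// Sketch combining the "Filter: Only Images" and "Sort by Created Date"
// steps. Widen the MIME-type check if you use formats beyond PNG/JPG.
const images = $input.all().filter(({ json: file }) =>
  file.mimeType === 'image/png' || file.mimeType === 'image/jpeg'
);

images.sort(
  (a, b) => new Date(a.json.createdTime) - new Date(b.json.createdTime)
);

return images;
```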
by Lucas Peyrin
How it works
Ever wonder how to make your workflows smarter? How to handle different types of data in different ways? This template is a hands-on tutorial that teaches you the three most fundamental nodes for controlling the flow of your automations: Merge, IF, and Switch.

To make it easy to understand, we use a simple package sorting center analogy:
- **Data Items** are packages on a conveyor belt.
- The Merge Node is where multiple conveyor belts combine into one.
- The IF Node is a simple sorting gate with two paths (e.g., "Fragile" or "Not Fragile").
- The Switch Node is an advanced sorting machine that routes packages to many different destinations.

This workflow takes you on a step-by-step journey through the sorting center:
1. Creating Packages: Three different "packages" (two letters and one parcel) are created using Set nodes.
2. Merging: The first Merge node combines all three packages onto a single conveyor belt so they can be processed together.
3. Simple Sorting: An IF node checks if a package is fragile. If true, it's sent down one path; if false, it's sent down another.
4. Re-Grouping: After being processed separately, another Merge node brings the packages back together. This "Split > Process > Merge" pattern is a critical concept in n8n!
5. Advanced Sorting: A Switch node inspects each package's destination and routes it to the correct output (London, New York, Tokyo, or a Default bin).

By the end, you'll see how all packages have been correctly sorted, and you'll have a solid understanding of how to build intelligent, branching logic in your own workflows. (The same routing logic is sketched in plain JavaScript below.)

Set up steps
Setup time: 0 minutes! This template is a self-contained tutorial and requires zero setup. There are no credentials or external services to configure.

Simply click the "Execute Workflow" button. Follow the flow from left to right, clicking on each node to see its output and reading the detailed sticky notes to understand what's happening at each stage.
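For readers who think in code, here is the sorting-center logic as plain JavaScript. It is only an illustration; the template itself uses IF and Switch nodes, not a Code node.

```javascript
// Packages on the conveyor belt (the three Set nodes).
const packages = [
  { label: 'Letter A', fragile: true,  destination: 'London'  },
  { label: 'Letter B', fragile: false, destination: 'Tokyo'   },
  { label: 'Parcel',   fragile: false, destination: 'Nairobi' },
];

for (const pkg of packages) {
  // IF node: a two-way gate.
  const lane = pkg.fragile ? 'fragile lane' : 'standard lane';

  // Switch node: multi-way routing with a Default output.
  const bins = ['London', 'New York', 'Tokyo'];
  const bin = bins.includes(pkg.destination) ? pkg.destination : 'Default';

  console.log(`${pkg.label}: ${lane} -> ${bin}`);
}
```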
by Jean-Marie Rizkallah
🧩 Jamf Smart Group Membership to Slack

Automatically export Jamf smart group membership to Slack in CSV format. Perfect for IT and security teams who need fast visibility into device grouping—without manually logging into Jamf. Slack automatically parses the CSV, making it viewable directly in the chat—no download required.

✅ Prerequisites
• A Jamf Pro API key with permissions to read smart groups and computer details
• A Slack app or incoming webhook URL with permission to post messages to your desired channel

🔍 How it works
• Manually trigger the flow or connect it to a webhook
• Fetch the list of smart group IDs (set manually in the workflow)
• Loop over each group to get its members
• Use a sub-workflow to fetch detailed info for each device
• Convert the member list to CSV (see the sketch below)
• Post the CSV file to a Slack channel

⚙️ Set up steps
• Takes ~5–10 minutes to configure
• Set your Jamf BaseURL and group IDs in the Set nodes
• Add your Jamf Pro API credentials to the HTTP Request nodes
• Provide your Slack webhook token or channel ID in the Slack node
• Optional: Customize CSV fields or formatting as needed
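The CSV step can be pictured as a small Code node like the one below. The column names are placeholders, not the template's actual fields; match them to whatever your Jamf detail sub-workflow returns.

```javascript
// Hypothetical n8n Code node: turn the device list into CSV text before the
// Slack upload. Quote and escape values so commas in names don't break rows.
const header = ['name', 'serialNumber', 'osVersion']; // placeholder columns

const rows = $input.all().map(({ json: device }) =>
  header
    .map((key) => `"${String(device[key] ?? '').replace(/"/g, '""')}"`)
    .join(',')
);

return [{ json: { csv: [header.join(','), ...rows].join('\n') } }];
```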
by David Ashby
Complete MCP server exposing all AWS Transcribe operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every AWS Transcribe operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example below)
• Native Integration: Uses the official n8n AWS Transcribe tool with full error handling

📋 Available Operations (4 total)
Every possible AWS Transcribe operation is included:

🔧 Transcriptionjob (4 operations)
• Create a transcription job
• Delete a transcription job
• Get a transcription job
• Get many transcription jobs

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native AWS Transcribe API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every AWS Transcribe operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
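As an illustration of the $fromAI() mechanism, a tool parameter inside one of the pre-built nodes looks roughly like this. The keys and descriptions here are examples, not the nodes' exact configuration:

```javascript
// n8n expressions inside a tool node; the agent calling the MCP endpoint
// supplies these values at request time.
//
//   Job name:  {{ $fromAI('TranscriptionJobName', 'Unique job name', 'string') }}
//   Media URI: {{ $fromAI('MediaFileUri', 'S3 URI of the audio file', 'string') }}
```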
by Adrian Bent
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow scrapes job listings on Indeed via Apify, automatically fetches the resulting dataset, extracts information about each listing, filters jobs by relevance, finds a decision maker at the company, and updates a database (Google Sheets) with that info for outreach. All you need to do is run the Apify actor, and the database will update with the processed data.

Benefits:
- Complete Job Search Automation - A webhook monitors the Apify actor, which fires an integration and starts the process
- AI-Powered Filter - Uses ChatGPT to analyze content/context, identify company goals, and filter based on the job description
- Smart Duplicate Prevention - Automatically tracks processed job listings in a database to avoid redundancy
- Multi-Platform Intelligence - Combines Indeed scraping with web research via Tavily to enrich each listing
- Niche Focus - Processes content from multiple niches, currently 6 (hardcoded), which can be changed to fit other niches (just adjust the prompt in the "job filter" node)

How It Works:

Indeed Job Discovery:
- Search Indeed and apply filters for relevant job listings, then copy the results URL into Apify
- Uses Apify's Indeed job scraper to scrape job listings from the URL of interest
- Apify automatically scrapes the information, stores it in a dataset, and initiates an integration

Incoming Data Processing:
- Loops over 500 items (can be changed) with a batch size of 55 items (can be changed) to avoid running into API timeouts
- Multiple filters ensure all fields meet our required metrics (a website must exist and the number of employees must be under 250)
- Duplicate job listings are removed from the incoming batch before processing

Job Analysis & Filter:
- An additional filter removes any job listing from the incoming batch if it already exists in the Google Sheets database (see the duplicate-check sketch at the end of this section)
- All new job listings are then passed to ChatGPT, which uses information about the job post/description to determine whether it is relevant to us
- Each analyzed job gets a new field, "verdict", which is either true or false, and we keep the ones where verdict is true

Enrich & Update Database:
- Uses Tavily to search for a decision maker (it doesn't always find one) and populates a row in Google Sheets with information about the job listing, the company, and a decision maker at that company
- Waits 1 minute and 30 seconds to avoid Google Sheets and ChatGPT API timeouts, then loops back to the next batch to start filtering again until all job listings are processed

Required Google Sheets Database Setup:
Before running this workflow, create a Google Sheets database with these exact column headers:
- jobUrl - Unique identifier for job listings
- title - Position title
- descriptionText - Description of the job listing
- hiringDemand/isHighVolumeHiring - Are they hiring at high volume?
- hiringDemand/isUrgentHire - Are they hiring with high urgency?
- isRemote - Is this job remote?
- jobType/0 - Job type: In person, Remote, Part-time, etc.
- companyCeo/name - CEO name collected from Tavily's search
- icebreaker - Column for holding custom icebreakers for each job listing (not completed in this workflow; I will upload another that does this, called "Personalized IJSFE")
- scrapedCeo - CEO name collected from the Apify scraper
- email - Email listed on the job listing
- companyName - Name of the company that posted the job
- companyDescription - Description of the company that posted the job
- companyLinks/corporateWebsite - Website of the company that posted the job
- companyNumEmployees - Number of employees the company lists
- location/country - Location where the job takes place
- salary/salaryText - Salary on the job listing

Setup Instructions:
1. Create a new Google Sheet with these column headers in the first row
2. Name the sheet whatever you please
3. Connect your Google Sheets OAuth credentials in n8n
4. Update the document ID in the workflow nodes

The merge logic relies on the jobUrl column (the unique identifier) to prevent duplicate processing, so this structure is essential for the workflow to function correctly.

Feel free to reach out for additional help or clarification at my gmail: terflix45@gmail.com and I'll get back to you as soon as I can.

Set Up Steps:

Configure Apify Integration:
- Sign up for an Apify account and obtain an API key
- Get the Indeed job scraper actor and use Apify's integration to send an HTTP request to your n8n webhook (if the test URL doesn't work, use the production URL)
- Use the Apify node with Resource: Dataset, Operation: Get items, and your API key as credentials

Set Up AI Services:
- Add OpenAI API credentials for job filtering
- Add Tavily API credentials for company research
- Set up appropriate rate limiting for cost control

Database Configuration:
- Create the Google Sheets database with the column structure above
- Connect Google Sheets OAuth credentials
- Configure the merge logic for duplicate detection

Content Filtering Setup:
- Customize the AI prompts for your specific niche, requirements, or interests
- Adjust the filtering criteria to fit your needs
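The duplicate check described above can be sketched as a Code node like this. The node name in $('Get Sheet Rows') is a placeholder for whatever node reads your Google Sheet; the jobUrl key matches the unique-identifier column defined above.

```javascript
// Sketch of the duplicate filter: drop any scraped listing whose jobUrl
// already exists in the Google Sheets database.
const existing = new Set(
  $('Get Sheet Rows').all().map(({ json }) => json.jobUrl) // placeholder node name
);

return $input.all().filter(({ json: job }) => !existing.has(job.jobUrl));
```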
by Lucas Walter
Reverse engineer short-form videos from Instagram and TikTok using Gemini AI

Who's it for
Content creators, AI video enthusiasts, and digital marketers who want to analyze successful short-form videos and understand their production techniques. Perfect for anyone looking to reverse-engineer viral content or create detailed prompts for AI video generation tools like Google Veo or Sora.

How it works
This automation takes any Instagram Reel or TikTok URL and performs a forensic analysis of the video content. The workflow downloads the video, converts it to base64, and uses Google's Gemini 2.5 Pro vision API to generate an extremely detailed "Generative Manifest" - a comprehensive prompt that could be used to recreate the video with AI tools. (A minimal sketch of the Gemini call is shown below.)

The analysis includes:
- Visual medium identification (film stock, camera sensor, lens characteristics)
- Color grading and lighting breakdown
- Shot-by-shot deconstruction with precise timing
- Camera movement and framing details
- Subject description and action choreography
- Environmental and atmospheric details

How to set up
1. Configure API credentials:
   - Add your Apify API key for video scraping
   - Set up Google Gemini API authentication
2. Set up Slack integration (optional):
   - Configure Slack OAuth for result sharing
   - Update the channel ID where results should be posted
3. Access the form:
   - The workflow creates a web form where you can input video URLs
   - The form accepts both Instagram Reel and TikTok URLs

Requirements
- **Apify account** with API access for video scraping
- **Google Cloud account** with Gemini API enabled
- **Slack workspace** (optional, for sharing results)
- Videos must be publicly accessible (no private accounts)

How to customize the workflow
- **Modify the analysis prompt:** Edit the "set_base_prompt" node to adjust the depth and focus of the video analysis
- **Add different platforms:** Extend the switch node to handle other video platforms
- **Integrate with other tools:** Replace Slack with email, Discord, or other notification systems
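For context, the core Gemini request the workflow builds looks roughly like this in plain Node.js. The model name and payload shape follow the public generateContent REST API, but verify both against current Google documentation before relying on them.

```javascript
// Minimal sketch, not the workflow's exact HTTP Request node configuration.
// Requires Node 18+ (global fetch) and a GEMINI_API_KEY environment variable.
const videoBase64 = '...'; // the downloaded reel, base64-encoded

const res = await fetch(
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent?key=' +
    process.env.GEMINI_API_KEY,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: 'Produce a shot-by-shot generative manifest of this video.' },
          { inlineData: { mimeType: 'video/mp4', data: videoBase64 } },
        ],
      }],
    }),
  }
);

const data = await res.json();
console.log(data.candidates?.[0]?.content?.parts?.[0]?.text);
```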
by Jimleuk
This n8n template allows you to use AI to generate logos or images which mimic the visual style of other logos or images. The model used to generate the images is Google's Imagen 3.0.

With this template, users will be able to automate design and marketing tasks such as creating variants of existing designs, remixing existing assets to validate different styles, and exploring a range of designs which would previously have been too expensive and time-consuming.

How it works
1. A form trigger is used to capture the source image to reference styles from and a prompt for the target image to generate.
2. The source image is passed to Gemini 2.0 to be analysed, and its visual style and tone are extracted as a detailed description.
3. This visual style description is then combined with the user's initial target image prompt.
4. This final prompt is given to Imagen 3.0 to generate the images (see the API sketch below).
5. A quick webpage is put together with the generated images to present back to the user.
6. If the user provided an email address, a copy of this HTML page will be sent.

How to use
- Ensure the workflow is live to share the form publicly.
- The source image must be accessible to your n8n instance - either a public image on the internet or one within your network.
- For best results, select a source image which has a strong visual identity, as these allow the LLM to describe it better.
- For your prompt, refer to the Imagen prompt guide found here: https://ai.google.dev/gemini-api/docs/image-generation#imagen-prompt-guide

Requirements
- Gemini for the LLM and Imagen model.
- Cloudinary for the image CDN.
- Gmail for email sending.

Customising this workflow
- Feel free to swap any of these out for tools and services you prefer.
- Want to fully automate? Switch the form trigger for a webhook trigger!
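As a rough sketch, the Imagen call behind step 4 looks like this in plain Node.js. The model id (imagen-3.0-generate-002) and the :predict payload follow Google's published Gemini API examples; confirm them against the current docs, since model names change over time.

```javascript
// Minimal sketch, assuming Node 18+ (ESM, global fetch) and GEMINI_API_KEY.
import { writeFileSync } from 'node:fs';

const res = await fetch(
  'https://generativelanguage.googleapis.com/v1beta/models/imagen-3.0-generate-002:predict?key=' +
    process.env.GEMINI_API_KEY,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      instances: [{ prompt: 'A minimalist fox logo, flat vector style' }],
      parameters: { sampleCount: 4 },
    }),
  }
);

const { predictions = [] } = await res.json();
// Each prediction carries a base64-encoded image.
predictions.forEach((p, i) =>
  writeFileSync(`logo-${i}.png`, Buffer.from(p.bytesBase64Encoded, 'base64'))
);
```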
by Audun
A reusable and production-ready n8n workflow that secures public webhooks using Bearer Token authentication and dynamic request validation.

✨ What It Does
- **Verifies Bearer Token** - Compares the Authorization header with a configured secret token.
- **Validates Required Fields** - Checks that all expected fields are present in the incoming request body.
- **Returns Standardized JSON Responses**:
  - 401 Unauthorized if the token is missing or invalid
  - 400 Bad Request if required fields are missing
  - 200 OK with a custom success payload

(The sketch at the end of this section shows the two checks expressed as code.)

👤 Who It's For
- Developers exposing n8n workflows as APIs
- No-code/low-code builders integrating with external forms or tools
- Anyone needing simple authentication and validation on incoming webhooks

💡 Why Use It
- 🔒 Secure: Prevents unauthorized access to your public workflows
- 🧼 Clean: Centralized configuration for token and required fields
- ⚙️ Flexible: Easy to extend and customize for any use case

🛠 Setup Instructions
1. Configure values in the Configuration node:
   - Set your secret token: config.bearerToken = YOUR_TOKEN
   - Define required request fields by key, for example: config.requiredFields.message = true; config.requiredFields.email = true;
     ✅ Only the keys matter – values can be anything.
2. Plug in your business logic: replace the "Add workflow nodes here" placeholder with your own logic.
3. Customize the success response: edit the Create Response node to shape your success payload.

🧪 Use Cases
- Securing public form submissions
- Creating internal API endpoints
- Validating data from external services

📌 Use this as a base for building secure, API-style workflows in n8n.

👋 Hello! I'm Audun / xqus
If my n8n workflows saved you time or sparked ideas, consider sending a little support my way. It helps me keep building cool stuff — and maybe grab a coffee ☕ along the way!
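Condensed into a single Code node, the two checks look roughly like this. The template itself splits them across a Configuration node and IF nodes; the header casing assumes n8n's lower-cased webhook headers.

```javascript
// Sketch of bearer-token verification plus required-field validation.
const config = {
  bearerToken: 'YOUR_TOKEN',
  requiredFields: { message: true, email: true }, // only the keys matter
};

const { headers = {}, body = {} } = $input.first().json;

if (headers.authorization !== `Bearer ${config.bearerToken}`) {
  return [{ json: { status: 401, error: 'Unauthorized' } }];
}

const missing = Object.keys(config.requiredFields).filter((f) => !(f in body));
if (missing.length) {
  return [{ json: { status: 400, error: `Missing fields: ${missing.join(', ')}` } }];
}

return [{ json: { status: 200, message: 'OK' } }];
```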
by Julian Kaiser
How it works
Many users have asked in the support forum about different methods to analyze images and PDF documents with Google Gemini AI in n8n. This workflow answers that question by demonstrating five different approaches:

1. Single image with auto binary passthrough - The simplest approach, using the AI Agent's automatic binary handling
2. Multiple images with predefined prompts - For customized analysis with different instructions per image
3. Native n8n item-by-item processing - For handling multiple items using n8n's standard workflow paradigm
4. PDF analysis via direct API - For document analysis and text extraction (see the request-body sketch below)
5. Image analysis via direct API - For direct control over API parameters

Each method has advantages depending on your specific use case, data volume, and customization needs.

Set up steps
Setup time: ~5-10 minutes

You'll need:
- A Google Gemini API key
- n8n with HTTP Request and AI Agent nodes

Important: For the HTTP Request nodes making direct API calls to Gemini (Methods 3, 4, and 5), you'll need to set up Query Authentication with your Gemini API key. Add a parameter named "key" with your API key value in the Query Auth section of these nodes.

I'll update this if I find better ways. Also, let me know if you know other ways. Eager to learn :)
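As an example of the direct-API methods, here is a sketch of a Code node that builds the request body for PDF analysis (method 4). An HTTP Request node would then POST it to the Gemini generateContent endpoint with the "key" query parameter described above. The payload shape follows the public REST API, and the binary property name 'data' is n8n's default; adjust both to your setup.

```javascript
// Read the incoming PDF binary and wrap it in a generateContent payload.
const pdfBase64 = (
  await this.helpers.getBinaryDataBuffer(0, 'data')
).toString('base64');

return [{
  json: {
    contents: [{
      parts: [
        { text: 'Summarise this document and extract key facts.' },
        { inlineData: { mimeType: 'application/pdf', data: pdfBase64 } },
      ],
    }],
  },
}];
```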
by Lucas Peyrin
How it works
This template is a powerful, reusable utility for managing stateful, long-running processes. It allows a main workflow to be paused indefinitely at "checkpoints" and then be resumed by external, asynchronous events. This pattern is essential for complex automations, and I often call it the "Async Portal" or "Teleport" pattern.

The template consists of two distinct parts:
1. The Main Process (Top Flow): This represents your primary business logic. It starts, performs some actions, and then calls the Portal to register itself before pausing at a Wait node (a "Checkpoint").
2. The Async Portal (Bottom Flow): This is the state-management engine. It uses Workflow Static Data as persistent memory to keep track of all paused processes. When an external event (like a new chat message or an approval webhook) comes in with a specific session_id, the Portal looks up the corresponding paused workflow and "teleports" the new data to it by calling its unique resume_url. (A sketch of this lookup is shown below.)

This architecture allows you to build sophisticated systems where state is managed centrally and your main business logic remains clean and easy to follow.

When to use this pattern
This is an advanced utility, ideal for:
- **Chatbots:** Maintaining conversation history and context across multiple user messages.
- **Human-in-the-Loop Processes:** Pausing a workflow to wait for a manager's approval from an email link or a form submission.
- **Multi-Day Sequences:** Building user onboarding flows or drip campaigns that need to pause for hours or days between steps.
- Any process that needs to wait for an **unpredictable external event** without timing out.

Set up steps
This template is a utility designed to be copied into your own projects. The workflow itself is a live demonstration of how to use it.

1. Copy the Async Portal: In your own project, copy the entire Async Portal (the bottom flow, starting with the A. Entry: Receive Session Info trigger) into your workflow. This will be your state management engine.
2. Register Your Main Process: At the beginning of your main workflow, use an Execute Workflow node to call the Portal's trigger. You must pass it a unique session_id for the process and the resume_url from a Wait node.
3. Add Checkpoints: Place Wait nodes in your main workflow wherever you need the process to pause and wait for an external event.
4. Trigger the Portal: Configure your external triggers (e.g., your chatbot's webhook) to call the Portal's entry trigger, not your main workflow's trigger. You must pass the same session_id so the Portal knows which paused process to resume.

To see it in action, follow the detailed instructions in the "How to Test This Workflow" sticky note on the canvas.
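The heart of the Portal, reduced to a sketch: a Code node that keeps a session_id to resume_url map in Workflow Static Data. Registration calls store the URL; event calls look it up so a following HTTP Request node can POST the payload to it and wake the paused workflow. Note that static data only persists in active, production executions, so test accordingly.

```javascript
// Sketch of the Portal's state management in a single Code node.
const store = $getWorkflowStaticData('global');
store.sessions = store.sessions ?? {};

const { session_id, resume_url, payload } = $input.first().json;

if (resume_url) {
  // Registration from the main process, just before its Wait checkpoint.
  store.sessions[session_id] = resume_url;
  return [{ json: { registered: true, session_id } }];
}

// External event: hand the stored resume_url to a following HTTP Request
// node, which POSTs the payload to it and resumes the paused workflow.
return [{ json: { resumeUrl: store.sessions[session_id], payload } }];
```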