by Avkash Kakdiya
## How it works
This workflow automates LinkedIn community engagement by monitoring post comments, filtering new ones, generating AI-powered replies, and posting responses directly on LinkedIn. It also logs all interactions into Google Sheets for tracking and analytics.

### Step-by-step
**Trigger & Fetch**
- A Schedule Trigger runs the workflow every 10 minutes.
- The workflow fetches the latest comments on a specific LinkedIn post using LinkedIn’s API with token-based authentication.

**Filter for New Comments**
- Retrieves the timestamp of the last processed comment from Google Sheets.
- Filters out previously handled comments, ensuring only fresh interactions are processed.

**AI-Powered Reply Generation**
- Sends each new comment to OpenAI GPT-3.5 Turbo with a structured prompt.
- The AI generates a professional, concise, and engaging LinkedIn-appropriate reply (max 2–3 sentences).

**Post Back to LinkedIn**
- Automatically posts the AI-generated reply under the original comment thread.
- Maintains consistent formatting and actor identity.

**Data Logging**
- Appends the original comment, AI response, and metadata to Google Sheets.
- Enables tracking, review, and future engagement analysis.

### Benefits
- Saves time by automating LinkedIn comment replies.
- Ensures responses are timely, professional, and on-brand.
- Maintains authentic engagement without manual effort.
- Prevents duplicate replies by filtering on timestamps.
- Creates a structured log in Google Sheets for auditing and analytics.
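The timestamp-based duplicate prevention can be sketched as a small filter, as it might run in an n8n Code node. The `createdAt` field name is an assumption for illustration, not the exact LinkedIn API schema:

```javascript
// Keep only comments newer than the last processed timestamp from Google Sheets.
// Assumes each comment carries an ISO-8601 `createdAt` field (illustrative name).
function filterNewComments(comments, lastProcessedAt) {
  const cutoff = new Date(lastProcessedAt).getTime();
  return comments
    .filter((c) => new Date(c.createdAt).getTime() > cutoff)
    .sort((a, b) => new Date(a.createdAt) - new Date(b.createdAt)); // oldest first
}

const comments = [
  { id: "c1", createdAt: "2024-05-01T10:00:00Z", text: "Great post!" },
  { id: "c2", createdAt: "2024-05-01T12:30:00Z", text: "Very helpful." },
];
// Only c2 is newer than the stored cutoff.
const fresh = filterNewComments(comments, "2024-05-01T11:00:00Z");
```

After replying, the newest comment's timestamp would be written back to the sheet so the next run starts from there.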
by Max aka Mosheh
## How it works
- Webhook triggers from a content creation system in Airtable
- Downloads media (images/videos) from Airtable URLs
- Uploads media to Postiz cloud storage
- Schedules or publishes content across multiple platforms via the Postiz API
- Tracks publishing status back to Airtable for reporting

## Set up steps
- Sign up for a Postiz account at https://postiz.com/?ref=max
- Connect your social media channels in the Postiz dashboard
- Get channel IDs and an API key from Postiz settings
- Add the Postiz API key to n8n credentials (Header Auth)
- Update channel IDs in the "Prepare for Publish" node
- Connect Airtable with your content database
- Customize scheduling times per platform as needed
- Full setup details are in the workflow sticky notes
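A rough sketch of what the "Prepare for Publish" step assembles before calling the Postiz API. Every field name and the payload shape here are assumptions for illustration only; check the Postiz API docs for the exact schema:

```javascript
// Hypothetical publish payload built from an Airtable record plus the channel
// IDs copied from Postiz settings. Field names are placeholders, not the
// verified Postiz API schema.
function preparePublishPayload(record, channelIds) {
  return {
    type: record.publishNow ? "now" : "schedule",
    date: record.scheduledAt,      // ISO timestamp stored in Airtable
    content: record.caption,
    channels: channelIds,          // from the Postiz dashboard
    media: record.mediaUrls,       // URLs after upload to Postiz storage
  };
}

const payload = preparePublishPayload(
  {
    publishNow: false,
    scheduledAt: "2024-06-01T09:00:00Z",
    caption: "New drop!",
    mediaUrls: ["https://example.com/a.jpg"],
  },
  ["channel-123"]
);
```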
by Robert Breen
Send VAPI voice requests into n8n with memory and OpenAI for conversational automation

This template shows how to capture voice interactions from VAPI (Voice AI Platform), send them into n8n via a webhook, process them with OpenAI, and maintain context with memory. The result is a conversational AI agent that responds back to VAPI with short, business-focused answers.

## ✅ What this template does
- Listens for POST requests from VAPI containing the session ID and user query
- Extracts the session ID and query for consistent conversation context
- Uses OpenAI (GPT-4.1-mini) to generate conversational replies
- Adds a Memory Buffer Window so each VAPI session maintains history
- Returns results to VAPI in the correct JSON response format

## 👤 Who’s it for
- Developers and consultants building voice-driven assistants
- Businesses wanting to connect VAPI calls into automation workflows
- Anyone who needs a scalable voice → AI → automation pipeline

## ⚙️ How it works
1. A Webhook node catches incoming VAPI requests.
2. A Set node extracts `session_id` and `user_query` from the request body.
3. The OpenAI Agent generates short, conversational replies with your business context.
4. A Memory node keeps conversation history across turns.
5. Respond to Webhook sends results back to VAPI in the required JSON schema.

## 🔧 Setup instructions

### Step 1: Create a Function Tool in VAPI
- In your VAPI dashboard, create a new Function Tool
- Name: `send_to_n8n`
- Description: Send user query and session data to n8n workflow
- Parameters:
  - `session_id` (string, required) – unique session identifier
  - `user_query` (string, required) – the user’s question
- Server URL: https://your-n8n-instance.com/webhook/vapi-endpoint

### Step 2: Configure the Webhook in n8n
- Add a Webhook node
- Set the HTTP method to POST
- Path: `/webhook/vapi-endpoint`
- Save, activate, and copy the webhook URL
- Use this URL in your VAPI Function Tool configuration

### Step 3: Create a VAPI Assistant
- In VAPI, create a new Assistant
- Add the `send_to_n8n` Function Tool
- Configure the assistant to call this tool on user requests
- Test by making a voice query — you should see n8n respond

## 📦 Requirements
- An OpenAI API key stored in n8n credentials
- A VAPI account with access to Function Tools
- A self-hosted or cloud n8n instance with webhook access

## 🎛 Customization
- Update the system prompt in the OpenAI Agent node to reflect your brand’s voice
- Swap GPT-4.1-mini for another OpenAI model if you need longer or cheaper responses
- Extend the workflow by connecting to CRMs, Slack, or databases

## 📬 Contact
Need help customizing this (e.g., filtering by campaign, connecting to CRMs, or formatting reports)?
📧 rbreen@ynteractive.com
🔗 https://www.linkedin.com/in/robert-breen-29429625/
🌐 https://ynteractive.com
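The "Respond to Webhook" payload described above can be sketched as follows. VAPI tool calls generally expect a `results` array echoing each tool call ID; treat the exact field names as an assumption and verify them against the VAPI docs for your version:

```javascript
// Build the JSON body returned to VAPI from the Respond to Webhook node.
// `toolCallId` must echo the id VAPI sent in the request; `result` carries
// the agent's short answer. Field names are assumed, not verified.
function buildVapiResponse(toolCallId, replyText) {
  return {
    results: [
      {
        toolCallId,
        result: replyText,
      },
    ],
  };
}

const response = buildVapiResponse("call_abc123", "We’re open 9–5, Mon–Fri.");
```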
by Oneclick AI Squad
This workflow automatically replies to new comments on your Instagram posts using smart AI. It checks your recent posts, finds unread comments, and skips spam or duplicates. The AI reads the post and comments to create a friendly, natural reply with emojis. It posts the reply instantly and logs everything so you can track engagement. Perfect for busy creators — it stays active 24/7 without you lifting a finger!

## What It Monitors
- **Recent Instagram Posts**: Fetches the latest posts based on your account activity.
- **New Comments**: Detects unreplied comments in real time.
- **Reply Eligibility**: Filters spam, duplicates, and already-replied comments.
- **AI-Generated Responses**: Creates personalized, engaging replies using post context.

## Features
- Runs on a schedule trigger (high traffic: 2–3 min | medium: 5 min | low: 10+ min).
- Fetches recent posts and their comments via the Instagram Graph API.
- **Context-aware AI replies** using the post caption plus the comment content.
- **Spam & duplicate filtering** to avoid unwanted or repeated replies.
- **Tone-friendly, emoji-rich** responses for higher engagement.
- **Logs every reply** with metadata (post ID, comment ID, timestamp).

## Workflow Steps
| Node Name | Description |
|-----------|-------------|
| Schedule Trigger | Triggers the workflow based on traffic level (2–10 min intervals). |
| Get Recent Posts | Fetches recent posts using the Instagram Graph API. Returns the post IDs needed to fetch comments. |
| Split Posts | Splits the batch of posts into individual items for parallel processing. |
| Get Comments | For each post, retrieves comments with content, username, timestamp, and like count. |
| Split Comments | Splits comments into individual items for granular processing. |
| Add Post Context | Combines the comment with the original post caption to generate relevant replies. |
| Check if Replied | Checks whether the AI has already replied to this comment (prevents duplicates). |
| Not Replied Yet? | Routes only unreplied comments forward. |
| Spam Filter | Filters out spam using spam keywords, empty/one-word comments, excessive emojis, and known spam patterns. |
| Should Reply? | Final logic gate: if a reply key exists → skip; if spam → skip; else → proceed. |
| Generate AI Reply | Uses OpenAI (or a compatible LLM). Input: post caption + comment. Tone: friendly & engaging. Max tokens: 150. Temperature: 0.8 (creative). |
| Post Reply | Posts the AI-generated reply via the Instagram API (POST with a `message` parameter; TTL: 30 days). |
| Mark As Replied | Updates internal tracking to prevent duplicate replies. |
| Log Reply | Logs full reply details (post ID, comment ID, username, reply text, timestamp) for analytics and reporting. |

## How to Use
1. Copy the JSON configuration of the workflow.
2. Import it into your n8n workspace.
3. Configure Instagram Graph API credentials (a Business/Creator account is required).
4. Set up the OpenAI API key in the Generate AI Reply node.
5. Activate the workflow.
6. Monitor replies in Instagram and execution logs in n8n.

> The bot will only reply once per comment, skip spam, and use the full post context for natural responses.

## Requirements
- An **n8n** account and a self-hosted or cloud instance.
- An **Instagram Business or Creator account** with Graph API access.
- A **Facebook App** with `pages_read_engagement` and `pages_manage_comments` permissions.
- An **OpenAI API key** (or a compatible LLM endpoint).
- A valid access token with long-lived permissions.

## Customizing this Workflow
- Change the Schedule Trigger interval based on post frequency (e.g., every 1 min for viral accounts).
- Update the Spam Filter keyword list for brand-specific spam patterns.
- Modify the Generate AI Reply prompt to match your brand voice (e.g., formal, humorous, Gen-Z).
- Adjust temperature (0.5 = consistent, 1.0 = creative) and max tokens.
- Replace OpenAI with Claude, Gemini, or a local LLM via an HTTP request.
- Add an approval step (manual review) before posting replies.
- Export logs to Google Sheets, Airtable, or a database for analytics.
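The Spam Filter checks described above can be sketched as a small predicate. The keyword list and emoji threshold are illustrative placeholders to tune per brand:

```javascript
// Minimal spam check: empty or one-word comments, known spam phrases,
// and excessive emojis. Thresholds and keywords are example values.
const SPAM_KEYWORDS = ["follow me", "free followers", "check my bio"];

function isSpam(comment) {
  const text = (comment || "").trim();
  if (text.length === 0) return true;                        // empty comment
  if (text.split(/\s+/).length === 1) return true;           // one-word comment
  const lower = text.toLowerCase();
  if (SPAM_KEYWORDS.some((k) => lower.includes(k))) return true;
  // Count pictographic characters (emoji); more than 5 is treated as spam here.
  const emojiCount = (text.match(/\p{Extended_Pictographic}/gu) || []).length;
  return emojiCount > 5;
}
```

Note that plain substring matching is crude; real spam patterns would likely need word-boundary regexes or a maintained blocklist.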
Explore More AI Workflows: https://www.oneclickitsolution.com/contact-us/
by Camille Roux
Create a reusable “photos to post” queue from your Lightroom Cloud album—ideal for Lightroom-to-Instagram automation with n8n. It discovers new photos, stores clean metadata in a Data Table, and generates AI alt text to power on-brand captions and accessibility. Use it together with “Lightroom Image Webhook (Direct JPEG for Instagram)” and “Instagram Auto-Publisher for Lightroom Photos (AI Captions).”

## What it’s for
Automate Lightroom to Instagram: centralize photo data for scheduled IG posting, and prep AI-ready alt text and metadata for consistent, hands-free publishing.

## Parameters to set
- Lightroom Cloud credentials (client/app + API key)
- Album/collection ID to monitor in Lightroom Cloud
- Data Table name for the posting queue (e.g., Photos)
- AI settings: language/tone for alt text (concise, brand-aware)
- Image analysis URL: the public endpoint of Workflow 2 (Lightroom Image Webhook)

## Works best with
- Workflow 2: Lightroom Image Webhook (Direct JPEG for Instagram)
- Workflow 3: Instagram Auto-Publisher for Lightroom Photos (AI Captions)

## Learn more & stay in the loop
Want the full story (decisions, trade-offs, and tips) behind this Lightroom Cloud → Instagram automation?
👉 Read the write-up on my blog: camilleroux.com

If you enjoy street & urban photography or you’re curious how I use these n8n workflows day-to-day:
👉 Follow my photo account on Instagram: @camillerouxphoto
👉 Follow me on other networks: links available on my site (X, Bluesky, Mastodon, Threads)
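One row in the posting-queue Data Table might look like the sketch below. All field names are assumptions for illustration; match them to the columns you actually create:

```javascript
// Hypothetical shape of a "Photos" Data Table row in the posting queue.
// IDs and URLs are placeholders, not real Lightroom Cloud values.
const queueRow = {
  photoId: "lr-asset-0001",       // Lightroom Cloud asset ID
  albumId: "album-street-2024",   // the monitored album/collection
  jpegUrl: "https://example.com/webhook/lightroom-image?id=lr-asset-0001",
  altText: "Black-and-white street scene, pedestrians crossing in rain.",
  status: "queued",               // e.g., queued → posted
  addedAt: new Date().toISOString(),
};
```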
by Davide
This workflow is a beginner-friendly tutorial demonstrating how to use the Evaluation tool to automatically score the AI’s output against a known correct answer (“ground truth”) stored in a Google Sheet.

## Advantages
- ✅ **Beginner-friendly** – Provides a simple and clear structure for understanding AI evaluation.
- ✅ **Flexible input sources** – Works with both Google Sheets datasets and manual test entries.
- ✅ **Integrated with Google Gemini** – Leverages a powerful AI model for text-based tasks.
- ✅ **Tool usage** – Demonstrates how an AI agent can call external tools (e.g., a calculator) for accurate answers.
- ✅ **Automated evaluation** – Outputs are automatically compared against ground-truth data for factual correctness.
- ✅ **Scalable testing** – Handles multiple dataset rows, making it useful for structured AI model evaluation.
- ✅ **Result tracking** – Saves both answers and correctness scores back to Google Sheets for easy monitoring.

## How it Works
The workflow operates in two distinct modes, determined by the trigger:

**Manual Test Mode**: Triggered by "When clicking 'Execute workflow'". It sends a fixed question ("How much is 8 * 3?") to the AI agent and returns the answer to the user. This mode is for quick, ad-hoc testing.

**Evaluation Mode**: Triggered by "When fetching a dataset row". This mode reads rows of data from a linked Google Sheet. Each row contains an `input` (a question) and an `expected_output` (the correct answer). Each row is processed as follows:
1. The input question is sent to the AI Agent node.
2. The AI Agent, powered by a Google Gemini model and equipped with a Calculator tool, processes the question and generates an answer (`output`).
3. The workflow then checks whether it is in evaluation mode. Instead of just returning the answer, it passes the AI's `actual_output` and the sheet's `expected_output` to another Evaluation node.
4. This node uses a second Google Gemini model as a "judge" to evaluate the factual correctness of the AI's answer against the expected one, generating a Correctness score on a scale from 1 to 5.
5. Finally, both the AI's `actual_output` and the automated correctness score are written back to a new column in the same row of the Google Sheet.

## Set up Steps
To use this workflow, complete the following setup steps:

**Credentials Configuration**
- Set up the Google Sheets OAuth2 API credentials (named "Google Sheets account"). This allows n8n to read from and write to your Google Sheet.
- Set up the Google Gemini (PaLM) API credentials (named "Google Gemini(PaLM) (Eure)"). These provide the AI language model capabilities for both the agent and the evaluator.

**Prepare Your Google Sheet**
- The workflow is pre-configured to use a specific Google Sheet. Clone the provided template sheet (the URL is in the Sticky Note) to your own Google Drive.
- In your cloned sheet, ensure you have at least two columns: one for the question (e.g., `input`) and one for the expected correct answer (e.g., `expected_output`).
- You may need to update the node parameters that reference `$json.input` and `$json.expected_output` to match your column names exactly.

**Update Document IDs**
- After cloning the sheet, get its new Document ID from its URL and update the `documentId` field in all three nodes that reference it ("When fetching a dataset row", "Set output Evaluation", and "Set correctness") so they point to your new sheet instead of the original template.

**Activate the Workflow**
- Once the credentials and sheet are configured, toggle the workflow to Active. You can then trigger a manual test run or set the "When fetching a dataset row" node to poll your sheet automatically and evaluate all rows.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
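The "judge" step above amounts to prompting a second model with the question, the expected answer, and the actual answer, and asking for a 1–5 score. The wording below is an illustrative sketch, not the exact prompt n8n's Evaluation node uses internally:

```javascript
// Build a correctness-judging prompt for a second LLM. The template wording
// is an assumption; n8n's Evaluation node manages this internally.
function buildJudgePrompt(question, actualOutput, expectedOutput) {
  return [
    "You are grading an AI answer for factual correctness.",
    `Question: ${question}`,
    `Expected answer: ${expectedOutput}`,
    `Actual answer: ${actualOutput}`,
    "Reply with a single integer from 1 (wrong) to 5 (fully correct).",
  ].join("\n");
}

const prompt = buildJudgePrompt("How much is 8 * 3?", "24", "24");
```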
by Asfandyar Malik
## Short Description
Automatically scrape new Upwork job listings, save them to Google Sheets, and get real-time WhatsApp alerts when new matching jobs appear. This workflow helps freelancers and agencies track new opportunities instantly — without checking Upwork manually.

## Who’s it for
Freelancers, agencies, and automation enthusiasts who want to monitor Upwork jobs automatically and receive instant notifications for relevant projects.

## How it works
This workflow connects to RapidAPI to fetch new Upwork job listings, filters the relevant ones, stores them in a Google Sheet, and sends WhatsApp alerts for matching results. It includes:
- A **Trigger node** for scheduled or webhook-based execution
- An **HTTP Request node** connected to RapidAPI for scraping
- A **Google Sheets node** to store job data
- A **Filter (IF) node** to select relevant jobs
- A **WhatsApp API node** to send alerts automatically

## How to set up
1. Get an API key from RapidAPI and subscribe to an Upwork scraper API.
2. Create a Google Sheet with columns like Title, Budget, Category, Link, and Description.
3. Connect your Google account to n8n using Google Sheets credentials.
4. Set up your WhatsApp API endpoint (e.g., via the Waha API or WhatsApp Cloud API).
5. Paste your API keys into the HTTP Request nodes and test the workflow.
6. Schedule the workflow to run automatically (e.g., every hour or once daily).

## Requirements
- RapidAPI account (for an Upwork scraper API)
- Google Sheets account
- WhatsApp API access (Waha / Cloud API)
- n8n cloud or self-hosted instance

## How to customize
You can modify this workflow to:
- Track specific job categories or keywords (e.g., “automation”, “AI”, “n8n”)
- Send alerts to Telegram, Discord, or Slack instead of WhatsApp
- Add budget or client-rating filters for higher-quality job leads
- Connect it with Airtable or Notion for advanced job tracking
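The Filter (IF) logic for selecting relevant jobs can be sketched as below. Field names (`title`, `description`, `budget`) are illustrative; map them to whatever your chosen scraper API actually returns:

```javascript
// Keep jobs matching tracked keywords and meeting an optional minimum budget.
// Plain substring matching is used for simplicity; word-boundary regexes
// would avoid false positives (e.g., "ai" inside "email").
const KEYWORDS = ["automation", "ai", "n8n"];

function isRelevant(job, minBudget = 0) {
  const text = `${job.title} ${job.description}`.toLowerCase();
  const matchesKeyword = KEYWORDS.some((k) => text.includes(k));
  const meetsBudget = (job.budget ?? 0) >= minBudget;
  return matchesKeyword && meetsBudget;
}

const job = {
  title: "n8n workflow expert needed",
  description: "Build automations",
  budget: 500,
};
```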
by Automate With Marc
🤖 Telegram Image Editor with Nano Banana

Send an image to your Telegram bot, and this workflow will automatically enhance it with Google’s Nano Banana (via the Wavespeed API), then return the polished version back to the same chat—seamlessly.

👉 Watch step-by-step video tutorials of workflows like these at www.youtube.com/@automatewithmarc

## What it does
1. Listens on Telegram for incoming photo messages
2. Downloads the file sent by the user
3. Uploads it to Google Drive (temporary storage for processing)
4. Sends the image to the Nano Banana API with a real-estate-style cleanup and enhancement prompt
5. Polls until the job is complete (handles async processing)
6. Returns the edited image back to the same Telegram chat

## Perfect for
- Real-estate agents previewing polished property photos instantly
- Social media managers editing on the fly from Telegram
- Anyone who wants a “send → cleaned → returned” image flow without manual edits

## Apps & Services
- Telegram Bot API (trigger + send/receive files)
- Google Drive (temporary file storage)
- Wavespeed / Google Nano Banana (AI-powered image editing)

## Setup
1. Connect your Telegram Bot API token in n8n.
2. Add your Wavespeed API key for Nano Banana.
3. Link your Google Drive account (temporary storage).
4. Deploy the workflow and send a test photo to your Telegram bot.

## Customization
- Adjust the Nano Banana prompt for different styles (e.g., e-commerce cleanup, portrait retouching, color correction).
- Replace Google Drive with another storage service if preferred.
- Add logging to Google Sheets or Airtable to track edits.
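The "poll until the job is complete" step is a standard async-polling loop. The status values and `outputUrl` field below are assumptions for illustration, not the exact Wavespeed response schema:

```javascript
// Poll a job-status function until it reports completion, with an interval
// and attempt cap. `fetchStatus` stands in for the real HTTP status call.
async function pollUntilComplete(fetchStatus, { intervalMs = 3000, maxAttempts = 20 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchStatus();
    if (job.status === "completed") return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Image edit job did not complete in time");
}

// Usage with a stubbed status sequence (replace with a real HTTP call):
const statuses = [
  { status: "processing" },
  { status: "completed", outputUrl: "https://example.com/out.jpg" },
];
pollUntilComplete(() => Promise.resolve(statuses.shift()), { intervalMs: 10 })
  .then((job) => console.log(job.outputUrl));
```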
by Robert Breen
This n8n workflow automatically generates a custom YouTube thumbnail using OpenAI’s DALL·E, based on a YouTube video’s transcript and title. It uses Apify actors to extract the video metadata and transcript, then processes the data into a prompt for DALL·E and creates a high-resolution image for use as a thumbnail.

## ✅ Key Features
- 📥 **Form Trigger**: Accepts a YouTube URL from the user.
- 🧠 **GPT-4o Prompt Creation**: Summarizes the transcript and title into a descriptive DALL·E prompt.
- 🎨 **DALL·E Image Generation**: Produces a clean, minimalist YouTube thumbnail with OpenAI’s image model.
- 🪄 **Automatic Image Resizing**: Resizes the final image to YouTube specs (1280x720).
- 🔍 **Apify Integration**: Uses two Apify actors:
  - Youtube-Transcript-Scraper to extract the transcript
  - youtube-scraper to get video metadata like title, channel, etc.

## 🧰 What You'll Need
- **OpenAI API key**
- **Apify account & API token**
- **YouTube video URL**
- **n8n instance (cloud or self-hosted)**

## 🔧 Step-by-Step Setup

### 1️⃣ Form & Parameter Assignment
- **Node**: Form Trigger
- **How it works**: Collects the YouTube URL via a form embedded in your n8n instance.
- **API required**: None
- **Additional node**: Set — converts the single input URL into the format Apify expects: an array of `{ url }` objects.

### 2️⃣ Apify Actors for Data Extraction
- **Node**: HTTP Request (Query Metadata)
  - URL: https://api.apify.com/v2/acts/streamers~youtube-scraper/run-sync-get-dataset-items
  - Payload: JSON with a `startUrls` array and filtering options like `maxResults`, `isHD`, etc.
- **Node**: HTTP Request (Query Transcript)
  - URL: https://api.apify.com/v2/acts/topaz_sharingan~Youtube-Transcript-Scraper/run-sync-get-dataset-items
  - Payload: `startUrls` array
- **API required**: Apify API token (via HTTP Query Auth)
- **Notes**: You must have an Apify account and actor credits to use these actors.

### 3️⃣ OpenAI GPT-4o & DALL·E Generation
- **Node**: OpenAI (Prompt Creator) — uses the transcript and title to generate a DALL·E-compatible visual prompt.
- **Node**: OpenAI (Image Generator)
  - Resource: image
  - Model: DALL·E (default with GPT-4o key)
- **API required**: OpenAI API key
- **Prompt strategy**:
  > Create a minimalist YouTube thumbnail in an illustration style. The background should be a very simple, uncluttered setting with soft, ambient lighting that subtly reflects the essence of the transcript. The overall mood should be professional and non-cluttered, ensuring that the text overlay stands out without distraction. Do not include any text.

### 4️⃣ Resize for YouTube Format
- **Node**: Edit Image
- **Purpose**: Resize the final image to 1280x720 with `ignoreAspectRatio` set to true.
- **No API required** — this runs entirely in n8n.

## 👤 Created By
Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn

🏷️ Tags: openai, dalle, youtube, thumbnail generator, apify, ai automation, image generation, illustration, prompt engineering, gpt-4o
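The Set-node transformation in step 1 can be sketched as a tiny mapping: the single form URL becomes the `startUrls` array of `{ url }` objects that the Apify actors expect. The `maxResults` option here is one example of the filtering options mentioned above:

```javascript
// Convert the single form-submitted URL into the Apify request payload shape.
function buildApifyPayload(videoUrl, maxResults = 1) {
  return {
    startUrls: [{ url: videoUrl }],
    maxResults,
  };
}

const payload = buildApifyPayload("https://www.youtube.com/watch?v=dQw4w9WgXcQ");
```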
by M Sayed
Automate Egyptian gold and currency price monitoring with beautiful Telegram notifications! 🚀

This workflow scrapes live gold prices and official exchange rates from the Egyptian market every hour and sends professionally formatted updates to your Telegram channel or group.

## ✨ Features
- 🕐 **Smart scheduling**: Runs hourly between 10 AM and 10 PM (Cairo timezone)
- 🥇 **Gold prices**: Tracks different gold types with buy/sell rates
- 💱 **Currency rates**: Official exchange rates (USD, EUR, SAR, AED, GBP, etc.)
- 🎨 **Beautiful formatting**: Emoji-rich messages with proper Arabic text formatting
- ⚡ **Reliable**: Built-in retry mechanisms and error handling
- 🇪🇬 **Localized**: Tailored specifically for the Egyptian market
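The 10 AM–10 PM Cairo-time window can be enforced with a small guard, regardless of the server's own timezone. This is a sketch of the idea, not the exact node configuration the workflow uses:

```javascript
// Return true only between 10:00 and 21:59 in Africa/Cairo, using Intl so
// the check is independent of the host timezone.
function isWithinCairoBusinessHours(date = new Date()) {
  const hour = Number(
    new Intl.DateTimeFormat("en-GB", {
      timeZone: "Africa/Cairo",
      hour: "2-digit",
      hourCycle: "h23",
    }).format(date)
  );
  return hour >= 10 && hour < 22;
}
```

In n8n the same effect is usually achieved by setting the Schedule Trigger's hours and the workflow timezone, but a guard like this also works after a plain hourly trigger.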
by Easy8.ai
Automated Helpdesk Ticket Alerts to Microsoft Teams from Easy Redmine

## Intro/Overview
This workflow automatically posts a Microsoft Teams message whenever a new helpdesk ticket is created in Easy Redmine. It’s perfect for IT teams who want real-time visibility into new issues without constantly checking ticket queues or inboxes. By integrating Easy Redmine with Teams, this setup ensures tickets are discussed and resolved faster, improving both response and resolution times.

## How it works
1. **Catch Easy Webhook – New Issue Created**: Triggers whenever Easy Redmine sends a webhook for a newly created ticket, using the webhook URL generated from Easy Redmine’s webhook settings.
2. **Get a new ticket by ID**: Fetches full ticket details (subject, priority, description) via the Easy Redmine API, using the ticket ID from the webhook payload.
3. **Pick Description & Create URL to Issue**: Extracts the ticket description and builds a direct link to the ticket in Easy Redmine for quick access.
4. **AI Agent – Description Processing**: Uses an AI model to summarize the ticket and suggest possible solutions based on the issue description.
5. **MS Teams Message to Support Channel**: Formats and sends the ticket details, priority, summary, and issue link to a designated Microsoft Teams channel, using a Teams message layout built for clarity and quick scanning.

## How to Use
1. Import the workflow into your n8n instance.
2. Set up credentials:
   - Easy Redmine API credentials with permission to read helpdesk tickets.
   - Microsoft Teams credentials for posting messages to a channel.
3. Configure the Easy Redmine webhook to trigger on ticket-creation events. Insert the n8n webhook URL into your active Easy Redmine webhook, which can be created at https://easy-redmine-application.com/easy_web_hooks
4. Adjust node settings:
   - In the webhook node, use your Easy Redmine webhook URL.
   - In the “Get a new ticket by ID” node, insert your API endpoint and authentication.
   - In the Teams message node, select the correct Teams channel.
5. Adjust the timezone or scheduling if your team works across different time zones.
6. Test the workflow by creating a sample ticket in Easy Redmine and confirming that it posts to Teams.

## Example Use Cases
- **IT Helpdesk**: Notify the support team immediately when new issues are logged.
- **Customer Support Teams**: Keep the entire team updated on urgent tickets in real time.
- **Project Teams**: Ensure critical bug reports are shared instantly with the right stakeholders.

## Requirements
- Easy Redmine application
- Easy Redmine technical user for API calls with “read” permissions on tickets
- Microsoft Teams technical user for API calls with “post message” permissions
- Active n8n instance

## Customization
- Change the AI prompt to adjust how summaries and solutions are generated.
- Modify the Teams message format (e.g., bold priority, add emojis for urgency).
- Add filters so only high-priority or specific-project tickets trigger notifications.
- Send alerts to multiple Teams channels based on ticket type or project.

## Workflow Improvement Suggestions
- Rename nodes for clarity (e.g., “Fetch Ticket Details” instead of “get-one-issue”).
- Ensure no private ticket data is exposed beyond the intended recipients.
- Add error handling for failed API calls to avoid missing ticket alerts.
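The "Pick Description & Create URL to Issue" step above boils down to pulling the issue ID out of the webhook payload and composing a link. The payload shape and base URL below are assumptions; Redmine-style applications typically expose issues at `/issues/:id`:

```javascript
// Extract the issue id/description from the webhook body and build a direct
// link to the ticket. Adjust the base URL to your Easy Redmine instance.
function buildIssueLink(webhookBody, baseUrl = "https://easy-redmine-application.com") {
  const issueId = webhookBody.issue.id;
  return {
    issueId,
    description: webhookBody.issue.description,
    issueUrl: `${baseUrl}/issues/${issueId}`,
  };
}

const link = buildIssueLink({
  issue: { id: 4711, description: "Printer on floor 3 is offline." },
});
```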
by Trung Tran
AI-Powered AWS S3 Manager with Audit Logging in n8n (Slack/ChatOps Workflow)

> This n8n workflow lets users manage AWS S3 buckets and files using natural language via Slack or other chat platforms. Equipped with an OpenAI-powered agent and integrated audit logging to Google Sheets, it supports operations like listing buckets, copying/deleting files, and managing folders, and it automatically records every action for compliance and traceability.

## 👥 Who’s it for
This workflow is built for:
- DevOps engineers who want to manage AWS S3 using natural chat commands.
- Technical support teams interacting with AWS via Slack, Telegram, etc.
- Automation engineers building ChatOps tools.
- Organizations that require audit logs for every cloud operation.

Users don’t need AWS Console or CLI access — just send a message like “Copy file from dev to prod”.

## ⚙️ How it works / What it does
This workflow turns natural chat input into automated AWS S3 actions using an OpenAI-powered AI Agent in n8n.

### 🔁 Workflow Overview
1. **Trigger**: A user sends a message in Slack, Telegram, etc.
2. **AI Agent**: Interprets the message and calls one of six S3 tools:
   - ListBuckets
   - ListObjects
   - CopyObject
   - DeleteObject
   - ListFolders
   - CreateFolder
3. **S3 Action**: Performs the requested AWS S3 operation.
4. **Audit Log**: Logs the tool call to Google Sheets using AddAuditLog, including the timestamp, tool used, parameters, prompt, reasoning, and user info.

## 🛠️ How to set up
1. **Webhook Trigger**: Slack, Telegram, or a custom chat platform connects to n8n.
2. **OpenAI Agent**
   - Model: gpt-4 or gpt-3.5-turbo
   - Memory: Simple Memory node
   - Prompt: Instructs the agent to always follow tool calls with an AddAuditLog call.
3. **AWS S3 nodes**: Configure each tool with AWS credentials:
   - getAll: bucket
   - getAll: file
   - copy: file
   - delete: file
   - getAll: folder
   - create: folder
4. **Google Sheets node**
   - Sheet: AWS S3 Audit Logs
   - Operation: Append or Update Row
   - Columns (must match input keys): timestamp, tool, status, chat_prompt, parameters, user_name, tool_call_reasoning
5. **Agent tool definitions**: Include AddAuditLog as a seventh tool. The agent calls it immediately after every S3 action (except when logging itself).

## ✅ Requirements
- [ ] n8n instance with the AI Agent feature
- [ ] OpenAI API key
- [ ] AWS IAM user with S3 access
- [ ] Google Sheet with the required columns
- [ ] Chat integration (Slack, Telegram, etc.)

## 🧩 How to customize the workflow
| Feature | Customization Tip |
|---------|-------------------|
| 🌎 Multi-region S3 | Let users include the region in the message or agent memory |
| 🛡️ Restricted actions | Use memory/user ID to limit delete/copy actions |
| 📁 Folder filtering | Extend ListObjects with prefix/suffix filters |
| 📤 Upload file | Add PutObject with pre-signed URL support |
| 🧾 Extra logging | Add IP, latency, and error traces to the audit logs |
| 📊 Reporting | Link the Google Sheet to Looker Studio for audit dashboards |
| 🚨 Security alerts | Notify via Slack/Email when DeleteObject is triggered |
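An AddAuditLog row matching the Google Sheets columns listed above (timestamp, tool, status, chat_prompt, parameters, user_name, tool_call_reasoning) can be sketched as follows; the example values are illustrative:

```javascript
// Assemble one audit-log row whose keys match the sheet's column names.
// Parameters are serialized to JSON so arbitrary tool arguments fit one cell.
function buildAuditLogRow({ tool, status, chatPrompt, parameters, userName, reasoning }) {
  return {
    timestamp: new Date().toISOString(),
    tool,
    status,
    chat_prompt: chatPrompt,
    parameters: JSON.stringify(parameters),
    user_name: userName,
    tool_call_reasoning: reasoning,
  };
}

const row = buildAuditLogRow({
  tool: "CopyObject",
  status: "success",
  chatPrompt: "Copy report.pdf from dev to prod",
  parameters: { sourceBucket: "dev", destBucket: "prod", key: "report.pdf" },
  userName: "alice",
  reasoning: "User asked to copy a file between buckets",
});
```

Keeping the object keys identical to the sheet headers means the Google Sheets "Append or Update Row" operation can map them automatically.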