by Max aka Mosheh
How it works
- Webhook triggers from the content creation system in Airtable
- Downloads media (images/videos) from Airtable URLs
- Uploads media to Postiz cloud storage
- Schedules or publishes content across multiple platforms via the Postiz API
- Tracks publishing status back to Airtable for reporting

Set up steps
- Sign up for a Postiz account at https://postiz.com/?ref=max
- Connect your social media channels in the Postiz dashboard
- Get channel IDs and an API key from Postiz settings
- Add the Postiz API key to n8n credentials (Header Auth)
- Update channel IDs in the "Prepare for Publish" node
- Connect Airtable with your content database
- Customize scheduling times per platform as needed
- Full setup details are in the workflow sticky notes
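As a minimal sketch, the "Prepare for Publish" step can be thought of as assembling one scheduling payload per channel before sending it to the Postiz API. The field names and endpoint below are assumptions for illustration, not the documented Postiz schema — verify them against the Postiz API reference.

```python
from datetime import datetime, timezone

# Hypothetical sketch of the "Prepare for Publish" logic. The payload keys
# ("type", "date", "content", "media", "channels") and the base URL are
# assumptions -- check the Postiz API docs for the real schema.
POSTIZ_BASE = "https://api.postiz.com"  # assumed base URL

def build_schedule_payload(channel_ids, caption, media_urls, publish_at=None):
    """Assemble a publish/schedule payload for the connected channels."""
    return {
        "type": "schedule" if publish_at else "now",
        "date": publish_at or datetime.now(timezone.utc).isoformat(),
        "content": caption,
        "media": media_urls,          # URLs already uploaded to Postiz storage
        "channels": channel_ids,      # channel IDs from Postiz settings
    }

payload = build_schedule_payload(
    ["chan_ig_123", "chan_li_456"],   # hypothetical channel IDs
    "New listing just dropped!",
    ["https://example.com/photo.jpg"],
)
```

The actual HTTP call would attach the Postiz API key as a header (the Header Auth credential mentioned in the setup steps).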
by Robert Breen
Send VAPI voice requests into n8n with memory and OpenAI for conversational automation

This template shows how to capture voice interactions from VAPI (Voice AI Platform), send them into n8n via a webhook, process them with OpenAI, and maintain context with memory. The result is a conversational AI agent that responds back to VAPI with short, business-focused answers.

✅ What this template does
- Listens for POST requests from VAPI containing the session ID and user query
- Extracts the session ID and query for consistent conversation context
- Uses OpenAI (GPT-4.1-mini) to generate conversational replies
- Adds a Memory Buffer Window so each VAPI session maintains history
- Returns results to VAPI in the correct JSON response format

👤 Who’s it for
- Developers and consultants building voice-driven assistants
- Businesses wanting to connect VAPI calls into automation workflows
- Anyone who needs a scalable voice → AI → automation pipeline

⚙️ How it works
1. Webhook node catches incoming VAPI requests
2. Set node extracts session_id and user_query from the request body
3. OpenAI Agent generates short, conversational replies with your business context
4. Memory node keeps conversation history across turns
5. Respond to Webhook sends results back to VAPI in the required JSON schema

🔧 Setup instructions

Step 1: Create a Function Tool in VAPI
- In your VAPI dashboard, create a new Function Tool
- Name: send_to_n8n
- Description: Send user query and session data to n8n workflow
- Parameters:
  - session_id (string, required) – Unique session identifier
  - user_query (string, required) – The user’s question
- Server URL: https://your-n8n-instance.com/webhook/vapi-endpoint

Step 2: Configure the Webhook in n8n
- Add a Webhook node
- Set the HTTP method to POST
- Path: /webhook/vapi-endpoint
- Save, activate, and copy the webhook URL
- Use this URL in your VAPI Function Tool configuration

Step 3: Create a VAPI Assistant
- In VAPI, create a new Assistant
- Add the send_to_n8n Function Tool
- Configure the assistant to call this tool on user requests
- Test by making a voice query — you should see n8n respond

📦 Requirements
- An OpenAI API key stored in n8n credentials
- A VAPI account with access to Function Tools
- A self-hosted or cloud n8n instance with webhook access

🎛 Customization
- Update the system prompt in the OpenAI Agent node to reflect your brand’s voice
- Swap GPT-4.1-mini for another OpenAI model if you need longer or cheaper responses
- Extend the workflow by connecting to CRMs, Slack, or databases

📬 Contact
Need help customizing this (e.g., filtering by campaign, connecting to CRMs, or formatting reports)?
📧 rbreen@ynteractive.com
🔗 https://www.linkedin.com/in/robert-breen-29429625/
🌐 https://ynteractive.com
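The Set-node extraction and the "required JSON schema" response can be sketched in plain code. The envelope field names (`message.toolCalls`, `results`/`toolCallId`) follow VAPI's common tool-call format but should be treated as assumptions and checked against the VAPI docs for your version.

```python
import json

# Sketch of the Set node + Respond to Webhook logic for a VAPI tool call.
# Field paths below are assumptions based on VAPI's typical envelope.
def extract_call(body: dict):
    call = body["message"]["toolCalls"][0]
    args = call["function"]["arguments"]
    if isinstance(args, str):          # VAPI may send arguments as a JSON string
        args = json.loads(args)
    return call["id"], args["session_id"], args["user_query"]

def build_response(tool_call_id: str, reply: str) -> dict:
    # The shape VAPI expects back from the webhook (assumed).
    return {"results": [{"toolCallId": tool_call_id, "result": reply}]}

# Simulated incoming request body:
body = {"message": {"toolCalls": [{"id": "tc_1", "function": {
    "arguments": json.dumps({"session_id": "s-42", "user_query": "Store hours?"})}}]}}
call_id, session_id, user_query = extract_call(body)
response = build_response(call_id, "We are open 9-5, Monday to Friday.")
```

In the workflow, `session_id` is what keys the Memory Buffer Window so each caller keeps their own history.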
by Maxim Osipovs
This n8n workflow template implements a dual-path architecture for AI customer support, based on the principles outlined in the research paper "A Locally Executable AI System for Improving Preoperative Patient Communication: A Multi-Domain Clinical Evaluation" (Sato et al.). The system, named LENOHA (Low Energy, No Hallucination, Leave No One Behind Architecture), uses a high-precision classifier to differentiate between high-stakes queries and casual conversation. Queries matching a known FAQ are answered with a pre-approved, verbatim response, structurally eliminating hallucination risk. All other queries are routed to a standard generative LLM for conversational flexibility.

This template provides a practical blueprint for building safer, more reliable, and cost-efficient AI agents, particularly in regulated or high-stakes domains where factual accuracy is critical.

What This Template Does (Step-by-Step)
1. Loads an expert-curated FAQ from Google Sheets and creates a searchable vector store from the questions during a one-time setup flow.
2. Receives incoming user queries in real time via a chat trigger.
3. Classifies user intent by converting the query to an embedding and searching the vector store for the most semantically similar FAQ question.
4. Routes the query down one of two paths based on a configurable similarity score threshold.
5. Responds with a verbatim, pre-approved answer if a match is found (safe path), or generates a conversational reply via an LLM if no match is found (casual path).

Important Note for Production Use
This template uses an in-memory Simple Vector Store for demonstration purposes. For a production application, this should be replaced with a persistent vector database (e.g., Pinecone, Chroma, Weaviate, Supabase) to store your embeddings permanently.

Required Integrations:
- Google Sheets (for the FAQ knowledge base)
- Hugging Face API (for creating embeddings)
- An LLM provider (e.g., OpenAI, Anthropic, Mistral)
- (Recommended) A persistent vector store integration

Best For:
🏦 Organizations in regulated industries (finance, healthcare) requiring high accuracy.
💰 Applications where reducing LLM operational costs is a priority.
⚙️ Technical support agents that must provide precise, unchanging information.
🔒 Systems where auditability and deterministic responses for known issues are required.

Key Benefits:
✅ Structurally eliminates hallucination risk for known topics.
✅ Reduces reliance on expensive generative models for common queries.
✅ Ensures deterministic, accurate, and consistent answers for your FAQ.
✅ Provides high-speed classification via vector search.
✅ Implements a research-backed architecture for building safer AI systems.
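The dual-path routing at the heart of LENOHA reduces to one comparison: cosine similarity between the query embedding and the closest FAQ embedding, checked against the configurable threshold. A minimal sketch with toy vectors (real embeddings would come from the Hugging Face API):

```python
import math

# Toy illustration of the similarity-threshold router. The threshold value
# and the 3-dimensional vectors are illustrative only.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def route(query_vec, faq, threshold=0.85):
    """faq: list of (question_embedding, verbatim_answer) pairs.
    Returns ("safe", answer) or ("casual", None)."""
    best_score, best_answer = max(
        ((cosine(query_vec, vec), ans) for vec, ans in faq),
        key=lambda pair: pair[0],
    )
    if best_score >= threshold:
        return "safe", best_answer    # pre-approved, verbatim answer
    return "casual", None             # hand off to the generative LLM

faq = [([1.0, 0.0, 0.1], "Fast for 8 hours before surgery.")]
path, answer = route([0.98, 0.05, 0.12], faq)   # near-duplicate of the FAQ question
```

Because the safe path returns the stored answer verbatim, no generation happens for matched queries — which is exactly how the architecture structurally rules out hallucination on known topics.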
by Davide
This workflow is a beginner-friendly tutorial demonstrating how to use the Evaluation tool to automatically score the AI’s output against a known correct answer (“ground truth”) stored in a Google Sheet.

Advantages
✅ Beginner-friendly – Provides a simple and clear structure to understand AI evaluation.
✅ Flexible input sources – Works with both Google Sheets datasets and manual test entries.
✅ Integrated with Google Gemini – Leverages a powerful AI model for text-based tasks.
✅ Tool usage – Demonstrates how an AI agent can call external tools (e.g., calculator) for accurate answers.
✅ Automated evaluation – Outputs are automatically compared against ground truth data for factual correctness.
✅ Scalable testing – Can handle multiple dataset rows, making it useful for structured AI model evaluation.
✅ Result tracking – Saves both answers and correctness scores back to Google Sheets for easy monitoring.

How it Works
The workflow operates in two distinct modes, determined by the trigger:

Manual Test Mode: Triggered by "When clicking 'Execute workflow'". It sends a fixed question ("How much is 8 * 3?") to the AI agent and returns the answer to the user. This mode is for quick, ad-hoc testing.

Evaluation Mode: Triggered by "When fetching a dataset row". This mode reads rows of data from a linked Google Sheet. Each row contains an input (a question) and an expected_output (the correct answer). It processes each row as follows:
1. The input question is sent to the AI Agent node.
2. The AI Agent, powered by a Google Gemini model and equipped with a Calculator tool, processes the question and generates an answer (output).
3. The workflow then checks whether it is in evaluation mode. Instead of just returning the answer, it passes the AI's actual_output and the sheet's expected_output to another Evaluation node.
4. This node uses a second Google Gemini model as a "judge" to evaluate the factual correctness of the AI's answer compared to the expected one, generating a Correctness score on a scale from 1 to 5.
5. Finally, both the AI's actual_output and the automated correctness score are written back to a new column in the same row of the Google Sheet.

Set up Steps
To use this workflow, you need to complete the following setup steps:

Credentials Configuration:
- Set up the Google Sheets OAuth2 API credentials (named "Google Sheets account"). This allows n8n to read from and write to your Google Sheet.
- Set up the Google Gemini (PaLM) API credentials (named "Google Gemini(PaLM) (Eure)"). This provides the AI language model capabilities for both the agent and the evaluator.

Prepare Your Google Sheet:
- The workflow is pre-configured to use a specific Google Sheet. You must clone the provided template sheet (the URL is in the Sticky Note) to your own Google Drive.
- In your cloned sheet, ensure you have at least two columns: one for the input/question (e.g., input) and one for the expected correct answer (e.g., expected_output).
- You may need to update the node parameters that reference $json.input and $json.expected_output to match your column names exactly.

Update Document IDs:
- After cloning the sheet, get its new Document ID from its URL and update the documentId field in all three Evaluation nodes ("When fetching a dataset row", "Set output Evaluation", and "Set correctness") to point to your new sheet instead of the original template.

Activate the Workflow:
- Once the credentials and sheet are configured, toggle the workflow to Active. You can then trigger a manual test run or set the "When fetching a dataset row" node to poll your sheet automatically to evaluate all rows.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
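The "judge" step boils down to two pieces: a prompt that shows the judge model the question, expected answer, and actual answer, and a parser that pulls the 1–5 score out of the judge's free-text reply. The prompt wording below is illustrative, not the exact text used by the n8n Evaluation node:

```python
import re

# Hypothetical LLM-as-judge helpers. judge_prompt() would be sent to the
# second Gemini model; parse_score() reads the 1-5 Correctness score from
# its reply.
def judge_prompt(question: str, expected: str, actual: str) -> str:
    return (
        f"Question: {question}\n"
        f"Expected answer: {expected}\n"
        f"Actual answer: {actual}\n"
        "Rate the factual correctness of the actual answer on a scale of 1 to 5. "
        "Reply with the number first."
    )

def parse_score(reply: str) -> int:
    match = re.search(r"[1-5]", reply)   # first digit in range wins
    if not match:
        raise ValueError(f"no score found in judge reply: {reply!r}")
    return int(match.group())

score = parse_score("5 - the answer 24 exactly matches the expected output.")
```

The parsed integer is what gets written back into the correctness column of the sheet alongside `actual_output`.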
by Oneclick AI Squad
This workflow automatically replies to new comments on your Instagram posts using smart AI. It checks your recent posts, finds unread comments, and skips spam or duplicates. The AI reads the post and comments to create a friendly, natural reply with emojis. It posts the reply instantly and logs everything so you can track engagement. Perfect for busy creators — stays active 24/7 without you lifting a finger!

What It Monitors
- **Recent Instagram Posts**: Fetches the latest posts based on your account activity.
- **New Comments**: Detects unreplied comments in real time.
- **Reply Eligibility**: Filters spam, duplicates, and already-replied comments.
- **AI-Generated Responses**: Creates personalized, engaging replies using post context.

Features
- Runs on a schedule trigger (High traffic: 2–3 min | Medium: 5 min | Low: 10+ min).
- Fetches recent posts and their comments via the Instagram Graph API.
- **Context-aware AI replies** using post caption + comment content.
- **Spam & duplicate filtering** to avoid unwanted or repeated replies.
- **Tone-friendly & emoji-rich** responses for higher engagement.
- **Logs every reply** with metadata (post ID, comment ID, timestamp).

Workflow Steps

| Node Name | Description |
|---------|-----------|
| Schedule Trigger | Triggers the workflow based on traffic level (2–10 min intervals). |
| Get Recent Posts | Fetches recent posts using the Instagram Graph API. Returns post IDs needed to fetch comments. |
| Split Posts | Splits the batch of posts into individual items for parallel processing. |
| Get Comments | For each post, retrieves comments with content, username, timestamp, and like count. |
| Split Comments | Splits comments into individual items for granular processing. |
| Add Post Context | Combines the comment with the original post caption to generate relevant replies. |
| Check if Replied | Checks if the AI has already replied to this comment (prevents duplicate replies). |
| Not Replied Yet? | Routes only unreplied comments forward. |
| Spam Filter | Filters out spam using: spam keywords, empty/one-word comments, excessive emojis, known spam patterns. |
| Should Reply? | Final logic gate: if a reply key exists → skip; if spam → skip; else → proceed. |
| Generate AI Reply | Uses OpenAI (or a compatible LLM). Input: post caption + comment. Tone: friendly & engaging. Max tokens: 150. Temperature: 0.8 (creative). |
| Post Reply | Posts the AI-generated reply via the Instagram API (method: POST, body: message parameter, TTL: 30 days). |
| Mark As Replied | Updates internal tracking to prevent duplicate replies. |
| Log Reply | Logs full reply details: post ID, comment ID, username, reply text, timestamp. Used for analytics & reporting. |

How to Use
1. Copy the JSON configuration of the workflow.
2. Import it into your n8n workspace.
3. Configure Instagram Graph API credentials (Business/Creator account required).
4. Set up the OpenAI API key in the Generate AI Reply node.
5. Activate the workflow.
6. Monitor replies in Instagram and execution logs in n8n.

> The bot will only reply once per comment, skip spam, and use full post context for natural responses.

Requirements
- **n8n** account and self-hosted or cloud instance.
- **Instagram Business or Creator Account** with Graph API access.
- **Facebook App** with pages_read_engagement and pages_manage_comments permissions.
- **OpenAI API key** (or compatible LLM endpoint).
- Valid access token with long-lived permissions.

Customizing this Workflow
- Change the Schedule Trigger interval based on post frequency (e.g., every 1 min for viral accounts).
- Update the Spam Filter keyword list for brand-specific spam patterns.
- Modify the Generate AI Reply prompt to match your brand voice (e.g., formal, humorous, Gen-Z).
- Adjust Temperature (0.5 = consistent, 1.0 = creative) and Max Tokens.
- Replace OpenAI with Claude, Gemini, or a local LLM via HTTP request.
- Add an approval step (manual review) before posting replies.
- Export logs to Google Sheets, Airtable, or a database for analytics.
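The Spam Filter node's three checks — keyword match, empty/one-word comments, excessive emojis — can be sketched as a single predicate. The keyword list and emoji threshold here are placeholders to be replaced with your brand-specific values:

```python
# Sketch of the Spam Filter logic. SPAM_KEYWORDS and EMOJI_LIMIT are
# illustrative defaults -- tune them per the "Customizing" notes above.
SPAM_KEYWORDS = {"follow me", "check my bio", "dm me", "free followers"}
EMOJI_LIMIT = 5

def count_emojis(text: str) -> int:
    # Rough heuristic: count codepoints in the main emoji blocks.
    return sum(1 for ch in text if 0x1F300 <= ord(ch) <= 0x1FAFF)

def is_spam(comment: str) -> bool:
    text = comment.strip().lower()
    if len(text.split()) <= 1:                      # empty or one-word comment
        return True
    if any(kw in text for kw in SPAM_KEYWORDS):     # known spam phrases
        return True
    if count_emojis(comment) > EMOJI_LIMIT:         # excessive emojis
        return True
    return False
```

Comments that pass this filter (and the already-replied check) continue on to the Generate AI Reply node.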
Explore More AI Workflows: https://www.oneclickitsolution.com/contact-us/
by Automate With Marc
🤖 Telegram Image Editor with Nano Banana

Send an image to your Telegram bot, and this workflow will automatically enhance it with Google’s Nano Banana (via the Wavespeed API), then return the polished version back to the same chat — seamlessly.

👉 Watch step-by-step video tutorials of workflows like these on www.youtube.com/@automatewithmarc

What it does
- Listens on Telegram for incoming photo messages
- Downloads the file sent by the user
- Uploads it to Google Drive (temporary storage for processing)
- Sends the image to the Nano Banana API with a real-estate-style cleanup + enhancement prompt
- Polls until the job is complete (handles async processing)
- Returns the edited image back to the same Telegram chat

Perfect for
- Real-estate agents previewing polished property photos instantly
- Social media managers editing on the fly from Telegram
- Anyone who wants a “send → cleaned → returned” image flow without manual edits

Apps & Services
- Telegram Bot API (trigger + send/receive files)
- Google Drive (temporary file storage)
- Wavespeed / Google Nano Banana (AI-powered image editing)

Setup
1. Connect your Telegram Bot API token in n8n.
2. Add your Wavespeed API key for Nano Banana.
3. Link your Google Drive account (temporary storage).
4. Deploy the workflow and send a test photo to your Telegram bot.

Customization
- Adjust the Nano Banana prompt for different styles (e.g., ecommerce cleanup, portrait retouching, color correction).
- Replace Google Drive with another storage service if preferred.
- Add logging to Google Sheets or Airtable to track edits.
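The "polls until the job is complete" step is a generic async-polling loop. Here is a minimal sketch with the status-check function injected, so the loop logic is independent of the exact Wavespeed endpoint; the status strings (`processing`/`completed`/`failed`) and the `output_url` field are assumptions:

```python
import time

# Generic poll-until-done loop for an async image-editing job.
# check_status is any zero-argument callable returning the job state dict.
def poll_until_done(check_status, interval=2.0, max_attempts=30):
    for _ in range(max_attempts):
        job = check_status()
        if job["status"] == "completed":
            return job["output_url"]          # edited image, ready to send back
        if job["status"] == "failed":
            raise RuntimeError(job.get("error", "job failed"))
        time.sleep(interval)                  # wait before the next poll
    raise TimeoutError("job did not finish within the polling window")

# Simulated job that completes on the third poll:
states = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "completed", "output_url": "https://cdn.example/edited.png"},
])
url = poll_until_done(lambda: next(states), interval=0)
```

In the workflow, the returned URL is what gets downloaded and sent back to the originating Telegram chat.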
by Robert Breen
This n8n workflow automatically generates a custom YouTube thumbnail using OpenAI’s DALL·E, based on a YouTube video’s transcript and title. It uses Apify actors to extract video metadata and the transcript, then processes the data into a prompt for DALL·E and creates a high-resolution image for use as a thumbnail.

✅ Key Features
- 📥 **Form Trigger**: Accepts a YouTube URL from the user.
- 🧠 **GPT-4o Prompt Creation**: Summarizes the transcript and title into a descriptive DALL·E prompt.
- 🎨 **DALL·E Image Generation**: Produces a clean, minimalist YouTube thumbnail with OpenAI’s image model.
- 🪄 **Automatic Image Resizing**: Resizes the final image to YouTube specs (1280x720).
- 🔍 **Apify Integration**: Uses two Apify actors: Youtube-Transcript-Scraper to extract the transcript, and youtube-scraper to get video metadata like title, channel, etc.

🧰 What You'll Need
- **OpenAI API Key**
- **Apify Account & API Token**
- **YouTube video URL**
- **n8n instance (cloud or self-hosted)**

🔧 Step-by-Step Setup

1️⃣ Form & Parameter Assignment
- **Node**: Form Trigger
- **How it works**: Collects the YouTube URL via a form embedded in your n8n instance.
- **API Required**: None
- **Additional Node**: Set — converts the single input URL into the format Apify expects: an array of { url } objects.

2️⃣ Apify Actors for Data Extraction
- **Node**: HTTP Request (Query Metadata)
  - URL: https://api.apify.com/v2/acts/streamers~youtube-scraper/run-sync-get-dataset-items
  - Payload: JSON with a startUrls array and filtering options like maxResults, isHD, etc.
- **Node**: HTTP Request (Query Transcript)
  - URL: https://api.apify.com/v2/acts/topaz_sharingan~Youtube-Transcript-Scraper/run-sync-get-dataset-items
  - Payload: startUrls array
- **API Required**: Apify API Token (via HTTP Query Auth)
- **Notes**: You must have an Apify account and actor credits to use these actors.

3️⃣ OpenAI GPT-4o & DALL·E Generation
- **Node**: OpenAI (Prompt Creator) — uses the transcript and title to generate a DALL·E-compatible visual prompt.
- **Node**: OpenAI (Image Generator)
  - Resource: image
  - Model: DALL·E (default with GPT-4o key)
- **API Required**: OpenAI API Key
- **Prompt Strategy**: Create a minimalist YouTube thumbnail in an illustration style. The background should be a very simple, uncluttered setting with soft, ambient lighting that subtly reflects the essence of the transcript. The overall mood should be professional and non-cluttered, ensuring that the text overlay stands out without distraction. Do not include any text.

4️⃣ Resize for YouTube Format
- **Node**: Edit Image
- **Purpose**: Resize the final image to 1280x720 with ignoreAspectRatio set to true.
- **No API required** — this runs entirely in n8n.

👤 Created By
Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn

🏷️ Tags
openai, dalle, youtube thumbnail generator, apify, ai automation, image generation, illustration, prompt engineering, gpt-4o
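Both Apify HTTP Request nodes share the same endpoint shape — only the actor ID differs. A small sketch of how those request URLs and payloads are assembled (the payload fields follow the description above; check each actor's input schema on Apify for the full set of options):

```python
# Helpers mirroring the two Apify HTTP Request nodes.
APIFY_BASE = "https://api.apify.com/v2/acts"

def apify_run_url(actor_id: str, token: str) -> str:
    # run-sync-get-dataset-items runs the actor and returns its dataset
    # directly in the HTTP response.
    return f"{APIFY_BASE}/{actor_id}/run-sync-get-dataset-items?token={token}"

def scraper_payload(video_url: str) -> dict:
    # Set node output: the single form URL wrapped as an array of {url} objects.
    return {"startUrls": [{"url": video_url}], "maxResults": 1}

metadata_url = apify_run_url("streamers~youtube-scraper", "apify_api_TOKEN")
transcript_url = apify_run_url("topaz_sharingan~Youtube-Transcript-Scraper", "apify_api_TOKEN")
payload = scraper_payload("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
```

The token is passed as query auth here to match the workflow's HTTP Query Auth credential; `apify_api_TOKEN` is a placeholder.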
by Easy8.ai
Automated Helpdesk Ticket Alerts to Microsoft Teams from Easy Redmine

Intro/Overview
This workflow automatically posts a Microsoft Teams message whenever a new helpdesk ticket is created in Easy Redmine. It’s perfect for IT teams who want real-time visibility into new issues without constantly checking ticket queues or inboxes. By integrating Easy Redmine with Teams, this setup ensures tickets are discussed and resolved faster, improving both response and resolution times.

How it works
1. Catch Easy Webhook – New Issue Created: Triggers whenever Easy Redmine sends a webhook for a newly created ticket. Uses the webhook URL generated from Easy Redmine’s webhook settings.
2. Get a new ticket by ID: Fetches full ticket details (subject, priority, description) via the Easy Redmine API using the ticket ID from the webhook payload.
3. Pick Description & Create URL to Issue: Extracts the ticket description and builds a direct link to the ticket in Easy Redmine for quick access.
4. AI Agent – Description Processing: Uses an AI model to summarize the ticket and suggest possible solutions based on the issue description.
5. MS Teams Message to Support Channel: Formats and sends the ticket details, priority, summary, and issue link to a designated Microsoft Teams channel, using a message layout built for clarity and quick scanning.

How to Use
1. Import the workflow into your n8n instance.
2. Set up credentials:
   - Easy Redmine API credentials with permission to read helpdesk tickets.
   - Microsoft Teams credentials for posting messages to a channel.
3. Configure the Easy Redmine webhook to trigger on ticket-creation events. Insert the n8n webhook URL into your active Easy Redmine webhook, which can be created at https://easy-redmine-application.com/easy_web_hooks
4. Adjust node settings:
   - In the webhook node, use your Easy Redmine webhook URL.
   - In the “Get a new ticket by ID” node, insert your API endpoint and authentication.
   - In the Teams message node, select the correct Teams channel.
5. Adjust the timezone or scheduling if your team works across different time zones.
6. Test the workflow by creating a sample ticket in Easy Redmine and confirming that it posts to Teams.

Example Use Cases
- **IT Helpdesk**: Notify the support team immediately when new issues are logged.
- **Customer Support Teams**: Keep the entire team updated on urgent tickets in real time.
- **Project Teams**: Ensure critical bug reports are shared instantly with the right stakeholders.

Requirements
- Easy Redmine application
- Easy Redmine technical user for API calls with “read” permissions on tickets
- Microsoft Teams technical user for API calls with “post message” permissions
- Active n8n instance

Customization
- Change the AI prompt to adjust how summaries and solutions are generated.
- Modify the Teams message format (e.g., bold priority, add emojis for urgency).
- Add filters so only high-priority or specific project tickets trigger notifications.
- Send alerts to multiple Teams channels based on ticket type or project.

Workflow Improvement Suggestions
- Rename nodes for clarity (e.g., “Fetch Ticket Details” instead of “get-one-issue”).
- Ensure no private ticket data is exposed beyond the intended recipients.
- Add error handling for failed API calls to avoid missing ticket alerts.
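The "Pick Description & Create URL to Issue" step amounts to field extraction plus URL construction. A minimal sketch — the ticket field paths are assumptions based on typical Redmine-style payloads, so adjust them to what your Easy Redmine instance actually returns:

```python
# Hypothetical sketch of the Pick Description & Create URL to Issue node.
BASE_URL = "https://easy-redmine-application.com"  # your Easy Redmine host

def pick_fields(ticket: dict) -> dict:
    """Extract what the Teams message needs from the fetched ticket."""
    return {
        "subject": ticket["subject"],
        "priority": ticket.get("priority", {}).get("name", "Normal"),
        "description": ticket.get("description", ""),
        # Direct link so the support team can jump straight to the issue:
        "issue_url": f"{BASE_URL}/issues/{ticket['id']}",
    }

fields = pick_fields({
    "id": 1042,
    "subject": "VPN down for remote staff",
    "priority": {"name": "High"},
    "description": "Cannot connect since 9am.",
})
```

The `description` value is what gets handed to the AI Agent for summarization, while `issue_url` and `priority` go directly into the Teams message.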
by Trung Tran
AI-Powered AWS S3 Manager with Audit Logging in n8n (Slack/ChatOps Workflow)

> This n8n workflow empowers users to manage AWS S3 buckets and files using natural language via Slack or chat platforms. Equipped with an OpenAI-powered agent and integrated audit logging to Google Sheets, it supports operations like listing buckets, copying/deleting files, and managing folders, and it automatically records every action for compliance and traceability.

👥 Who’s it for
This workflow is built for:
- DevOps engineers who want to manage AWS S3 using natural chat commands.
- Technical support teams interacting with AWS via Slack, Telegram, etc.
- Automation engineers building ChatOps tools.
- Organizations that require audit logs for every cloud operation.

Users don’t need AWS Console or CLI access — just send a message like “Copy file from dev to prod”.

⚙️ How it works / What it does
This workflow turns natural chat input into automated AWS S3 actions using an OpenAI-powered AI Agent in n8n.

🔁 Workflow Overview:
1. Trigger: A user sends a message in Slack, Telegram, etc.
2. AI Agent: Interprets the message and calls one of 6 S3 tools: ListBuckets, ListObjects, CopyObject, DeleteObject, ListFolders, CreateFolder.
3. S3 Action: Performs the requested AWS S3 operation.
4. Audit Log: Logs the tool call to Google Sheets using AddAuditLog, including timestamp, tool used, parameters, prompt, reasoning, and user info.

🛠️ How to set up
1. Webhook Trigger: Slack, Telegram, or a custom chat platform → connects to n8n.
2. OpenAI Agent:
   - Model: gpt-4 or gpt-3.5-turbo
   - Memory: Simple Memory node
   - Prompt: Instructs the agent to always follow tool calls with an AddAuditLog call.
3. AWS S3 Nodes — configure each tool with AWS credentials:
   - getAll: bucket
   - getAll: file
   - copy: file
   - delete: file
   - getAll: folder
   - create: folder
4. Google Sheets Node:
   - Sheet: AWS S3 Audit Logs
   - Operation: Append or Update Row
   - Columns (must match input keys): timestamp, tool, status, chat_prompt, parameters, user_name, tool_call_reasoning
5. Agent Tool Definitions: Include AddAuditLog as a 7th tool. The agent calls it immediately after every S3 action (except when logging itself).

✅ Requirements
- [ ] n8n instance with the AI Agent feature
- [ ] OpenAI API key
- [ ] AWS IAM user with S3 access
- [ ] Google Sheet with the required columns
- [ ] Chat integration (Slack, Telegram, etc.)

🧩 How to customize the workflow

| Feature | Customization Tip |
|----------------------|--------------------------------------------------------------|
| 🌎 Multi-region S3 | Let users include the region in the message or agent memory |
| 🛡️ Restricted actions | Use memory/user ID to limit delete/copy actions |
| 📁 Folder filtering | Extend ListObjects with prefix/suffix filters |
| 📤 Upload file | Add PutObject with pre-signed URL support |
| 🧾 Extra logging | Add IP, latency, and error traces to audit logs |
| 📊 Reporting | Link the Google Sheet to Looker Studio for audit dashboards |
| 🚨 Security alerts | Notify via Slack/Email when DeleteObject is triggered |
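Because the sheet columns must match the input keys exactly, it helps to see the AddAuditLog payload as a concrete record. A sketch of the row the agent appends after each S3 action (the example values are hypothetical):

```python
from datetime import datetime, timezone
import json

# Builds one row for the "AWS S3 Audit Logs" sheet. Keys match the column
# list above: timestamp, tool, status, chat_prompt, parameters, user_name,
# tool_call_reasoning.
def build_audit_row(tool, status, chat_prompt, parameters, user_name, reasoning):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "status": status,
        "chat_prompt": chat_prompt,
        "parameters": json.dumps(parameters),   # keep params as one JSON cell
        "user_name": user_name,
        "tool_call_reasoning": reasoning,
    }

row = build_audit_row(
    "CopyObject", "success",
    "Copy file from dev to prod",
    {"from": "s3://dev-bucket/app.zip", "to": "s3://prod-bucket/app.zip"},
    "alice",
    "User asked to promote the build artifact from dev to prod.",
)
```

Serializing `parameters` to a JSON string keeps arbitrary tool arguments in a single sheet cell, which makes the log easy to filter in Looker Studio later.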
by Evoort Solutions
📥 Instagram to MP4 Converter with Google Drive Integration

This n8n workflow enables users to convert Instagram video links into downloadable MP4 files, store them in Google Drive, and log the results (success or failure) in Google Sheets.

🔧 Node-by-Node Overview
1. On form submission – Triggers when a user submits an Instagram video URL.
2. Instagram Downloader API Request – Calls the Instagram Downloader API to retrieve a downloadable link for the video.
3. If – Checks whether the API response indicates success.
4. MP4 Downloader – Downloads the video from the provided media URL.
5. Upload To Google Drive – Uploads the MP4 video to a specified folder in Google Drive.
6. Google Drive Set Permission – Sets the uploaded file to public with a sharable link.
7. Google Sheets – Logs successful conversions, including the original URL and Drive link.
8. Wait – Adds a pause before logging a failure to avoid rapid writes to Google Sheets.
9. Google Sheets Append Row – Logs failed attempts with Drive_URL marked as N/A.

🚀 Key Features
- 🔗 Uses the Instagram Downloader API to convert Instagram video URLs
- 🗂 Uploads MP4s directly to Google Drive
- 📊 Logs all actions in Google Sheets
- 🧠 Smart error handling using conditional and wait nodes

📌 Use Case & Benefits
- Convert Instagram videos to MP4 instantly from a simple form submission
- Automatically upload videos to Google Drive
- Log successful and failed conversions to Google Sheets
- Ideal for marketers, content managers, educators, and archivists
- No manual downloading, renaming, or organizing — it's fully automated

🌐 API Key Requirement
To use this workflow, you’ll need an API key from the Instagram Downloader API. Follow these steps to obtain your API key:
1. Go to the Instagram Downloader API
2. Sign up or log in to RapidAPI
3. Subscribe to a plan (either free or paid)
4. Copy your x-rapidapi-key and paste it into the HTTP Request node where required

🛠 Full Setup Instructions

1. API Setup
- Create an account with RapidAPI.
- Subscribe to the Instagram Downloader API and copy your API key.
- Use this key in the HTTP Request node in n8n to call the Instagram Downloader API.

2. Google Services Setup

Google Drive Integration:
- Go to the Google Developer Console and create a new project.
- Enable the Google Drive API.
- Create OAuth 2.0 credentials and download the JSON credentials file.
- Upload this file to n8n under your Google Drive credentials setup.

Google Sheets Integration:
- Enable the Google Sheets API in the Google Developer Console.
- Create OAuth 2.0 credentials for Sheets access.
- Download the credentials file and upload it to n8n for authentication.
- Make sure the Google Sheet you're using has columns for Original_URL, Drive_URL, and Status.

3. Customizing the Template
- Custom Folder for Google Drive: In the "Upload To Google Drive" node, change the folder ID to match the folder in Google Drive where videos should be stored.
- Custom Google Sheets Columns: By default, the template logs Original_URL, Drive_URL, and Status (success/failure). To add more columns, update the "Google Sheets Append Row" node with the new column headers and ensure the data from each step corresponds correctly.

4. Column Mapping for Google Sheets
The default columns in your Google Sheet are:
- Original_URL: The original Instagram video URL submitted by the user.
- Drive_URL: The sharable link to the uploaded MP4 in Google Drive.
- Status: Whether the conversion succeeded or failed.

Important note: Ensure your Google Sheet is properly formatted with these columns before running the workflow.

💡 Additional Tips
- **Monitoring API Usage**: The Instagram Downloader API has rate limits. Check your API usage in the RapidAPI dashboard.
- **Automating with Triggers**: You can trigger the workflow automatically when a user submits a form URL through tools like Google Forms or other external services that integrate with n8n.
- **Error Handling**: If you encounter frequent failures, check the API's response format and ensure that all your credentials are correctly set up.
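A sketch of the HTTP Request node and the If-node branch as plain code. The RapidAPI host and the response field names (`status`, `media`) are assumptions — copy the exact values from the API's playground page on RapidAPI, since downloader APIs vary in their response shape:

```python
# Hypothetical sketch of the Instagram Downloader API call. The host and
# response keys are placeholders to verify against the real API.
RAPIDAPI_HOST = "instagram-downloader.p.rapidapi.com"  # assumed host

def build_request(insta_url: str, api_key: str):
    headers = {
        "x-rapidapi-key": api_key,        # your key from the RapidAPI dashboard
        "x-rapidapi-host": RAPIDAPI_HOST,
    }
    params = {"url": insta_url}
    return headers, params

def extract_media_url(response: dict):
    # Mirrors the If node: success path yields the downloadable MP4 link;
    # failure path returns None (-> Wait + log row with Drive_URL = "N/A").
    if response.get("status") == "success" and response.get("media"):
        return response["media"]
    return None

headers, params = build_request("https://www.instagram.com/reel/ABC123/", "MY_KEY")
media = extract_media_url({"status": "success", "media": "https://cdn.example/video.mp4"})
```

Whatever `extract_media_url` returns determines which Google Sheets branch runs: the success log with the Drive link, or the failure row with `Drive_URL = N/A`.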
by Amuratech
This template is designed for SEO specialists, content marketers, and digital growth teams who want to automate the process of tracking keyword rankings. Manually checking SERPs every week is time-consuming and prone to error. This workflow solves that by automatically querying Google search results with the Serper API, updating rankings in Google Sheets, and keeping historical data for up to 12 weeks.

Prerequisites
Before you begin, make sure you have:
- A Google Sheet with columns: Sr.no (unique row identifier), Keyword, Target Page (the URL you want to track)
- A Google Service Account credential set up in n8n
- A Serper API key (added to n8n credentials as serperApiKey)

Detailed Setup
1. Import the workflow into n8n.
2. Update the Google Sheets nodes: replace your-google-sheet-id with your actual Google Sheet ID and your-sheet-name with the correct tab name.
3. Add your Google Service Account credentials to the Google Sheets nodes.
4. Add your Serper API key to the HTTP Request node (serperApiKey).
5. (Optional) Update the HARDCODED_DOMAIN variable in the Code node if you want to lock rankings to a specific domain.
6. Run the workflow once manually to confirm everything is working.

Usage & Customization
- By default, the workflow runs every Monday at 00:00 (midnight). You can adjust this by editing the Cron node.
- The workflow stores ranking history for 12 weeks. If you want more, simply extend the columns in your Google Sheet and update the Code node logic.
- The workflow checks for both exact URLs and domains. You can customize this in the Code node depending on whether you want to track page-level or domain-level rankings.
- Data is updated only for the current week unless you allow overwriting, ensuring historical accuracy.
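The core of each weekly run is one Serper query per keyword plus a scan of the organic results for the tracked domain. The request shape below follows Serper's documented API (POST to google.serper.dev/search with an X-API-KEY header); the rank-parsing helper mirrors the Code node's domain check:

```python
import json
import urllib.request

SERPER_URL = "https://google.serper.dev/search"

def serper_request(keyword: str, api_key: str) -> urllib.request.Request:
    """Build the Serper search request (not sent here)."""
    return urllib.request.Request(
        SERPER_URL,
        data=json.dumps({"q": keyword, "num": 100}).encode(),
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
    )

def find_rank(organic_results, domain: str):
    """Return the 1-based position of the first result on `domain`,
    or None if it is not in the results (i.e., not ranking)."""
    for position, item in enumerate(organic_results, start=1):
        if domain in item.get("link", ""):
            return position
    return None

# Sample "organic" array as Serper returns it:
rank = find_rank(
    [{"link": "https://other.com/a"}, {"link": "https://example.com/pricing"}],
    "example.com",
)
```

Swapping `domain in item["link"]` for an exact-URL comparison is the page-level vs. domain-level tracking choice mentioned above.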
by Open Paws
📌 Who’s it for
This template is designed for campaigners, researchers, and organizers who need to enrich spreadsheets of contacts with publicly available social media profiles. It is ideal for advocacy campaigns, outreach, or digital organizing where fast, scalable people lookup is needed.

⚙️ What it does
This workflow scans a Google Sheet for rows marked as unanalysed ("Analysed?" = false), sends each contact to a dedicated AI-powered research agent, and returns structured public profile links across major platforms like: Twitter/X, LinkedIn, Facebook, Instagram, GitHub, TikTok, YouTube, Reddit, Threads, Medium, Substack, and more (18+ total).

It processes one contact per run for clarity and stability, appending the results back to the original Google Sheet.

🛠️ How to set it up
1. Copy the Google Sheet template → this sheet includes sample columns and headers for contacts and social profile fields.
2. Paste your contact list at the end of the sheet. For each new contact, make sure the "Analysed?" column is set to false.
3. Clone this workflow and the AI Research Agent subworkflow.
4. Connect your Google Sheets account in n8n.
5. Update the workflow with your sheet ID and sheet name (Sheet1 by default).
6. Trigger the workflow on a schedule (e.g., every 15 minutes) or run it manually.

✅ Requirements
- **Google Sheets integration** set up in n8n
- Access to the AI research subworkflow
- OpenRouter API key
- n8n (self-hosted or cloud)

🧩 How to customize the workflow
- Modify the research agent to prioritize specific platforms or return only verified profiles.
- Add more profile columns to the Google Sheet and schema to match your custom fields.
- Add logic to send alerts (email, Slack, etc.) for specific contacts.
- Use an n8n webhook instead of a schedule to run the process on demand.
- Use a loop over all items to process all rows sequentially (only recommended for small datasets due to memory constraints).