by Intuz
This n8n template from Intuz provides a complete and automated solution to transform your team's inbox management. It acts as an intelligent agent that reads incoming Gmail messages, uses AI to determine their category, and automatically routes them to the correct Slack channel—even creating new channels on the fly for new topics. Who's this workflow for? Customer Support Teams Sales & Lead Management Teams Operations & Project Management Teams Any team that uses Slack as a central hub for communication and triaging tasks. How it works 1. Monitor New Emails: The workflow continuously checks a specified Gmail account for new, unread emails. It automatically filters out spam, drafts, and duplicates. 2. AI Categorization: Each new email's subject and body are sent to an AI model (like Llama 3 via OpenRouter). The AI analyzes the content and assigns a category based on a predefined list (e.g., sales, marketing, accounts, internal). 3. Find or Create Slack Channel: The workflow then checks your Slack workspace to see if a channel corresponding to the AI's category already exists (e.g., #sales). 4. Route the Email: If the channel exists: The workflow posts a formatted summary of the email, a link to the original message in Gmail, and a "Reply" button directly into the existing channel. If the channel does NOT exist: The workflow automatically creates a new public channel (e.g., #new-category), invites a designated user, and then posts the email summary. Key Requirements to Use This Template 1. n8n Instance & Required Nodes: An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed. 2. Gmail Account: An active Gmail account with API access enabled. 3. Slack Workspace & App: A Slack workspace where you have permission to install apps. 
A Slack App with a Bot Token that has the following scopes: channels:read, channels:manage, chat:write, groups:write, and users:read. 4. OpenRouter Account: An account with OpenRouter to access various AI models like Llama 3. You will need an API key. Setup Instructions 1. Gmail Configuration: In the "Capture Gmail Event" (Gmail Trigger) node, connect your Gmail account using OAuth2 credentials. 2. OpenRouter AI Configuration: In the "OpenRouter Chat Model" node, create a new credential and add your OpenRouter API key. 3. Slack Configuration: Create a Slack App: Go to api.slack.com/apps, create a new app, and install it to your workspace. Set Permissions: In your app's "OAuth & Permissions" settings, add the following Bot Token Scopes: channels:read, channels:manage, chat:write, groups:write, users:read. Reinstall the app to your workspace after adding them. Get Bot Token: Copy the "Bot User OAuth Token" (it starts with xoxb-). Connect in n8n: In all Slack nodes in the workflow, create a new credential and paste this Bot Token. Set User to Invite: In the "Invite a user to a channel" node, replace the placeholder User ID (U0A6ULM7CGK) with the Slack Member ID of the user you want to be automatically invited to new channels. 4. Activate the Workflow: Save the workflow and toggle the "Active" switch to ON. Your intelligent email routing system is now live! Support If you need help setting up this workflow or require a custom version tailored to your specific use case, please feel free to reach out to the template author: Website: https://www.intuz.com/services Email: getstarted@intuz.com LinkedIn: https://www.linkedin.com/company/intuz Get Started: https://n8n.partnerlinks.io/intuz For Custom Workflow Automation, click here: Get Started
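The routing decision in steps 3–4 of "How it works" boils down to a channel-name lookup with a create fallback. A hedged sketch of that logic follows — the helper names are illustrative, the channel list would come from Slack's conversations.list method, and the create branch maps to conversations.create plus the invite step:

```javascript
// Sketch: the find-or-create decision for Slack channels, assuming the
// AI returns a free-form category string (e.g. "Sales / Leads").
// Slack channel names must be lowercase, at most 80 characters, and may
// only contain letters, numbers, hyphens, and underscores.
function toChannelName(category) {
  return category
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // collapse everything else to hyphens
    .replace(/^-+|-+$/g, '')       // trim leading/trailing hyphens
    .slice(0, 80);
}

// `existingChannels` would come from Slack's conversations.list call;
// here it is a plain array of { id, name } objects.
function findOrCreateChannel(existingChannels, category) {
  const name = toChannelName(category);
  const match = existingChannels.find(c => c.name === name);
  if (match) {
    return { action: 'post', channelId: match.id };
  }
  // In the workflow this branch maps to conversations.create + invite.
  return { action: 'create', channelName: name };
}
```

Normalizing the AI's category before the lookup is the important design choice: it guarantees "Sales", "sales", and "Sales / Leads" don't each spawn a near-duplicate channel.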
by Incrementors
Description A natural conversational AI chatbot that collects lead information (Name, Phone, Email, Message) one question at a time without feeling like a form. Uses session-based memory to track conversations, intelligently asks only for missing details, and saves complete leads to Google Sheets automatically. What this workflow does This workflow creates a human-like booking assistant that gathers lead information through natural conversation instead of traditional forms. The AI chatbot asks ONE question at a time, remembers previous answers using session memory, never repeats questions, and only saves data to Google Sheets when all four required fields (Name, Phone Number, Email Address, User Message) are confidently collected. The conversation feels natural and friendly—users engage with the bot as if chatting with a real person, dramatically improving completion rates compared to static forms. Perfect for booking systems, consultation requests, event registrations, customer support intake, or any scenario where you need to collect contact information without friction. Key features One question at a time: The AI never overwhelms users with multiple questions. It asks for Name, then Phone, then Email, then Message—sequentially and naturally, based on what's still missing from the conversation. Session-based memory: Uses timestamp-based session tracking so the AI remembers the entire conversation context. If a user says "My name is John" in message 1, the AI won't ask for the name again in message 5. Smart field detection: The AI automatically detects which details have been collected and which are still missing. It adapts the conversation flow dynamically instead of following a rigid script. Natural language processing: Handles variations in user input ("John Doe", "I'm John", "Call me John") and validates data intelligently before saving. Complete data guarantee: Only writes to Google Sheets when all 4 required fields are present. 
No partial or incomplete leads clutter your tracking sheet. Webhook-based integration: Works with any website, app, or platform that can send HTTP requests. Integrate with chatbots, contact forms, booking widgets, or custom applications. Instant responses: Real-time conversation with sub-second response times. Users get immediate replies, maintaining engagement throughout the lead collection process. How it works 1. User initiates conversation via webhook A user sends a message through your website chat widget, contact form, or booking interface. This triggers a webhook that passes the message along with query parameters (name, email, phone, message, timestamp, source) to n8n. 2. AI Agent analyzes conversation state The Conversational Lead Collection Agent receives the user's message and checks the current state: Which fields are already collected (from previous messages in this session)? Which fields are still missing? What should be asked next? The AI uses the system prompt to understand its role as a booking assistant for "Spark Writers' Retreat" and follows strict conversation rules. 3. Session memory tracks context The Buffer Window Memory node uses the timestamp from the webhook as a unique session ID. This allows the AI to: Remember all previous messages in this conversation Access previously collected information (name, phone, email) Never ask the same question twice Maintain conversation continuity even if the user takes breaks 4. One question at a time Based on what's missing, the AI asks exactly ONE question in natural, friendly language: If Name is missing → "Hi! What's your name?" If Phone is missing → "Great! And what's your phone number?" If Email is missing → "Perfect! Could you share your email address?" If Message is missing → "Thanks! How can I help you today?" The AI adapts its language based on previous conversation flow—it doesn't sound robotic or repetitive. 5. 
Data validation and collection As the user responds, the AI: Validates input (checks if phone number looks valid, email has @ symbol, etc.) Extracts the information from natural language responses Stores it temporarily in session memory Continues asking until all 4 fields are complete If the user provides unclear input, the AI politely asks again: "I didn't quite catch that. Could you share your phone number?" 6. Save to Google Sheets (when complete) Critical rule: The AI only uses the Google Sheets tool AFTER all four details are confidently collected. This prevents partial or incomplete leads from cluttering your database. When all fields are present, the AI: Writes exactly ONE row to Google Sheets Maps data: Name → Name, Phone → Phone No., Email → Email, Message → Message Uses Timestamp as the unique identifier (matching column) Updates existing rows if the same timestamp appears again (prevents duplicates) 7. Confirmation message After successfully saving, the AI sends a polite thank you: "Thank you! 🙏 We've received your details and our team will get back to you shortly." The AI never mentions Google Sheets, tools, backend systems, or automation—it maintains the illusion of human conversation. 8. Response delivery The final AI response is sent back to the user via the webhook response. Your website or app displays this message in the chat interface, completing the conversation loop. Setup requirements Tools you'll need: Active n8n instance (self-hosted or n8n Cloud) Google Sheets with OAuth access for lead storage OpenAI API key (GPT-4.1-mini access) Website or app with chat interface (or any platform that can send webhooks) Estimated setup time: 15–20 minutes Configuration steps 1. Connect Google Sheets In n8n: Credentials → Add credential → Google Sheets OAuth2 API Complete OAuth authentication Create a Google Sheet for lead tracking with these columns: Timestamp (unique session identifier) Name Phone No. 
Email Message Open "Save Lead to Google Sheets" node Select your Google Sheet and the correct sheet tab Verify column mapping matches your sheet structure 2. Add OpenAI API credentials Get API key: https://platform.openai.com/api-keys In n8n: Credentials → Add credential → OpenAI API Paste your API key Open "OpenAI GPT-4.1 Mini Language Model" node Select your OpenAI credential Ensure model is set to gpt-4.1-mini 3. Copy webhook URL Open "Receive User Message via Webhook" node Copy the Webhook URL (format: https://your-n8n.cloud/webhook/[webhook-id]) This is the endpoint your website or app will send messages to 4. Integrate with your chat interface You need to send HTTP POST/GET requests to the webhook URL with these query parameters: GET https://your-n8n.cloud/webhook/[id]?name=[name]&email=[email]&phone=[phone]&message=[user_message]&timestamp=[unique_timestamp]&source=[source] Query parameter details: name: User's name (empty string if not yet collected) email: User's email (empty string if not yet collected) phone: User's phone number (empty string if not yet collected) message: Current user message (required) timestamp: Unique session ID (use ISO timestamp or UUID) source: Source identifier (e.g., "website_chat", "booking_form") Example integration (JavaScript): const sessionId = new Date().toISOString(); const userMessage = "Hi, I want to book a retreat"; fetch(`https://your-n8n.cloud/webhook/[id]?message=${encodeURIComponent(userMessage)}&timestamp=${sessionId}&name=&email=&phone=&source=website_chat`) .then(res => res.json()) .then(data => { // Display AI response in your chat UI console.log(data.output); }); 5. Customize the AI assistant Open "Conversational Lead Collection Agent" node and edit the system message to: Change the business name (currently "Spark Writers' Retreat") Modify conversation tone (formal vs. casual) Adjust the fields being collected Change the final thank you message 6.
Test the workflow Activate the workflow (toggle to Active at the top) Send a test message to the webhook URL Verify the AI responds appropriately Continue the conversation by sending follow-up messages with the same timestamp Check that: AI asks for missing fields only Session memory persists across messages Lead saves to Google Sheets when all 4 fields are collected Thank you message appears after saving Use cases Booking and reservations: Hotels, retreat centers, event venues, or appointment-based businesses collect guest details conversationally instead of long booking forms. Higher completion rates mean more confirmed bookings. Lead generation for services: Agencies, consultants, coaches, or freelancers capture qualified leads through natural conversation. Users are more likely to complete the process when it feels like chatting instead of form-filling. Customer support intake: Support teams collect issue details, contact information, and problem descriptions through chat before routing to the right agent. All data automatically logged in Google Sheets for ticketing. Event registration: Conference organizers, workshop hosts, or webinar providers gather attendee information without friction. The conversational approach encourages sign-ups even from mobile users who hate forms. Sales qualification: Sales teams use the chatbot to qualify leads by collecting basic information and understanding requirements before human handoff. Complete context stored in Google Sheets for CRM integration. Consultation requests: Professional services (legal, medical, financial) collect client details and initial consultation requests through friendly conversation, reducing no-show rates by building rapport early. 
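The session rule from the integration step — generate the timestamp once and reuse it for every message in the conversation — is the piece integrators most often get wrong. A minimal client sketch, assuming the webhook URL from step 3 (`buildRequestUrl` and `sendMessage` are illustrative helper names, not part of the template):

```javascript
// Sketch of a browser-side chat client. The webhook URL is a placeholder;
// substitute the one copied from the "Receive User Message via Webhook" node.
const WEBHOOK_URL = 'https://your-n8n.cloud/webhook/your-webhook-id';

// Pure helper: assemble the query string the workflow expects.
// `timestamp` carries the session ID — the SAME value for the whole session.
function buildRequestUrl(baseUrl, sessionId, message, collected = {}) {
  const params = new URLSearchParams({
    message,
    timestamp: sessionId,
    name: collected.name || '',
    email: collected.email || '',
    phone: collected.phone || '',
    source: 'website_chat',
  });
  return `${baseUrl}?${params.toString()}`;
}

async function sendMessage(sessionId, message) {
  const res = await fetch(buildRequestUrl(WEBHOOK_URL, sessionId, message));
  const data = await res.json();
  return data.output; // the AI's reply, displayed in the chat UI
}

// Usage: one sessionId per conversation, created once.
// const sessionId = new Date().toISOString();
// await sendMessage(sessionId, 'Hi, I want to book a retreat');
// await sendMessage(sessionId, 'My name is John'); // same sessionId!
```

Create the session ID when the chat widget opens, not per message; a fresh timestamp on every request gives the AI a fresh, empty memory each time, which is exactly the "AI repeats questions" failure described in Troubleshooting.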
Customization options Change collected fields Open "Conversational Lead Collection Agent" node and modify the system message: Add new fields (e.g., Company Name, Budget, Preferred Date) Remove optional fields (e.g., make Message optional) Update the field names and data mapping Then update the Google Sheets node to include the new columns. Adjust conversation tone In the system message, change conversation style: **Formal:** "May I please have your full name?" **Casual:** "What's your name?" **Friendly:** "Hey! What should I call you?" Add validation rules Enhance the system prompt with specific validation: Phone format (e.g., 10 digits, US format) Email domain restrictions (e.g., only business emails) Name length requirements Message minimum word count Connect to CRM or email After "Save Lead to Google Sheets" node, add: **HTTP Request node** to send data to your CRM API **Email node** to notify sales team of new leads **Slack/Discord node** for real-time team alerts **Webhook node** to trigger other workflows Multi-language support Modify the system prompt to respond in the user's language: Add language detection logic Translate questions and responses Update thank you message for each language Add conversation analytics Insert a Set node before saving to track: Number of messages per lead Time to completion Drop-off points Source performance Troubleshooting AI repeats questions already answered **Memory not persisting:** Verify the "Session Memory with Timestamp" node is using the correct timestamp from the webhook query params. **Timestamp changing:** Ensure your chat interface sends the SAME timestamp for all messages in one conversation. Generate it once and reuse it. **Memory window size:** Increase the buffer window size in the memory node if conversations are very long. Leads not saving to Google Sheets **Partial data:** The AI only saves when all 4 fields are collected. Check your test conversation actually provided all required information.
**OAuth expired:** Re-authenticate Google Sheets credentials. **Sheet permissions:** Verify the connected Google account has edit access to the sheet. **Column names mismatch:** Ensure sheet column names exactly match the mapping in the Google Sheets node (case-sensitive). AI saves incomplete data **System prompt not followed:** Review the "Tool usage (VERY IMPORTANT)" section in the system message. Ensure it clearly states to only use Google Sheets after all fields are collected. **Validation too lenient:** The AI might be guessing missing fields. Strengthen validation rules in the system prompt. Webhook not receiving messages **URL incorrect:** Double-check the webhook URL in your integration code matches the n8n webhook URL exactly. **CORS issues:** If calling from a browser, ensure n8n allows cross-origin requests or use server-side integration. **Query params missing:** Verify all required parameters (message, timestamp) are included in the request. AI responses too slow **OpenAI API latency:** GPT-4.1-mini typically responds in 1-3 seconds. If slower, check OpenAI API status. **Network delays:** Verify n8n instance has good connectivity. **Memory lookup slow:** Reduce buffer window size if storing hundreds of messages. Session memory not working **Timestamp format inconsistent:** Use ISO format (e.g., 2026-01-28T14:38:23.720Z) and ensure it's identical across messages. **Memory node misconfigured:** Check the session key expression in the "Session Memory with Timestamp" node references the correct webhook query param. Resources n8n documentation OpenAI GPT-4 API Google Sheets API n8n Webhook node n8n AI Agent Buffer Window Memory Support Need help or custom development? 📧 Email: info@incrementors.com 🌐 Website: https://www.incrementors.com/
by vinci-king-01
Subscription Renewal Reminder – Telegram & Supabase This workflow tracks upcoming subscription expiry dates stored in Supabase and automatically sends personalized renewal-reminder messages to each customer via Telegram. It is designed to be triggered by an HTTP Webhook (manually or on a schedule) and ensures that customers are notified a configurable number of days before their subscription lapses. > Community Template Disclaimer > This is a community-contributed n8n workflow template. It is provided “as-is” without official support from n8n GmbH. Always test thoroughly before using in production. Pre-conditions/Requirements Prerequisites n8n instance (self-hosted or n8n.cloud) Supabase project with a subscriptions table (id, customer_name, expiration_date, telegram_chat_id, notified) A Telegram Bot created via @BotFather Outbound HTTPS access from n8n to api.telegram.org and your Supabase project REST endpoint Required Credentials **Supabase Service Role Key** – Full access for reading/writing the subscriptions table **Telegram Bot Token** – To send messages from your bot **n8n Webhook URL** – Auto-generated when you activate the workflow (A ScrapeGraphAI API key is *not* required for this non-scraping workflow.) Specific Setup Requirements

| Environment Variable | Example Value | Purpose |
|----------------------|--------------|---------|
| SUPABASE_URL | https://xyzcompany.supabase.co | Base URL for Supabase REST API |
| SUPABASE_KEY | eyJhbGciOiJI... | Service Role Key |
| TELEGRAM_TOKEN | 609012345:AA... | Bot token obtained from BotFather |
| REMINDER_DAYS | 3 | Days before expiry to notify |

How it works
Key Steps: Receive Trigger (Webhook)**: External call fires the workflow or an internal Cron node can be added. Set Static Parameters**: The Set node calculates “today + REMINDER_DAYS”. Query Supabase**: Fetch all subscriptions expiring on or before the calculated date and not yet notified. Branch Logic (If node)**: Check if any subscriptions were returned. Loop & Dispatch (Code + Telegram nodes)**: Iterate over each customer row, compose a message, and send via Telegram. Flag as Notified (Supabase Update)**: Update each processed row to prevent duplicate reminders. Respond to Webhook**: Return a concise JSON summary for logging or downstream integrations. Set up steps Setup Time: 15–20 minutes Create Telegram Bot a. Open Telegram and talk to @BotFather → /newbot b. Copy the given bot token; paste it into n8n Telegram credentials. Prepare Supabase a. Create a table named subscriptions with columns: id (uuid), customer_name (text), expiration_date (date), telegram_chat_id (text), notified (bool, default false) b. Obtain the Service Role Key from Project Settings → API. Import the Workflow a. In n8n, click Templates → Import and select “Subscription Renewal Reminder – Telegram & Supabase”. b. Replace placeholder credentials in the Supabase and Telegram nodes. Define Environment Variables (Optional but recommended) Add SUPABASE_URL, SUPABASE_KEY, TELEGRAM_TOKEN, and REMINDER_DAYS in Settings → Environment Variables for easy maintenance. Activate the Workflow Copy the production webhook URL and (optionally) set up a cron job or n8n Cron node to hit it daily. Node Descriptions Core Workflow Nodes: Webhook** – Entry point; triggers the workflow via HTTP request. Set (Calculate Target Date)** – Defines targetDate = today + REMINDER_DAYS. Supabase (Select)** – Retrieves expiring subscriptions that haven’t been notified. If (Rows > 0?)** – Determines whether to continue or exit early. 
**Code (For-Each Loop)** – Iterates through each returned row to send messages and update status. **Telegram** – Sends a personalized renewal reminder to the customer’s chat. **Supabase (Update)** – Flags the subscription row as notified = true. **Respond to Webhook** – Returns a JSON summary with counts of sent messages. **Sticky Notes** – Inline documentation for maintainers (non-executable). Data Flow: Webhook → Set → Supabase (Select) → If → Code → Telegram → Supabase (Update) → Respond to Webhook Customization Examples Send Slack Notifications Instead of Telegram // Replace Telegram node with Slack node const message = `Hi ${item.customer_name}, your subscription expires on ${item.expiration_date}.`; return [{ text: message, channel: item.slack_channel_id }]; Notify 7 Days & 1 Day Before Expiry // In Set node items[0].json.reminderOffsets = [7, 1]; // days return items; Data Output Format The workflow outputs structured JSON data: { "totalSubscriptionsChecked": 42, "remindersSent": 13, "timestamp": "2024-05-27T09:15:22.000Z" } Troubleshooting Common Issues No messages sent – Check the If node; ensure REMINDER_DAYS is set correctly and the Supabase query returns rows. Telegram error 403 – The user hasn’t started a chat with your bot. Ask the customer to click “Start” in Telegram. Performance Tips Batch database updates instead of row-by-row when dealing with thousands of records. Cache Supabase responses if you expect multiple workflows to query the same data within seconds. Pro Tips: Use the Cron node inside n8n instead of external schedulers for a fully self-contained setup. Add an Email node after the Telegram node for multi-channel reminders. Store template messages in Supabase so non-developers can update wording without editing the workflow.
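The Set → Supabase (Select) → Supabase (Update) chain can also be expressed as raw Supabase REST (PostgREST) calls, which is useful when debugging the query outside n8n. This is a sketch under assumptions — the workflow itself uses the dedicated nodes, and both helper names are illustrative:

```javascript
// Set node logic: targetDate = today + REMINDER_DAYS, then a PostgREST
// filter for rows expiring on/before that date and not yet notified.
function buildReminderQuery(supabaseUrl, reminderDays, today = new Date()) {
  const target = new Date(today);
  target.setUTCDate(target.getUTCDate() + reminderDays);
  const targetDate = target.toISOString().slice(0, 10); // YYYY-MM-DD
  return `${supabaseUrl}/rest/v1/subscriptions` +
         `?expiration_date=lte.${targetDate}&notified=eq.false`;
}

// Supabase (Update) node logic: flag a processed row so the same
// customer is not reminded twice.
function buildNotifiedUpdate(supabaseUrl, serviceKey, subscriptionId) {
  return {
    method: 'PATCH',
    url: `${supabaseUrl}/rest/v1/subscriptions?id=eq.${subscriptionId}`,
    headers: {
      apikey: serviceKey,
      Authorization: `Bearer ${serviceKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ notified: true }),
  };
}
```

Running the select URL in a REST client with your Service Role Key headers is a quick way to confirm whether "No messages sent" is a query problem or an If-node problem.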
by Lucio
Instagram Video Backup to Google Drive Automatically backup all your Instagram videos to Google Drive with a searchable metadata catalog in JSON format. What It Does This workflow provides a complete backup solution for your Instagram video content with intelligent caption parsing: Fetches your Instagram account ID and videos (VIDEO and REELS types) Parses captions into structured fields: Title: Everything before the first hashtag Description: Everything after the first hashtag (includes all tags) Tag List: All hashtags extracted as an array Description Full: Complete original caption text Downloads videos in maximum available quality from Instagram Uploads videos to a designated Google Drive folder Creates/updates a JSON metadata file with all video details Prevents duplicates using n8n Data Tables with account-level filtering Key Features Account-Level Tracking: The Data Table includes accountId so you can use the same table across multiple Instagram accounts. Each account's videos are tracked separately. Smart Caption Parsing: Automatically splits Instagram captions into title (before first #) and description (all hashtags and text after), with full text preserved in descriptionFull. Portable Catalog: The JSON file is stored in Google Drive alongside your videos, making it accessible anywhere without needing n8n. Maximum Quality: Uses Instagram Graph API's media_url field for highest available quality. Hashtag Extraction: Automatically extracts all hashtags into an array for easy filtering and analysis. 
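The caption split behind the Smart Caption Parsing feature can be sketched as follows. This is a hedged reconstruction of the Parse Caption node's logic, not a verbatim copy (`parseCaption` is an illustrative name):

```javascript
// Sketch of the caption split: everything before the first "#" becomes
// the title, everything from the first "#" onward the description, with
// all hashtags extracted into an array and the full text preserved.
function parseCaption(caption = '') {
  const firstHashtagIndex = caption.indexOf('#');
  const title = firstHashtagIndex === -1
    ? caption.trim()                                  // no hashtags: all title
    : caption.substring(0, firstHashtagIndex).trim(); // hashtag at start: ''
  const description = firstHashtagIndex === -1
    ? ''
    : caption.substring(firstHashtagIndex).trim();
  const tagList = (caption.match(/#(\w+)/g) || []).map(t => t.slice(1));
  return { title, description, tagList, descriptionFull: caption };
}
```

The two ternaries encode the edge cases listed in the Caption Parsing Logic section: a caption with no hashtags becomes all title, and a caption that starts with a hashtag becomes all description.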
Workflow Architecture Section 1: Fetch & Filter Get Instagram Account Info → Configuration → Fetch Media → Split Out Items → Filter Videos Only Get Instagram Account Info**: Fetches your Instagram account ID and username Configuration**: Stores account ID, Google Drive folder ID, and settings Fetch Media**: Gets up to 100 media items from Instagram Split Out Items**: Separates each media item for individual processing Filter Videos Only**: Keeps only VIDEO and REELS types (skips images) Section 2: Process Videos Check If Backed Up → IF Not Backed Up → Wait → Parse Caption → Download → Upload → Extract Metadata → Save Record → Aggregate For each video: Check If Already Backed Up: Queries Data Table by postId to avoid duplicates IF Not Already Backed Up: Skips if video already exists Wait: 5-second delay between downloads (prevents API rate limits) Parse Caption: Splits caption into title, description, tagList, descriptionFull Download Video: Downloads video file from Instagram to memory Upload to Google Drive: Uploads video to configured folder Extract Metadata: Creates structured metadata object with all fields Save Backup Record: Stores accountId, postId, googleDriveFileId, backedUpAt in Data Table Aggregate: Collects all new video metadata for JSON update Section 3: Update JSON Catalog End Loop → Download Existing JSON → Update JSON → Upload Updated JSON After all videos processed: Download Existing Metadata JSON: Gets current JSON file from Google Drive (if exists) Update Metadata JSON: Appends new video metadata to existing catalog Upload Updated Metadata JSON: Saves updated JSON back to Google Drive Setup Steps 1. Create Google Drive Folder Go to Google Drive Create a new folder named Instagram Video Backups (or any name you prefer) Open the folder and copy the Folder ID from the URL: https://drive.google.com/drive/folders/1ABC123xyz... ^^^^^^^^^^^ This is your Folder ID 2. 
Create n8n Data Table Create a Data Table for deduplication tracking with account-level support: Table Name: Instagram Video Backups Schema:

| Field Name | Type | Description |
|------------|------|-------------|
| accountId | string | Instagram account ID (allows multi-account use) |
| postId | string (Primary Key) | Instagram post ID |
| googleDriveFileId | string | Google Drive file ID for the video |
| backedUpAt | string | ISO timestamp of backup |

Why accountId? This allows you to use the same Data Table for multiple Instagram accounts. Each account's videos are tracked separately, preventing conflicts. 3. Configure Credentials You'll need two credential sets: Instagram Graph API (HTTP Bearer Auth) In n8n, create new credential: HTTP Bearer Auth Set header name: Authorization Set header value: Bearer YOUR_INSTAGRAM_ACCESS_TOKEN Name it: Instagram Graph API Getting Instagram Access Token: Follow Meta's Business Account setup guide Required permissions: instagram_graph_user_media Tokens expire after 60 days (requires manual refresh) Google Drive OAuth2 In n8n, create new credential: Google Drive OAuth2 API Follow OAuth flow to authorize your Google account Name it: Google Drive Account 4. Update Configuration Node In the workflow, open the Configuration node and update: { "googleDriveFolderId": "PASTE_YOUR_FOLDER_ID_HERE", "maxVideosPerRun": 100, "waitBetweenDownloads": 5, "metadataFileName": "instagram-backup-metadata.json" } Settings Explained: googleDriveFolderId: The folder ID you copied in step 1 maxVideosPerRun: Max videos to process per run (100 is safe for API limits) waitBetweenDownloads: Seconds to wait between downloads (prevents rate limits) metadataFileName: Name of the JSON catalog file in Google Drive Note: accountId and accountUsername are automatically populated from Instagram API. 5.
Test & Activate Click Manual Trigger to test the workflow Check Google Drive folder for: Video files named instagram_{postId}.mp4 JSON file named instagram-backup-metadata.json Verify Data Table has records with accountId and postId Activate the Schedule Trigger for daily automatic backups Metadata JSON Structure The JSON file stored in Google Drive has this structure: { "lastUpdated": "2026-02-01T10:00:00Z", "totalVideos": 42, "videos": [ { "accountId": "17841400123456789", "instagramId": "123456789", "permalink": "https://instagram.com/p/ABC123", "title": "Amazing sunset at the beach!", "description": "#travel #nature #sunset", "tagList": ["travel", "nature", "sunset"], "descriptionFull": "Amazing sunset at the beach! #travel #nature #sunset", "timestamp": "2026-01-15T08:30:00Z", "mediaType": "VIDEO", "googleDriveFileId": "1ABC123xyz...", "googleDriveFileName": "instagram_123456789.mp4", "backedUpAt": "2026-02-01T10:00:00Z" } ] } Field Descriptions accountId**: Instagram account ID (from Graph API /me endpoint) instagramId**: Instagram post ID (unique identifier) permalink**: Direct link to Instagram post title**: Caption text before the first hashtag description**: Caption text from first hashtag onward (includes all tags) tagList**: Array of hashtags without the # symbol descriptionFull**: Complete original caption (preserves full text) timestamp**: When the video was originally posted to Instagram mediaType**: VIDEO or REELS googleDriveFileId**: Google Drive file ID (use to access file via Drive API) googleDriveFileName**: Filename in Google Drive (instagram_{postId}.mp4) backedUpAt**: When the video was backed up (ISO timestamp) Caption Parsing Logic The Parse Caption Code node splits Instagram captions intelligently: Example Caption: "Amazing sunset at the beach! 🌅 #travel #nature #sunset" Parsed Fields: title**: "Amazing sunset at the beach! 
🌅" description**: "#travel #nature #sunset" tagList**: ["travel", "nature", "sunset"] descriptionFull**: "Amazing sunset at the beach! 🌅 #travel #nature #sunset" Edge Cases: No hashtags**: Entire caption becomes title, description is empty Hashtag at start**: title is empty, entire caption becomes description Multiple lines**: Preserves all line breaks in descriptionFull Multi-Account Usage Using the same Data Table for multiple accounts: Import this workflow multiple times (once per Instagram account) Configure each workflow with different Instagram credentials Use the same Data Table name in all workflows: Instagram Video Backups Each workflow automatically filters by its own accountId Benefits: Single deduplication table for all accounts Easy to query all backups across accounts Prevents conflicts between accounts with same post IDs Querying specific account backups: // In Data Table or external script const accountBackups = allBackups.filter( backup => backup.accountId === "17841400123456789" ); API Quotas & Limits Instagram Graph API Rate Limits**: 200 calls/hour per user token (standard) This Workflow**: 2 calls total (1 for account info, 1 for media fetch) Impact**: Can run safely within free tier limits Google Drive API Rate Limits**: 1,000 requests per 100 seconds per user This Workflow**: 2 calls per video (upload video + final JSON update) Impact**: 100 videos = ~200 calls, well within limits Recommended Schedule Daily (midnight)**: Default, safe for most accounts Weekly**: Good for accounts with infrequent posting Manual**: On-demand backups when needed Troubleshooting No videos are being backed up Check Instagram credentials: Open "Get Instagram Account Info" node Click "Execute Node" Look for error messages about authentication Verify account has videos: Instagram Graph API only returns VIDEO and REELS Won't backup images or carousels (by design) accountId is empty in Data Table Account info fetch failed: Check Instagram credentials have correct 
permissions Verify token hasn't expired (60-day limit) Test "Get Instagram Account Info" node separately JSON file has wrong title/description Caption parsing issue: Open "Parse Caption" Code node Check the output to see parsed fields Verify caption has hashtags (if no hashtags, entire caption becomes title) Custom parsing logic: Edit the "Parse Caption" Code node to adjust splitting logic: // Current: splits at FIRST hashtag const firstHashtagIndex = caption.indexOf('#'); // Alternative: split at specific word const splitWord = 'DESCRIPTION:'; const splitIndex = caption.indexOf(splitWord); Duplicate videos in Google Drive Data Table issues: Verify table name is exactly: Instagram Video Backups Check table has postId as primary key Verify accountId field exists Workflow execution failed mid-run: If workflow fails after upload but before saving to Data Table, video won't be tracked Safe to delete duplicate video in Google Drive and re-run Rate limit errors Instagram rate limits: Reduce maxVideosPerRun to 50 or 25 Increase waitBetweenDownloads to 10 seconds Google Drive rate limits: Unlikely with default settings If occurs, reduce maxVideosPerRun Caption has special characters (emojis, line breaks) Emojis preserved: All emojis are preserved in descriptionFull May appear in title or description depending on position Line breaks: Line breaks are preserved in descriptionFull May affect title/description split if hashtags are on new lines Advanced Customization Change Backup Folder Update googleDriveFolderId in Configuration node to any Google Drive folder ID. 
Change Schedule Edit the Schedule Trigger node: Daily midnight: 0 0 * * * (default) Every 12 hours: 0 */12 * * * Weekly Sunday: 0 0 * * 0 Custom: Use crontab.guru to generate expression Organize Videos by Date To create monthly subfolders (e.g., 2026-02/video.mp4): Before "Upload to Google Drive" node, add "Google Drive - Create Folder" node Folder name: ={{ $now.format('yyyy-MM') }} Parent folder: ={{ $('Configuration').item.json.googleDriveFolderId }} Update upload node to use created folder ID Download Videos Locally Too To keep local copies in addition to Google Drive: After "Download Video" node, add Write Binary File node File path: /path/to/backup/{{ $('Extract Metadata').item.json.googleDriveFileName }} Connect in parallel with "Upload to Google Drive" Custom Caption Parsing To use different title/description split logic: Option 1: Split at specific keyword const splitKeyword = 'DESCRIPTION:'; const splitIndex = caption.indexOf(splitKeyword); if (splitIndex === -1) { title = caption.trim(); description = ''; } else { title = caption.substring(0, splitIndex).trim(); description = caption.substring(splitIndex + splitKeyword.length).trim(); } Option 2: Use first sentence as title const sentenceEnd = caption.match(/[.!?]/); const endIndex = sentenceEnd ? 
caption.indexOf(sentenceEnd[0]) + 1 : -1; if (endIndex === -1) { title = caption.trim(); description = ''; } else { title = caption.substring(0, endIndex).trim(); description = caption.substring(endIndex).trim(); } Filter by Account in JSON To create separate JSON files per account: Update "Update Metadata JSON" Code node to filter by accountId Change metadataFileName to include account username: instagram-backup-{{ $('Configuration').item.json.accountUsername }}.json Use Cases Search Videos by Hashtag Download the JSON file from Google Drive, then: // Load JSON const metadata = require('./instagram-backup-metadata.json'); // Find all #travel videos const travelVideos = metadata.videos.filter(v => v.tagList.includes('travel') ); console.log(Found ${travelVideos.length} travel videos); Find Videos by Date Range const startDate = new Date('2026-01-01'); const endDate = new Date('2026-01-31'); const videosInRange = metadata.videos.filter(v => { const videoDate = new Date(v.timestamp); return videoDate >= startDate && videoDate <= endDate; }); Generate Reports Import JSON into Google Sheets or Excel to analyze: Most used hashtags Videos per month Backup coverage percentage Videos by account (if using multi-account setup) Migrate to Another Platform The JSON catalog includes permalinks and timestamps, making it easy to: Re-upload to YouTube, TikTok, etc. 
Generate video sitemap for website Create video archive with searchable metadata Known Limitations Only videos: Doesn't backup images or carousel posts (by design) Token expiration: Instagram tokens expire after 60 days, requires manual refresh Storage limits: Google Drive free tier is 15GB No analytics: Doesn't track views, likes, or comments Single folder: All videos in one folder (can be customized, see Advanced Customization) Caption parsing: Assumes first hashtag splits title/description (customizable) Data Privacy Videos are downloaded to n8n temporarily, then uploaded to Google Drive n8n doesn't permanently store video files Metadata JSON contains only public Instagram data Google Drive files are private to your account Instagram access token is encrypted by n8n credentials system Account ID is public data from Instagram Graph API Version History v1.0** (2026-02-01): Initial release Daily automatic backups Google Drive storage JSON metadata catalog with smart caption parsing Multi-account support via accountId Deduplication via Data Tables Title/description/tagList extraction Related Workflows Upload from Instagram to YouTube**: Cross-post videos to YouTube with metadata Instagram to X**: Share posts to Twitter/X Instagram Account Information Tracker**: Track follower metrics and insights over time Additional Resources Instagram Graph API Documentation Google Drive API Documentation n8n Data Tables Guide Instagram Access Token Setup Support If you encounter issues: Check Troubleshooting section above Review n8n execution logs for error details Verify all credentials are active and have required permissions Test with Manual Trigger before relying on Schedule Trigger Check "Parse Caption" node output if title/description is incorrect
by Madame AI
Automated B2B Lead Generation from Google Maps to Google Sheets using BrowserAct This n8n template automates local lead generation by scraping Google Maps for businesses, saving them to Google Sheets, and notifying you in real-time via Telegram. This workflow is perfect for sales teams, marketing agencies, and local B2B services looking to build targeted lead lists automatically. Self-Hosted Only This Workflow uses a community contribution and is designed and tested for self-hosted n8n instances only. How it works The workflow is triggered manually. You can set the Location, Bussines_Category, and number of leads (Extracted_Data) in the first BrowserAct node. A BrowserAct node ("Run a workflow task") initiates the scraping job on Google Maps using your specified criteria. A second BrowserAct node ("Get details of a workflow task") pauses the workflow and waits for the scraping task to be 100% complete. A Code node takes the raw JSON string output from the scraper and correctly parses it, splitting the data into individual items (one for each business). A Google Sheets node appends or updates each lead into your spreadsheet, matching on the "Name" column to prevent duplicate entries. Finally, a Telegram node sends a message with the new lead's details to your specified chat, providing instant notification. Requirements BrowserAct** API account for web scraping BrowserAct* "Google Maps Local Lead Finder*" Template BrowserAct** n8n Community Node -> (n8n Nodes BrowserAct) Google Sheets** credentials for saving leads Telegram** credentials for sending notifications Need Help? How to Find Your BrowseAct API Key & Workflow ID How to Connect n8n to Browseract How to Use & Customize BrowserAct Templates How to Use the BrowserAct N8N Community Node Workflow Guidance and Showcase AUTOMATE Local Lead Generation: Google Maps to Sheets & Telegram with n8n
by Lorena
This workflow synchronizes data one-way from Pipedrive to HubSpot. Cron node** schedules the workflow to run every minute. Pipedrive* and *Hubspot1 nodes pull in both lists of persons from Pipedrive and contacts from HubSpot. Merge node* with the option *Remove Key Matches identifies the items that uniquely exist in Pipedrive. Hubspot2 node** takes those unique items and adds them to HubSpot.
by Lorena
This workflow extracts text from images sent in a Telegram chat and uploads the images to AWS S3. Telegram Trigger node** triggers the workflow when an image is sent in a Telegram channel. AWS S3 node** uploads the sent image to an S3 bucket. AWS Textract node** extracts text from the image. Airtable node** adds the extracted text and image information to a table.
by Harshil Agrawal
This workflow handles the incoming call from Twitter and sends the required response for verification. On registering the webhook with the Twitter Account Activity API, Twitter expects a signature in response. Twitter also randomly ping the webhook to ensure it is active and secure. Webhook node: Use the displayed URL to register with the Account Activity API. Crypto node: In the Secret field, enter your API Key Secret from Twitter. Set node: This node generates the response expected by the Twitter API. Learn more about connecting n8n with Twitter in the Getting Started with Twitter Webhook article.
by mohamed ali
This workflow creates an automatic self-hosted URL shortener. It consists of three sub-workflows: Short URL creation for extracting the provided long URL, generating an ID, and saving the record in the database. It returns a short link as a result. Redirection for extracting the ID value, validating the existence of its correspondent record in the database, and returning a redirection page after updating the visits (click) count. Dashboard for calculating simple statistics about the saved record and displaying them on a dashboard. Read more about this use case and how to set up the workflow in the blog post How to build a low-code, self-hosted URL shortener in 3 steps. Prerequisites A local proxy set up that redirects the n8n.ly domain to your n8n instance An Airtable account and credentials Basic knowledge of JavaScript, HTML, and CSS Nodes Webhook nodes trigger the sub-workflows on calls to a specified link. IF nodes route the workflows based on specified query parameters. Set nodes set the required values returned by the previous nodes (id, longUrl, and shortUrl). Airtable nodes retrieve records (values) from or append records to the database. Function node calculates statistics on link clicks to be displayed on the dashboard, as well as its design. Crypto node generates a SHA256 hash.
by Lorena
This workflow posts a poem translated into English every day in a Telegram chat. Cron node: triggers the workflow every day at 10:00. You can change the time and interval based on your use case. HTTP Request node: makes an HTTP request to the Poemist API that returns a random poem. LingvaNex node: translates the returned poems into English. Telegram node: takes in the translated poem and posts it in the chat.
by Lorena
This workflow automatically promotes your new Shopify products on Twitter and Telegram. This workflow is also featured in the blog post 6 e-commerce workflows to power up your Shopify store. Prerequisites A Shopify account and credentials A Twitter account and credentials A Telegram account and credentials for the channel you want to send messages to. Nodes Shopify Trigger node triggers the workflow when you create a new product in Shopify. Twitter node posts a tweet with the text "Hey there, my design is now on a new product! Visit my {shop name} to get this cool {product title} (and check out more {product type})". Telegram node posts a message with the same text as above in a Telegram channel.
by Harshil Agrawal
This workflow enriches the information of a new contact that gets added to HubSpot. HubSpot Trigger: This node triggers the workflow when a new contact gets added to HubSpot. Get Contact: This node fetches the information of the new contact. Clearbit: This node returns the data of the person and the company associated with the email address. Update Contact: This node will update the contact with the information returned by the Clearbit node. Based on your use case, you can select which fields you want to update.