by Rapiwa
## Who is this for?

This workflow is for Shopify store owners, customer success, and marketing teams who want to automatically check customers' WhatsApp numbers and send personalized messages with discount codes for canceled orders. It helps recover lost sales by reaching out with special offers.

## What This Workflow Does

- Automatically checks for canceled orders on a schedule
- Fetches canceled orders from Shopify
- Creates personalized recovery messages based on customer data
- Verifies customers' WhatsApp numbers via Rapiwa
- Logs results in Google Sheets: "Verified & Sent" for successful messages, "Unverified & Not Sent" for unverified numbers

## Requirements

- Shopify store with API access enabled
- Shopify API credentials with access to orders and customer data
- Rapiwa account and a valid Bearer token
- Google account with Sheets access and OAuth2 credentials

## Setup plan

1. **Add your credentials**
   - Rapiwa: Create an HTTP Bearer credential in n8n and paste your token (example name: Rapiwa Bearer Auth).
   - Google Sheets: Add an OAuth2 credential (example name: Google Sheets).
2. **Set up Shopify**
   - Replace your_shopify_domain with your real Shopify domain.
   - Replace your_shop_access-token with your actual Shopify API token.
3. **Set up Google Sheets**
   - Update the example spreadsheet ID and sheet gid with your own.
   - Make sure your sheet's column headers match the mapping keys exactly: same spelling, same case, and no extra spaces.
4. **Configure the Schedule Trigger**
   - Choose how often you want the workflow to check for canceled orders (daily, weekly, etc.).
5. **Check the HTTP Request nodes**
   - Verify endpoint: should call Rapiwa's verifyWhatsAppNumber.
   - Send endpoint: should use Rapiwa's send-message API with your template (including customer name, reorder link, and discount code).

## Google Sheet Column Structure

The Google Sheets nodes in the flow append rows with the columns below. Make sure the sheet headers match exactly.
A Google Sheet formatted like this (sample):

| Name | Number | Item Name | Coupon | Item Link | Validity | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Abdul Mannan | 8801322827799 | Samsung Galaxy S24 Ultra 5G 256GB-512GB-1TB | REORDER5 | Re-order Link | verified | sent |
| Abdul Mannan | 8801322827790 | Samsung Galaxy S24 Ultra 5G 256GB-512GB-1TB | REORDER5 | Re-order Link | unverified | not sent |

## Important Notes

- Do not hard-code API keys or tokens; always use n8n credentials.
- Google Sheets column header names must match the mapping keys used in the nodes. Trailing spaces are a common accidental problem; trim them in the spreadsheet or adjust the mapping.
- Update the message templates if you need to reference different data.
- The workflow processes canceled orders in batches to avoid rate limits. Adjust the batch size if needed.

## Useful Links

- **Install Rapiwa**: How to install Rapiwa
- **Dashboard**: https://app.rapiwa.com
- **Official Website**: https://rapiwa.com
- **Documentation**: https://docs.rapiwa.com
- **Shopify API Documentation**: https://shopify.dev/docs/admin-api

## Support & Help

- **WhatsApp**: Chat on WhatsApp
- **Discord**: SpaGreen Community
- **Facebook Group**: SpaGreen Support
- **Website**: https://spagreen.net
- **Developer Portfolio**: Codecanyon SpaGreen
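The personalized recovery message can be sketched as a small template function, roughly what an n8n Code node would do before the send step. The field names (customerName, itemName, coupon, reorderLink) are illustrative placeholders, not Rapiwa's or Shopify's actual schema:

```javascript
// Hypothetical sketch: build a recovery message from a canceled order's
// fields. Field names are assumptions for illustration only.
function buildRecoveryMessage(order) {
  return (
    `Hi ${order.customerName}, we noticed your order for ` +
    `${order.itemName} was canceled. Use code ${order.coupon} ` +
    `to re-order here: ${order.reorderLink}`
  );
}

const msg = buildRecoveryMessage({
  customerName: 'Abdul Mannan',
  itemName: 'Samsung Galaxy S24 Ultra 5G 256GB-512GB-1TB',
  coupon: 'REORDER5',
  reorderLink: 'https://example.com/re-order/123',
});
console.log(msg);
```

In the real workflow, this string would be passed to Rapiwa's send-message endpoint only after the verify step confirms the WhatsApp number.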
by Yaron Been
# Amazon Competitive Gap & Assortment Intelligence

## Workflow Description

This workflow automatically scrapes competitor product data from Amazon and identifies gaps in your assortment, pricing, and positioning. It helps merchandising and product teams spot opportunities they are missing before competitors fill them.

## Overview

This workflow uses Bright Data to scrape Amazon product pages, then normalizes the data and feeds it to AI for competitive gap analysis. It identifies:

- Missing product variants
- Bundle expansion ideas
- Positioning gaps
- Pricing weaknesses

Each opportunity is scored and prioritized: high-impact gaps are routed to dedicated sheets, while standard opportunities are logged separately. All results are sent to Google Sheets dashboards for structured decision-making.

## Tools Used

- **n8n**: Automation platform that orchestrates the workflow
- **Bright Data**: Scrapes Amazon product data at scale without getting blocked
- **OpenRouter**: AI-powered competitive clustering, gap detection, and opportunity scoring
- **Google Sheets**: Logs missing variants, bundle opportunities, pricing gaps, and errors

## How to Install

1. **Import the Workflow**: Download the .json file and import it into your n8n instance.
2. **Configure Bright Data**: Add your Bright Data API credentials to the Bright Data node.
3. **Configure OpenRouter**: Add your OpenRouter API key for AI competitive analysis.
4. **Set Up Google Sheets**: Create a spreadsheet following the "Google Sheets Setup" sticky note inside the workflow, then connect each Google Sheets node to your document.
5. **Customize**: Edit the configuration node to define the target Amazon product URL, category scope, competitive depth, and opportunity scoring thresholds.

## Use Cases

- **Merchandising Teams**: Discover product variants competitors carry that are missing from your catalog.
- **Pricing Analysts**: Detect pricing gaps and positioning weaknesses relative to competitors in your category.
- **Product Managers**: Find bundle and cross-sell opportunities based on real competitive data.
- **Category Managers**: Track assortment gaps across an entire product category to prioritize expansion.
- **Ecommerce Strategy**: Build a data-driven competitive intelligence layer for smarter assortment and pricing decisions.

## Connect with Me

- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

Tags: #n8n #automation #brightdata #webscraping #competitiveanalysis #pricingintelligence #assortmentplanning #ecommerce #amazondata #productgaps #pricingstrategy #competitortracking #merchandising #bundleopportunities #n8nworkflow #workflow #nocode #businessintelligence #marketresearch #pricingoptimization #categorymanagement #retailintelligence #competitivelandscape #productexpansion #ecommerceautomation
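The score-then-route step described above can be sketched as a simple threshold branch, similar to what an IF node does before the two Google Sheets destinations. The 0–100 scale and the threshold of 75 are assumptions, not values taken from the workflow:

```javascript
// Illustrative sketch (not the workflow's actual code): opportunities whose
// score meets a configurable threshold go to the high-impact sheet; the
// rest are logged as standard. Scale and threshold are assumptions.
function routeOpportunities(opportunities, threshold = 75) {
  const highImpact = [];
  const standard = [];
  for (const opp of opportunities) {
    (opp.score >= threshold ? highImpact : standard).push(opp);
  }
  return { highImpact, standard };
}

const routed = routeOpportunities([
  { gap: 'Missing 2-pack bundle', score: 88 },
  { gap: 'Color variant not offered', score: 91 },
  { gap: 'Slightly higher price point', score: 40 },
]);
console.log(routed.highImpact.length); // 2
console.log(routed.standard.length);   // 1
```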
by Yves Tkaczyk
# Automated image processing for e-commerce product catalog

## Use cases

Monitor a Google Drive folder, process each image based on the prompt defined in Workflow Configuration, and save the new image to the specified output Google Drive folder. Maintain a processing log in Google Sheets.

👍 This use case can be extended to any scenario requiring batch image processing, for example, unifying the look and feel of team photos on a company website.

## How it works

1. **Trigger**: Watches a Google Drive folder for new or updated files.
2. Downloads the image, processes it using Google Gemini (Nano Banana), and uploads the new image to the specified output folder.

## How to use

1. **Google Drive and Google Sheets nodes**:
   - Create Google credentials with access to Google Drive and Google Sheets. Read more about Google Credentials.
   - Update all Google Drive and Google Sheets nodes (6 nodes total) to use these credentials.
2. **Gemini AI node**:
   - Create Google Gemini (PaLM) API credentials. Read more about Google Gemini (PaLM) credentials.
   - Update the Edit Image node to use the Gemini API credentials.
3. Create a Google Sheets spreadsheet following the steps in Google Sheets Configuration (see right ➡️). Ensure the spreadsheet can be accessed as Editor by the account used for the Google credentials.
4. Create input and output directories in Google Drive. Ensure these directories are accessible by the account used for the credentials.
5. Update the File Created, File Updated, and Workflow Configuration nodes following the steps in the green Notes (see right ➡️).

## Requirements

- Google account with Google API access
- Google AI Studio account with the ability to create a Google Gemini API key
- Basic n8n knowledge: understanding of triggers, expressions, and credential management

## Who's it for

Anyone wanting to batch process images for a product catalog. Other use cases are applicable. Please reach out if you need help customizing this workflow.

## 🔒 Security

All credentials are stored securely using n8n's credential system.
The only potentially sensitive information stored in the workflow is the Google Drive folder and Sheet IDs. These should be secured according to your organization's needs.

## Need Help?

Reach out on LinkedIn or ask in the Forum!
by TAKUTO ISHIKAWA
# Judge AI math RPG answers and update quest status in Google Sheets

## Who it's for

This template is for educators, parents, or self-learners who want to gamify their study routines. This is Part 2 of the "AI Math RPG" system. It handles the quiz judgment and status updates without using expensive AI tokens for basic math checks.

## How it works

1. When a user submits their answer via an n8n Form, the workflow searches Google Sheets for their pending quest.
2. It uses a fast and reliable IF node to check whether the user's answer matches the correct answer generated previously.
3. If correct, it updates the quest status to solved in Google Sheets to prevent infinite EXP farming, and then uses a Basic LLM Chain to generate an enthusiastic, RPG-style victory fanfare.
4. If incorrect, it returns a friendly "try again" message.

## How to set up

1. Ensure you have set up Part 1 of this system (Generate AI math RPG quests from study logs).
2. Connect your Google Sheets credential and replace ENTER_YOUR_SPREADSHEET_ID_HERE with your actual Sheet ID.
3. Connect your OpenAI or OpenRouter credential in the LLM node.
4. Open the "Quiz Answer Form" and enter your user ID and answer to test the battle!

## Requirements

- A Google account (for Google Sheets)
- An OpenAI or OpenRouter API key

## How to customize the workflow

You can easily customize the "Generate Victory Message" prompt to match different themes, like a sci-fi battle, a magic school, or historical events!
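The IF-node comparison can be sketched as a small answer-checking function. Trimming whitespace and accepting numerically equal answers ("7" vs "7.0") are assumptions for robustness; the template's actual IF node may compare raw strings:

```javascript
// Hypothetical sketch of the answer check. Numeric coercion and trimming
// are assumptions, not necessarily what the template's IF node does.
function isCorrect(submitted, expected) {
  const a = String(submitted).trim();
  const b = String(expected).trim();
  const na = Number(a);
  const nb = Number(b);
  // If both sides parse as numbers, compare numerically ("7" matches "7.0").
  if (a !== '' && b !== '' && !Number.isNaN(na) && !Number.isNaN(nb)) {
    return na === nb;
  }
  return a === b; // otherwise, exact string match
}

console.log(isCorrect(' 42 ', '42')); // true
console.log(isCorrect('7.0', '7'));   // true
console.log(isCorrect('x=3', 'x=4')); // false
```

Keeping this check deterministic (no LLM call) is what lets the workflow reserve AI tokens for the victory-message generation only.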
by Ejaz
## How it works

- **Run workflow on schedule** fires on a set interval to pull Reddit accounts from a Google Sheets spreadsheet (filtered to exclude shadowbanned accounts)
- **Smart action calculator** randomly selects 3–8 accounts and decides whether each should **post** or **comment**, respecting cooldown timers (1–3 hr gap for posts, 30–120 min gap for comments) and only operating during active hours (midnight–noon)
- **IP validation loop** routes each account through a proxy, verifies the IP against the account's creation IP using httpbin, and skips if there's a match (to avoid fingerprint overlap)
- **Multilogin browser profile launch** opens an anti-detect browser session per account via the Multilogin API, then connects to the browser's DevTools WebSocket
- **AI Agent (DeepSeek + Browser MCP)** autonomously navigates Reddit, reads subreddit rules, scans recent posts, and either creates a new text post or writes a context-aware comment, all with human-like scroll behavior and natural language
- **Post-action processing** parses the AI's output to extract karma stats, permalinks, and success/failure status, then updates the Google Sheet with timestamps, karma totals, and links
- **Profile cleanup** closes the Multilogin browser profile after each account finishes, then loops to the next account

## Setup steps

**~20 minutes** to configure all credentials and services

1. Connect your Google Sheets service account and point it to your Reddit accounts spreadsheet (columns: multilogin_profile_id, proxy_provider, shadowban?, account_id, account_password, creation_ip, karma, posts_made_today, comments_made_today, time_of_post, time_of_comment, last_allocated_ip, posts_links, comments_links, row_number)
2. Set up your Multilogin API bearer token credential and update the folder ID in the "Open Multilogin Profile" node URL
3. Add your DeepSeek API credential for both AI Agent model nodes
4. Install the Browser MCP community node (n8n-nodes-browser-mcp) and ensure the MCP server is running at the configured baseUrl
5. Update the proxy URL in the "Get Proxy Exit IP" HTTP Request node with your actual proxy credentials
6. Adjust the Run workflow on schedule interval to your desired frequency
7. Review the sticky notes inside the workflow for detailed logic explanations
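The smart action calculator's cooldown and active-hours rules can be sketched as an eligibility check. Field names follow the spreadsheet columns above; using UTC hours and a randomized gap within the stated ranges are assumptions for illustration:

```javascript
// Hypothetical sketch of the cooldown check: posts need a 1-3 h gap,
// comments a 30-120 min gap, and actions run only between midnight and
// noon. UTC is used here for determinism; the real workflow may use the
// account's local time instead.
function canAct(account, action, now) {
  if (now.getUTCHours() >= 12) return false; // active hours: midnight-noon
  const last = new Date(
    action === 'post' ? account.time_of_post : account.time_of_comment
  );
  const elapsedMin = (now - last) / 60000;
  // Randomized cooldown inside the stated range per action type.
  const minGap = action === 'post' ? 60 : 30;
  const maxGap = action === 'post' ? 180 : 120;
  const gap = minGap + Math.random() * (maxGap - minGap);
  return elapsedMin >= gap;
}

const acct = {
  time_of_post: '2026-02-01T01:00:00Z',
  time_of_comment: '2026-02-01T05:30:00Z',
};
console.log(canAct(acct, 'post', new Date('2026-02-01T09:00:00Z'))); // true (8 h elapsed)
console.log(canAct(acct, 'post', new Date('2026-02-01T13:00:00Z'))); // false (outside active hours)
```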
by Aziz dev
## Description

This workflow automates the daily reporting of Google Ads campaign performance. It pulls click and conversion data from the Google Ads API, merges both datasets, and stores the results in Notion databases and Google Sheets. It includes a campaign-level log and a daily performance summary. The workflow is triggered automatically every day at 08:00 AM, helping marketing teams maintain a consistent and centralized reporting system without manual effort.

## How It Works

1. **Scheduled Trigger at 08:00 AM**: The workflow begins with a Schedule Trigger node that runs once per day at 08:00.
2. **Set Yesterday's Date**: The Set node defines a variable for the target date (yesterday), which is used in the API queries.
3. **Query Google Ads API – Clicks & Cost**: The first HTTP request pulls campaign-level metrics: campaign.id, campaign.name, metrics.clicks, metrics.impressions, metrics.cost_micros.
4. **Query Google Ads API – Conversions**: The second HTTP request pulls conversion-related data: metrics.conversions, segments.conversion_action_name.
5. **Split and Merge**: Both responses are split into individual campaign rows and merged on campaign.id and segments.date.
6. **Store Campaign-Level Data**: Stored in the Notion database "Google Ads Campaign Tracker" and appended to the Google Sheets tab "Campaign Daily Report".
7. **Generate Daily Summary**: A Code node calculates daily totals across all campaigns (total impressions, clicks, conversions, cost, and unique conversion types). The summary is stored in the Notion database "Google Ads Daily Summary" and the Google Sheets tab "Summary Report".

## Setup Steps

### 1. Schedule the Workflow

- The workflow is triggered using a Schedule Trigger node
- Set the schedule to run every day at 08:00 AM
- Connect it to the Set Yesterday Date node

### 2. Google Ads API Access

- Create a Google Ads developer account and obtain a developer token
- Set up OAuth2 credentials with the Google Ads scope
- In n8n, configure the Google Ads OAuth2 API credential
- Ensure HTTP request headers include: developer-token, login-customer-id, Content-Type: application/json

### 3. Notion Database Setup

Create two databases in Notion:

- **Google Ads Campaign Tracker** – Fields: Campaign Name, Campaign ID, Impressions, Clicks, Cost, Conversion Type, Conversions, Date
- **Google Ads Daily Summary** – Fields: Date, Total Impressions, Total Clicks, Total Conversions, Total Cost, Conversion Types

Share both databases with your Notion integration.

### 4. Google Sheets Setup

- Create a spreadsheet with two tabs: Campaign Daily Report (for campaign-level rows) and Summary Report (for daily aggregated metrics)
- Match all column headers to the workflow fields
- Connect your Google account to n8n using Google Sheets OAuth2

## Output Summary

**Notion Databases:**

- Google Ads Campaign Tracker: stores individual campaign metrics
- Google Ads Daily Summary: stores daily totals and conversion types

**Google Sheets Tabs:**

- Campaign Daily Report: per-campaign data
- Summary Report: aggregated daily performance
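The daily-summary Code node's aggregation can be sketched as follows. Note that Google Ads reports cost in micros (1,000,000 micros = 1 currency unit), so a single division at the end converts it. The exact row field names here are assumptions based on the queried metrics:

```javascript
// Hypothetical sketch of the summary Code node: sum metrics across merged
// campaign rows and collect unique conversion types. Field names are
// assumed from the GAQL metrics listed above.
function summarize(rows) {
  const totals = { impressions: 0, clicks: 0, conversions: 0, costMicros: 0 };
  const conversionTypes = new Set();
  for (const r of rows) {
    totals.impressions += Number(r.impressions);
    totals.clicks += Number(r.clicks);
    totals.conversions += Number(r.conversions || 0);
    totals.costMicros += Number(r.cost_micros);
    if (r.conversion_action_name) conversionTypes.add(r.conversion_action_name);
  }
  return {
    totalImpressions: totals.impressions,
    totalClicks: totals.clicks,
    totalConversions: totals.conversions,
    totalCost: totals.costMicros / 1e6, // micros -> currency units
    conversionTypes: [...conversionTypes],
  };
}

const out = summarize([
  { impressions: 1000, clicks: 50, conversions: 2, cost_micros: 12500000, conversion_action_name: 'Purchase' },
  { impressions: 400, clicks: 10, conversions: 1, cost_micros: 2500000, conversion_action_name: 'Sign-up' },
]);
console.log(out.totalCost);   // 15
console.log(out.totalClicks); // 60
```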
by Richard Nijsten
# Generate Google Spreadsheets Testscript with AI using Pega Agile Studio

For functional Pega software testers, this workflow creates a Google Spreadsheet with acceptance criteria and test cases based on the Pega Agile Studio user story provided. This improves speed and efficiency while working in sprints on new functionality.

## Who's it for

Software testers using the Pega Platform, including Pega Agile Studio.

## How it works

When the user chats a user story in the format "US-1234", an HTTP Request is made to Pega Agile Studio to retrieve the user story, and a Google Spreadsheet is created. The acceptance criteria are added on a separate sheet for traceability. Next, the AI creates test cases based on the user story provided. Finally, a small cleanup is performed to remove duplicate rows/data created by the AI. You will have a Google Spreadsheet file in your My Drive containing your test cases!

## How to set up

1. In the chat, provide the user story you want to create a testscript for, in the format "US-1234".
2. Add your OAuth2 API credential for Agile Studio, so you can access Pega Agile Studio through API calls.

## Requirements

- Access to the Pega Agile Studio OAuth2 API
- An AI API
- Access to Google Cloud for the Google APIs
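The final cleanup step can be sketched as a deduplication pass over the rows the AI produced. Keying on the joined cell values is an assumption; the actual workflow may deduplicate on a specific column such as the test-case ID:

```javascript
// Hypothetical sketch of the cleanup step: drop duplicate test-case rows
// the AI may emit. The whole-row key is an assumption.
function dedupeRows(rows) {
  const seen = new Set();
  return rows.filter((row) => {
    const key = row.join('\u0000'); // unlikely separator to avoid collisions
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}

const cleaned = dedupeRows([
  ['TC-1', 'Login succeeds with valid credentials'],
  ['TC-2', 'Login fails with wrong password'],
  ['TC-1', 'Login succeeds with valid credentials'], // duplicate
]);
console.log(cleaned.length); // 2
```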
by vinci-king-01
# Subscription Renewal Reminder – Telegram & Supabase

This workflow tracks upcoming subscription expiry dates stored in Supabase and automatically sends personalized renewal-reminder messages to each customer via Telegram. It is designed to be triggered by an HTTP Webhook (manually or on a schedule) and ensures that customers are notified a configurable number of days before their subscription lapses.

> **Community Template Disclaimer**
> This is a community-contributed n8n workflow template. It is provided "as-is" without official support from n8n GmbH. Always test thoroughly before using in production.

## Pre-conditions / Requirements

### Prerequisites

- n8n instance (self-hosted or n8n.cloud)
- Supabase project with a subscriptions table (id, customer_name, expiration_date, telegram_chat_id, notified)
- A Telegram bot created via @BotFather
- Outbound HTTPS access from n8n to api.telegram.org and your Supabase project REST endpoint

### Required Credentials

- **Supabase Service Role Key** – Full access for reading/writing the subscriptions table
- **Telegram Bot Token** – To send messages from your bot
- **n8n Webhook URL** – Auto-generated when you activate the workflow

(A ScrapeGraphAI API key is *not* required for this non-scraping workflow.)

### Specific Setup Requirements

| Environment Variable | Example Value | Purpose |
|----------------------|--------------|---------|
| SUPABASE_URL | https://xyzcompany.supabase.co | Base URL for Supabase REST API |
| SUPABASE_KEY | eyJhbGciOiJI... | Service Role Key |
| TELEGRAM_TOKEN | 609012345:AA... | Bot token obtained from BotFather |
| REMINDER_DAYS | 3 | Days before expiry to notify |

## How it works

The workflow is triggered by an HTTP Webhook (manually or via an external scheduler), finds subscriptions nearing expiry, and notifies each customer via Telegram.
### Key Steps

1. **Receive Trigger (Webhook)**: An external call fires the workflow (an internal Cron node can be added instead).
2. **Set Static Parameters**: The Set node calculates "today + REMINDER_DAYS".
3. **Query Supabase**: Fetch all subscriptions expiring on or before the calculated date and not yet notified.
4. **Branch Logic (If node)**: Check whether any subscriptions were returned.
5. **Loop & Dispatch (Code + Telegram nodes)**: Iterate over each customer row, compose a message, and send it via Telegram.
6. **Flag as Notified (Supabase Update)**: Update each processed row to prevent duplicate reminders.
7. **Respond to Webhook**: Return a concise JSON summary for logging or downstream integrations.

## Set up steps

**Setup time: 15–20 minutes**

1. **Create Telegram Bot**
   a. Open Telegram and talk to @BotFather → /newbot
   b. Copy the given bot token and paste it into n8n Telegram credentials.
2. **Prepare Supabase**
   a. Create a table named subscriptions with columns: id (uuid), customer_name (text), expiration_date (date), telegram_chat_id (text), notified (bool, default false)
   b. Obtain the Service Role Key from Project Settings → API.
3. **Import the Workflow**
   a. In n8n, click Templates → Import and select "Subscription Renewal Reminder – Telegram & Supabase".
   b. Replace the placeholder credentials in the Supabase and Telegram nodes.
4. **Define Environment Variables** (optional but recommended)
   Add SUPABASE_URL, SUPABASE_KEY, TELEGRAM_TOKEN, and REMINDER_DAYS in Settings → Environment Variables for easy maintenance.
5. **Activate the Workflow**
   Copy the production webhook URL and (optionally) set up a cron job or n8n Cron node to hit it daily.

## Node Descriptions

### Core Workflow Nodes

- **Webhook** – Entry point; triggers the workflow via HTTP request.
- **Set (Calculate Target Date)** – Defines targetDate = today + REMINDER_DAYS.
- **Supabase (Select)** – Retrieves expiring subscriptions that haven't been notified.
- **If (Rows > 0?)** – Determines whether to continue or exit early.
- **Code (For-Each Loop)** – Iterates through each returned row to send messages and update status.
- **Telegram** – Sends a personalized renewal reminder to the customer's chat.
- **Supabase (Update)** – Flags the subscription row as notified = true.
- **Respond to Webhook** – Returns a JSON summary with counts of sent messages.
- **Sticky Notes** – Inline documentation for maintainers (non-executable).

### Data Flow

Webhook → Set → Supabase (Select) → If → Code → Telegram → Supabase (Update) → Respond to Webhook

## Customization Examples

### Send Slack Notifications Instead of Telegram

```js
// Replace the Telegram node with a Slack node
const message = `Hi ${item.customer_name}, your subscription expires on ${item.expiration_date}.`;
return [{ text: message, channel: item.slack_channel_id }];
```

### Notify 7 Days & 1 Day Before Expiry

```js
// In the Set node
items[0].json.reminderOffsets = [7, 1]; // days
return items;
```

## Data Output Format

The workflow outputs structured JSON data:

```json
{
  "totalSubscriptionsChecked": 42,
  "remindersSent": 13,
  "timestamp": "2024-05-27T09:15:22.000Z"
}
```

## Troubleshooting

### Common Issues

- **No messages sent** – Check the If node; ensure REMINDER_DAYS is set correctly and the Supabase query returns rows.
- **Telegram error 403** – The user hasn't started a chat with your bot. Ask the customer to click "Start" in Telegram.

### Performance Tips

- Batch database updates instead of row-by-row when dealing with thousands of records.
- Cache Supabase responses if you expect multiple workflows to query the same data within seconds.

### Pro Tips

- Use the Cron node inside n8n instead of external schedulers for a fully self-contained setup.
- Add an Email node after the Telegram node for multi-channel reminders.
- Store template messages in Supabase so non-developers can update wording without editing the workflow.
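The Set node's "today + REMINDER_DAYS" calculation can be sketched in a few lines. Producing a YYYY-MM-DD string for the Supabase date filter and reading the offset from an environment variable are assumptions:

```javascript
// Hypothetical sketch of the target-date math: today + REMINDER_DAYS as a
// YYYY-MM-DD string suitable for filtering on the expiration_date column.
// UTC is used throughout to keep the result unambiguous.
function targetDate(today, reminderDays) {
  const d = new Date(today);
  d.setUTCDate(d.getUTCDate() + reminderDays); // handles month rollover
  return d.toISOString().slice(0, 10);
}

console.log(targetDate('2024-05-27', 3)); // "2024-05-30"
console.log(targetDate('2024-05-30', 3)); // "2024-06-02" (rolls into June)
```

The Supabase Select would then filter roughly on expiration_date <= targetDate AND notified = false, matching the query described in the key steps.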
by Lucio
# Instagram Video Backup to Google Drive

Automatically back up all your Instagram videos to Google Drive with a searchable metadata catalog in JSON format.

## What It Does

This workflow provides a complete backup solution for your Instagram video content with intelligent caption parsing:

- Fetches your Instagram account ID and videos (VIDEO and REELS types)
- Parses captions into structured fields:
  - **Title**: Everything before the first hashtag
  - **Description**: Everything after the first hashtag (includes all tags)
  - **Tag List**: All hashtags extracted as an array
  - **Description Full**: Complete original caption text
- Downloads videos in maximum available quality from Instagram
- Uploads videos to a designated Google Drive folder
- Creates/updates a JSON metadata file with all video details
- Prevents duplicates using n8n Data Tables with account-level filtering

## Key Features

- **Account-Level Tracking**: The Data Table includes accountId, so you can use the same table across multiple Instagram accounts. Each account's videos are tracked separately.
- **Smart Caption Parsing**: Automatically splits Instagram captions into title (before the first #) and description (all hashtags and text after), with the full text preserved in descriptionFull.
- **Portable Catalog**: The JSON file is stored in Google Drive alongside your videos, making it accessible anywhere without needing n8n.
- **Maximum Quality**: Uses the Instagram Graph API's media_url field for the highest available quality.
- **Hashtag Extraction**: Automatically extracts all hashtags into an array for easy filtering and analysis.
## Workflow Architecture

### Section 1: Fetch & Filter

Get Instagram Account Info → Configuration → Fetch Media → Split Out Items → Filter Videos Only

- **Get Instagram Account Info**: Fetches your Instagram account ID and username
- **Configuration**: Stores the account ID, Google Drive folder ID, and settings
- **Fetch Media**: Gets up to 100 media items from Instagram
- **Split Out Items**: Separates each media item for individual processing
- **Filter Videos Only**: Keeps only VIDEO and REELS types (skips images)

### Section 2: Process Videos

Check If Backed Up → IF Not Backed Up → Wait → Parse Caption → Download → Upload → Extract Metadata → Save Record → Aggregate

For each video:

1. **Check If Already Backed Up**: Queries the Data Table by postId to avoid duplicates
2. **IF Not Already Backed Up**: Skips if the video already exists
3. **Wait**: 5-second delay between downloads (prevents API rate limits)
4. **Parse Caption**: Splits the caption into title, description, tagList, descriptionFull
5. **Download Video**: Downloads the video file from Instagram to memory
6. **Upload to Google Drive**: Uploads the video to the configured folder
7. **Extract Metadata**: Creates a structured metadata object with all fields
8. **Save Backup Record**: Stores accountId, postId, googleDriveFileId, backedUpAt in the Data Table
9. **Aggregate**: Collects all new video metadata for the JSON update

### Section 3: Update JSON Catalog

End Loop → Download Existing JSON → Update JSON → Upload Updated JSON

After all videos are processed:

1. **Download Existing Metadata JSON**: Gets the current JSON file from Google Drive (if it exists)
2. **Update Metadata JSON**: Appends new video metadata to the existing catalog
3. **Upload Updated Metadata JSON**: Saves the updated JSON back to Google Drive

## Setup Steps

### 1. Create Google Drive Folder

1. Go to Google Drive
2. Create a new folder named Instagram Video Backups (or any name you prefer)
3. Open the folder and copy the Folder ID from the URL (the part after /folders/):
   https://drive.google.com/drive/folders/1ABC123xyz...

### 2. Create n8n Data Table

Create a Data Table for deduplication tracking with account-level support:

**Table Name**: Instagram Video Backups

**Schema**:

| Field Name | Type | Description |
|------------|------|-------------|
| accountId | string | Instagram account ID (allows multi-account use) |
| postId | string (Primary Key) | Instagram post ID |
| googleDriveFileId | string | Google Drive file ID for the video |
| backedUpAt | string | ISO timestamp of backup |

**Why accountId?** This allows you to use the same Data Table for multiple Instagram accounts. Each account's videos are tracked separately, preventing conflicts.

### 3. Configure Credentials

You'll need two credential sets:

**Instagram Graph API (HTTP Bearer Auth)**

1. In n8n, create a new credential: HTTP Bearer Auth
2. Set the header name: Authorization
3. Set the header value: Bearer YOUR_INSTAGRAM_ACCESS_TOKEN
4. Name it: Instagram Graph API

Getting an Instagram access token:

- Follow Meta's Business Account setup guide
- Required permission: instagram_graph_user_media
- Tokens expire after 60 days (requires manual refresh)

**Google Drive OAuth2**

1. In n8n, create a new credential: Google Drive OAuth2 API
2. Follow the OAuth flow to authorize your Google account
3. Name it: Google Drive Account

### 4. Update Configuration Node

In the workflow, open the Configuration node and update:

```json
{
  "googleDriveFolderId": "PASTE_YOUR_FOLDER_ID_HERE",
  "maxVideosPerRun": 100,
  "waitBetweenDownloads": 5,
  "metadataFileName": "instagram-backup-metadata.json"
}
```

Settings explained:

- **googleDriveFolderId**: The folder ID you copied in step 1
- **maxVideosPerRun**: Max videos to process per run (100 is safe for API limits)
- **waitBetweenDownloads**: Seconds to wait between downloads (prevents rate limits)
- **metadataFileName**: Name of the JSON catalog file in Google Drive

Note: accountId and accountUsername are automatically populated from the Instagram API.

### 5. Test & Activate

1. Click Manual Trigger to test the workflow
2. Check the Google Drive folder for:
   - Video files named instagram_{postId}.mp4
   - A JSON file named instagram-backup-metadata.json
3. Verify the Data Table has records with accountId and postId
4. Activate the Schedule Trigger for daily automatic backups

## Metadata JSON Structure

The JSON file stored in Google Drive has this structure:

```json
{
  "lastUpdated": "2026-02-01T10:00:00Z",
  "totalVideos": 42,
  "videos": [
    {
      "accountId": "17841400123456789",
      "instagramId": "123456789",
      "permalink": "https://instagram.com/p/ABC123",
      "title": "Amazing sunset at the beach!",
      "description": "#travel #nature #sunset",
      "tagList": ["travel", "nature", "sunset"],
      "descriptionFull": "Amazing sunset at the beach! #travel #nature #sunset",
      "timestamp": "2026-01-15T08:30:00Z",
      "mediaType": "VIDEO",
      "googleDriveFileId": "1ABC123xyz...",
      "googleDriveFileName": "instagram_123456789.mp4",
      "backedUpAt": "2026-02-01T10:00:00Z"
    }
  ]
}
```

### Field Descriptions

- **accountId**: Instagram account ID (from the Graph API /me endpoint)
- **instagramId**: Instagram post ID (unique identifier)
- **permalink**: Direct link to the Instagram post
- **title**: Caption text before the first hashtag
- **description**: Caption text from the first hashtag onward (includes all tags)
- **tagList**: Array of hashtags without the # symbol
- **descriptionFull**: Complete original caption (preserves the full text)
- **timestamp**: When the video was originally posted to Instagram
- **mediaType**: VIDEO or REELS
- **googleDriveFileId**: Google Drive file ID (use it to access the file via the Drive API)
- **googleDriveFileName**: Filename in Google Drive (instagram_{postId}.mp4)
- **backedUpAt**: When the video was backed up (ISO timestamp)

## Caption Parsing Logic

The Parse Caption Code node splits Instagram captions intelligently.

Example caption: "Amazing sunset at the beach! 🌅 #travel #nature #sunset"

Parsed fields:

- **title**: "Amazing sunset at the beach! 🌅"
- **description**: "#travel #nature #sunset"
- **tagList**: ["travel", "nature", "sunset"]
- **descriptionFull**: "Amazing sunset at the beach! 🌅 #travel #nature #sunset"

Edge cases:

- **No hashtags**: The entire caption becomes title; description is empty
- **Hashtag at start**: title is empty; the entire caption becomes description
- **Multiple lines**: All line breaks are preserved in descriptionFull

## Multi-Account Usage

Using the same Data Table for multiple accounts:

1. Import this workflow multiple times (once per Instagram account)
2. Configure each workflow with different Instagram credentials
3. Use the same Data Table name in all workflows: Instagram Video Backups
4. Each workflow automatically filters by its own accountId

Benefits:

- Single deduplication table for all accounts
- Easy to query all backups across accounts
- Prevents conflicts between accounts with the same post IDs

Querying a specific account's backups:

```js
// In the Data Table or an external script
const accountBackups = allBackups.filter(
  backup => backup.accountId === "17841400123456789"
);
```

## API Quotas & Limits

### Instagram Graph API

- **Rate limits**: 200 calls/hour per user token (standard)
- **This workflow**: 2 calls total (1 for account info, 1 for media fetch)
- **Impact**: Can run safely within free-tier limits

### Google Drive API

- **Rate limits**: 1,000 requests per 100 seconds per user
- **This workflow**: 2 calls per video (upload video + final JSON update)
- **Impact**: 100 videos = ~200 calls, well within limits

### Recommended Schedule

- **Daily (midnight)**: Default, safe for most accounts
- **Weekly**: Good for accounts with infrequent posting
- **Manual**: On-demand backups when needed

## Troubleshooting

### No videos are being backed up

1. Check Instagram credentials:
   - Open the "Get Instagram Account Info" node
   - Click "Execute Node"
   - Look for error messages about authentication
2. Verify the account has videos:
   - The Instagram Graph API only returns VIDEO and REELS
   - The workflow won't back up images or carousels (by design)

### accountId is empty in the Data Table

Account info fetch failed:

- Check that the Instagram credentials have the correct permissions
- Verify the token hasn't expired (60-day limit)
- Test the "Get Instagram Account Info" node separately

### JSON file has the wrong title/description

Caption parsing issue:

- Open the "Parse Caption" Code node
- Check the output to see the parsed fields
- Verify the caption has hashtags (if there are none, the entire caption becomes title)

Custom parsing logic: edit the "Parse Caption" Code node to adjust the splitting logic:

```js
// Current: splits at the FIRST hashtag
const firstHashtagIndex = caption.indexOf('#');

// Alternative: split at a specific word
const splitWord = 'DESCRIPTION:';
const splitIndex = caption.indexOf(splitWord);
```

### Duplicate videos in Google Drive

Data Table issues:

- Verify the table name is exactly: Instagram Video Backups
- Check the table has postId as the primary key
- Verify the accountId field exists

Workflow execution failed mid-run:

- If the workflow fails after upload but before saving to the Data Table, the video won't be tracked
- It is safe to delete the duplicate video in Google Drive and re-run

### Rate limit errors

Instagram rate limits:

- Reduce maxVideosPerRun to 50 or 25
- Increase waitBetweenDownloads to 10 seconds

Google Drive rate limits:

- Unlikely with default settings
- If they occur, reduce maxVideosPerRun

### Caption has special characters (emojis, line breaks)

Emojis:

- All emojis are preserved in descriptionFull
- They may appear in title or description depending on position

Line breaks:

- Line breaks are preserved in descriptionFull
- They may affect the title/description split if hashtags are on new lines

## Advanced Customization

### Change Backup Folder

Update googleDriveFolderId in the Configuration node to any Google Drive folder ID.
Change Schedule
Edit the Schedule Trigger node:
- Daily at midnight: `0 0 * * *` (default)
- Every 12 hours: `0 */12 * * *`
- Weekly on Sunday: `0 0 * * 0`
- Custom: use crontab.guru to generate an expression

Organize Videos by Date
To create monthly subfolders (e.g., 2026-02/video.mp4):
1. Before the "Upload to Google Drive" node, add a "Google Drive - Create Folder" node
2. Folder name: `={{ $now.format('yyyy-MM') }}`
3. Parent folder: `={{ $('Configuration').item.json.googleDriveFolderId }}`
4. Update the upload node to use the created folder ID

Download Videos Locally Too
To keep local copies in addition to Google Drive:
1. After the "Download Video" node, add a Write Binary File node
2. File path: `/path/to/backup/{{ $('Extract Metadata').item.json.googleDriveFileName }}`
3. Connect it in parallel with "Upload to Google Drive"

Custom Caption Parsing
To use a different title/description split logic:

Option 1: split at a specific keyword

```javascript
const splitKeyword = 'DESCRIPTION:';
const splitIndex = caption.indexOf(splitKeyword);
if (splitIndex === -1) {
  title = caption.trim();
  description = '';
} else {
  title = caption.substring(0, splitIndex).trim();
  description = caption.substring(splitIndex + splitKeyword.length).trim();
}
```

Option 2: use the first sentence as the title

```javascript
const sentenceEnd = caption.match(/[.!?]/);
const endIndex = sentenceEnd ? caption.indexOf(sentenceEnd[0]) + 1 : -1;
if (endIndex === -1) {
  title = caption.trim();
  description = '';
} else {
  title = caption.substring(0, endIndex).trim();
  description = caption.substring(endIndex).trim();
}
```

Filter by Account in JSON
To create separate JSON files per account:
1. Update the "Update Metadata JSON" Code node to filter by accountId
2. Change metadataFileName to include the account username: instagram-backup-{{ $('Configuration').item.json.accountUsername }}.json

**Use Cases**

Search Videos by Hashtag
Download the JSON file from Google Drive, then:

```javascript
// Load the JSON catalog
const metadata = require('./instagram-backup-metadata.json');

// Find all #travel videos
const travelVideos = metadata.videos.filter(v =>
  v.tagList.includes('travel')
);
console.log(`Found ${travelVideos.length} travel videos`);
```

Find Videos by Date Range

```javascript
const startDate = new Date('2026-01-01');
const endDate = new Date('2026-01-31');

const videosInRange = metadata.videos.filter(v => {
  const videoDate = new Date(v.timestamp);
  return videoDate >= startDate && videoDate <= endDate;
});
```

Generate Reports
Import the JSON into Google Sheets or Excel to analyze:
- Most used hashtags
- Videos per month
- Backup coverage percentage
- Videos by account (if using the multi-account setup)

Migrate to Another Platform
The JSON catalog includes permalinks and timestamps, making it easy to:
- Re-upload to YouTube, TikTok, etc.
- Generate a video sitemap for a website
- Create a video archive with searchable metadata

**Known Limitations**
- **Only videos**: doesn't back up images or carousel posts (by design)
- **Token expiration**: Instagram tokens expire after 60 days and require a manual refresh
- **Storage limits**: the Google Drive free tier is 15 GB
- **No analytics**: doesn't track views, likes, or comments
- **Single folder**: all videos land in one folder (customizable; see Advanced Customization)
- **Caption parsing**: assumes the first hashtag splits the title and description (customizable)

**Data Privacy**
- Videos are downloaded to n8n temporarily, then uploaded to Google Drive
- n8n doesn't permanently store video files
- The metadata JSON contains only public Instagram data
- Google Drive files are private to your account
- The Instagram access token is encrypted by n8n's credentials system
- The account ID is public data from the Instagram Graph API

**Version History**
- **v1.0** (2026-02-01): initial release with daily automatic backups, Google Drive storage, a JSON metadata catalog with smart caption parsing, multi-account support via accountId, deduplication via Data Tables, and title/description/tagList extraction

**Related Workflows**
- **Upload from Instagram to YouTube**: cross-post videos to YouTube with metadata
- **Instagram to X**: share posts to Twitter/X
- **Instagram Account Information Tracker**: track follower metrics and insights over time

**Additional Resources**
- Instagram Graph API Documentation
- Google Drive API Documentation
- n8n Data Tables Guide
- Instagram Access Token Setup

**Support**
If you encounter issues:
1. Check the Troubleshooting section above
2. Review the n8n execution logs for error details
3. Verify all credentials are active and have the required permissions
4. Test with the Manual Trigger before relying on the Schedule Trigger
5. Check the "Parse Caption" node output if the title/description is incorrect
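As a companion to the "Generate Reports" use case, the "most used hashtags" report can be computed directly from the JSON catalog instead of a spreadsheet. A minimal sketch, assuming `metadata.videos` entries carry the `tagList` arrays this workflow produces (the function name and limit are illustrative):

```javascript
// Count hashtag frequency across all backed-up videos and
// return the top entries as [tag, count] pairs, most used first.
function topHashtags(videos, limit = 10) {
  const counts = {};
  for (const video of videos) {
    for (const tag of video.tagList || []) {
      counts[tag] = (counts[tag] || 0) + 1;
    }
  }
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit);
}
```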
by Madame AI
Automated B2B Lead Generation from Google Maps to Google Sheets using BrowserAct

This n8n template automates local lead generation by scraping Google Maps for businesses, saving them to Google Sheets, and notifying you in real time via Telegram. It is ideal for sales teams, marketing agencies, and local B2B services looking to build targeted lead lists automatically.

**Self-Hosted Only**
This workflow uses a community node and is designed and tested for self-hosted n8n instances only.

**How it works**
1. The workflow is triggered manually. You can set the Location, Bussines_Category, and number of leads (Extracted_Data) in the first BrowserAct node.
2. A BrowserAct node ("Run a workflow task") initiates the scraping job on Google Maps using your specified criteria.
3. A second BrowserAct node ("Get details of a workflow task") pauses the workflow and waits until the scraping task is 100% complete.
4. A Code node parses the raw JSON string output from the scraper, splitting the data into individual items (one per business).
5. A Google Sheets node appends or updates each lead in your spreadsheet, matching on the "Name" column to prevent duplicate entries.
6. Finally, a Telegram node sends a message with each new lead's details to your specified chat for instant notification.

**Requirements**
- **BrowserAct** API account for web scraping
- BrowserAct **"Google Maps Local Lead Finder"** template
- **BrowserAct** n8n community node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for saving leads
- **Telegram** credentials for sending notifications

**Need Help?**
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

**Workflow Guidance and Showcase**
AUTOMATE Local Lead Generation: Google Maps to Sheets & Telegram with n8n
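The Code node's parsing step can be sketched as follows. This is a hedged reconstruction, not the template's exact code: `rawOutput` stands in for whichever field the BrowserAct task result exposes the scraped JSON string on (an assumption), and the return shape is the `[{ json: {...} }]` item array n8n Code nodes emit:

```javascript
// Turn the scraper's raw JSON string into one n8n item per business.
// `rawOutput` is a placeholder name for the BrowserAct result field.
function splitLeads(rawOutput) {
  const leads = JSON.parse(rawOutput);        // raw string -> array of businesses
  return leads.map(lead => ({ json: lead })); // n8n item shape: { json: {...} }
}
```

Each resulting item then flows individually into the Google Sheets and Telegram nodes downstream.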
by Harshil Agrawal
This workflow handles the incoming call from Twitter and sends the required response for verification. When you register the webhook with the Twitter Account Activity API, Twitter expects a signature in the response. Twitter also randomly pings the webhook to ensure it is active and secure.

- **Webhook node**: use the displayed URL to register with the Account Activity API.
- **Crypto node**: in the Secret field, enter your API Key Secret from Twitter.
- **Set node**: generates the response expected by the Twitter API.

Learn more about connecting n8n with Twitter in the Getting Started with Twitter Webhook article.
by mohamed ali
This workflow creates an automatic self-hosted URL shortener. It consists of three sub-workflows:
1. **Short URL creation**: extracts the provided long URL, generates an ID, and saves the record in the database. It returns a short link as the result.
2. **Redirection**: extracts the ID value, validates that a corresponding record exists in the database, and returns a redirection page after updating the visit (click) count.
3. **Dashboard**: calculates simple statistics about the saved records and displays them on a dashboard.

Read more about this use case and how to set up the workflow in the blog post How to build a low-code, self-hosted URL shortener in 3 steps.

**Prerequisites**
- A local proxy set up that redirects the n8n.ly domain to your n8n instance
- An Airtable account and credentials
- Basic knowledge of JavaScript, HTML, and CSS

**Nodes**
- **Webhook nodes** trigger the sub-workflows on calls to a specified link.
- **IF nodes** route the workflows based on specified query parameters.
- **Set nodes** set the required values returned by the previous nodes (id, longUrl, and shortUrl).
- **Function node** calculates statistics on link clicks to be displayed on the dashboard, as well as its design.
- **Crypto node** generates a SHA256 hash.