by isaWOW
Paste any video URL into the form, choose your target languages, and this workflow handles everything else. It sends the video to WayinVideo AI, which automatically generates short vertical clips with translated subtitles in every language you selected. Each clip is downloaded and saved directly to your Google Drive folder — ready to post on TikTok, Instagram Reels, or YouTube Shorts. Built for content agencies, social media managers, and global brands who want to repurpose videos across multiple languages at scale — without editing a single frame.

## What This Workflow Does

- **Multilingual clip generation** — Sends your video to WayinVideo once and generates a separate set of AI clips per language, all in one run
- **Translated subtitles embedded** — Every clip includes AI-generated captions translated into the target language — no manual subtitle work needed
- **Auto-polling with retry** — Waits 45 seconds, checks if clips are ready, and loops back automatically every 30 seconds until all results arrive
- **Batch clip processing** — Extracts every clip from the results — title, score, tags, description, and timestamps — and processes each one individually
- **Automatic file download** — Downloads each clip video file directly from the WayinVideo export link, no manual clicking required
- **Google Drive upload** — Saves every downloaded clip to your specified Google Drive folder, named using the clip's AI-generated title
- **Team-friendly form** — A simple web form lets anyone on your team submit jobs — no n8n knowledge needed

## Setup Requirements

Tools you'll need:
- Active n8n instance (self-hosted or n8n Cloud)
- WayinVideo account + API key
- Google account connected to n8n via OAuth2

Estimated Setup Time: 10–15 minutes

## Step-by-Step Setup

1. **Get your WayinVideo API key** — Log in at WayinVideo, go to your account settings or developer section, and copy your API key.
2. **Paste the API key into node "3. WayinVideo — Submit Clipping Task"** — Open this node, find the Authorization header, and replace YOUR_WAYINVIDEO_API_KEY with your actual key.
3. **Paste the API key into node "5. WayinVideo — Get Clips Result"** — Open this node, find the same Authorization header, and replace YOUR_WAYINVIDEO_API_KEY again.

> ⚠️ This key appears in 2 nodes — you must replace it in both "3. WayinVideo — Submit Clipping Task" and "5. WayinVideo — Get Clips Result" or the workflow will fail.

4. **Connect your Google account** — Open node "10. Google Drive — Upload Clip". Click the credential field and connect your Google account via OAuth2. Follow the on-screen prompts to authorise n8n.
5. **Set your Google Drive folder ID** — In node "10. Google Drive — Upload Clip", find the folderId field. Replace YOUR_GDRIVE_FOLDER_ID with your actual folder ID. To find it: open your target Google Drive folder in a browser — the folder ID is the string of letters and numbers at the end of the URL after /folders/.
6. **Activate the workflow** — Toggle the workflow to Active. Open the form URL generated by node "1. Form — Video URL + Languages" and submit a test video with one or two language codes to confirm everything works end to end.

## How It Works (Step by Step)

**Step 1 — Form Trigger (Web Form)**
The workflow starts when someone fills out the web form. They enter four things: the video URL, the brand or project name, the target languages as comma-separated codes (e.g. en,hi,es,fr), and the number of clips to generate per language. This form is hosted by n8n and can be shared with anyone on your team — no technical knowledge required.
**Step 2 — Split by Language (Code)**
The language codes entered in the form are split into individual items — one per language. So if you entered en,hi,es, the workflow creates three separate jobs. Each job carries the video URL, brand name, target language, and clip count. From this point, every language runs through the same steps independently (a minimal sketch of this Code node appears at the end of this section).

**Step 3 — Submit to WayinVideo AI**
For each language, the video URL and settings are sent to the WayinVideo API. The request includes clip duration (30–60 seconds), HD 720p resolution, 9:16 vertical ratio for social media, AI reframing enabled, and captions set to display translated text in the target language. WayinVideo processes the video and returns a task ID used to track that specific job.

**Step 4 — Wait 45 Seconds**
The workflow pauses for 45 seconds to give WayinVideo time to process the video before the first status check. This prevents unnecessary requests from being sent too early, when results are not yet available.

**Step 5 — Poll for Results**
The workflow calls the WayinVideo results endpoint using the task ID from Step 3. It checks: "Are the clips ready?" and receives either a completed clips array or a status that indicates processing is still in progress.

**Step 6 — Check: Status SUCCEEDED? (YES / NO branch)**
- **YES** — If the status equals SUCCEEDED, the workflow moves forward to extract and process each clip.
- **NO** — If the job is still processing, the workflow routes to a 30-second retry wait, then loops back to Step 5 and polls again. This continues automatically until the clips are ready.

> ⚠️ Infinite Loop Risk: If WayinVideo never returns a SUCCEEDED status — due to an invalid video URL, a private video, or an API error — this loop will run forever. Consider adding a retry counter to stop the loop after a set number of attempts and send an error alert instead.

**Step 7 — Wait 30 Seconds (Retry)**
When clips are not ready yet, the workflow pauses for 30 seconds before polling again. This gap prevents hitting WayinVideo's API rate limits during the retry loop.

**Step 8 — Extract Each Clip (Code)**
Once the status is SUCCEEDED, a code step reads the clips data and splits it into individual items — one per clip. Each item includes the clip title, export download link, AI score, tags, description, and start/end timestamps in milliseconds.

**Step 9 — Download Each Clip File**
For each clip, the workflow fetches the video file from WayinVideo's export link and downloads the binary file into n8n's memory, ready to be saved.

**Step 10 — Upload to Google Drive**
Each downloaded clip file is uploaded to your Google Drive folder. The file is named using the AI-generated clip title from Step 8, so every clip arrives in Drive with a clean, descriptive, ready-to-use filename.

The final result is a Google Drive folder containing all short clips — organised by AI-generated titles — in every language you requested, ready to publish.
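As a reference for Step 2, here is a minimal sketch of what the split-by-language Code node might look like. It assumes the form fields arrive as `video_url`, `brand_name`, `languages`, and `clip_count`; adjust the names to match your actual form trigger output.

```javascript
// Run once for all items: fan out one job per language code.
const form = $input.first().json;

// Split "en,hi,es" (or "en, hi, es") into clean two-letter codes.
const languages = (form.languages || '')
  .split(',')
  .map(code => code.trim().toLowerCase())
  .filter(code => code.length > 0);

// Emit one item per language; each carries the shared job settings.
return languages.map(lang => ({
  json: {
    video_url: form.video_url,
    brand_name: form.brand_name,
    target_lang: lang,
    clip_count: form.clip_count,
  },
}));
```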
## Key Features

- ✅ **Multilingual in one run** — Enter multiple language codes and the workflow generates a full clip set per language automatically — no need to run it separately for each language
- ✅ **Translated captions baked in** — Subtitles are translated and embedded at the WayinVideo stage — clips arrive ready to post with no extra editing
- ✅ **9:16 vertical format** — Every clip is automatically reframed for TikTok, Instagram Reels, and YouTube Shorts — no manual cropping required
- ✅ **AI clip scoring** — WayinVideo ranks clips by engagement potential, so you always get the most shareable moments first
- ✅ **Auto-retry polling** — The workflow keeps checking until clips are ready — you don't need to monitor or manually re-run anything
- ✅ **Smart file naming** — Each clip in Drive is named using its AI-generated title, not a random ID — your folder stays organised and client-ready
- ✅ **Batch processing** — Multiple clips per language are all downloaded and uploaded in a single run
- ✅ **Team-friendly form input** — Anyone on your team can submit a video job through the web form — no access to n8n needed

## Customisation Options

**Generate shorter clips for TikTok** — In node "3. WayinVideo — Submit Clipping Task", change target_duration from DURATION_30_60 to DURATION_15_30 to generate 15–30 second clips better suited to TikTok's shorter format.

**Switch to square format for Twitter/LinkedIn** — In the same Submit node, change ratio from RATIO_9_16 to RATIO_1_1 to produce square clips optimised for Twitter and LinkedIn feeds.

**Add Slack or email notifications** — Insert a Slack or Gmail node after "10. Google Drive — Upload Clip" to automatically ping your team with a message and the Drive folder link every time a new batch of clips is saved.

**Log clip metadata to Google Sheets** — After the upload step, add a Google Sheets "Append Row" node to record each clip's title, score, tags, language, and Drive link in a spreadsheet — useful for content calendars and client reporting.

**Sort clips into language-specific subfolders** — Pass the target_lang value from Step 2 into the folder selection in "10. Google Drive — Upload Clip" to automatically save each language's clips into its own subfolder (e.g. /clips/hi/, /clips/es/).

**Add a retry limit to prevent infinite loops** — Add a Set node before the retry wait to track a counter. Add a second IF node to check whether the counter exceeds 8–10 attempts — if it does, route to a Gmail or Slack node to send an error alert and stop the loop.

## Troubleshooting

**API key not working:**
- Check that you replaced YOUR_WAYINVIDEO_API_KEY in both node "3. WayinVideo — Submit Clipping Task" and node "5. WayinVideo — Get Clips Result"
- Confirm your WayinVideo account is active and the key has not expired
- Make sure there are no extra spaces before or after the key when pasting

**Workflow stuck in the polling loop:**
- Check that the video URL is publicly accessible — private, age-restricted, or geo-blocked videos will not process in WayinVideo
- Open the output of node "5. WayinVideo — Get Clips Result" and inspect the raw response to see if WayinVideo returned an error message
- If clips never arrive, the workflow will loop indefinitely — fix the video URL and re-run

**Wrong language codes entered:**
- Use standard ISO 639-1 two-letter codes only: en, hi, es, fr, pt, ar, de, ja, zh, ko, ru, id
- Do not use full language names like "Hindi" or "Spanish" — these will not be recognised by the WayinVideo API
- Separate codes with commas and no spaces (e.g. en,hi,es) or with spaces after commas — the workflow trims them automatically

**Google Drive upload failing:**
- Make sure the Google OAuth2 credential in "10. Google Drive — Upload Clip" is connected and not expired — reconnect it if needed
- Confirm the folder ID is correct: it should be just the ID string (e.g. 1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs), not the full URL
- Check that your Google account has write permission for that specific folder

**Form not triggering the workflow:**
- Make sure the workflow is set to Active — it will not accept form submissions in inactive mode
- Copy the form URL directly from node "1. Form — Video URL + Languages" by clicking the node and finding the production URL
- If testing inside n8n, use the production URL, not the test URL

## Support

Need help setting this up or want a custom version built for your team or agency?

📧 Email: info@isawow.com
🌐 Website: https://isawow.com
by vinci-king-01
## Breaking News Aggregator with SendGrid and PostgreSQL

⚠️ **COMMUNITY TEMPLATE DISCLAIMER**: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes multiple government and regulatory websites, extracts the latest policy or compliance-related news, stores the data in PostgreSQL, and instantly emails daily summaries to your team through SendGrid. It is ideal for compliance professionals and industry analysts who need near real-time awareness of regulatory changes impacting their sector.

### Pre-conditions/Requirements

Prerequisites:
- n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed
- Operational SendGrid account
- PostgreSQL database accessible from n8n
- Basic knowledge of SQL for table creation

Required Credentials:
- **ScrapeGraphAI API Key** – Enables web scraping and parsing
- **SendGrid API Key** – Sends email notifications
- **PostgreSQL Credentials** – Host, port, database, user, and password

Specific Setup Requirements:

| Resource | Requirement | Example Value |
|----------------|------------------------------------------------------------------|------------------------------|
| PostgreSQL | Table with columns: id, title, url, source, published_at | news_updates |
| Allowed Hosts | Outbound HTTPS access from n8n to target sites & SendGrid endpoint | https://*.gov, https://api.sendgrid.com |
| Keywords List | Comma-separated compliance terms to filter results | GDPR, AML, cybersecurity |

### How it works

Key Steps:
1. **Schedule Trigger**: Runs once daily (or at any chosen interval).
2. **ScrapeGraphAI**: Crawls predefined regulatory URLs and returns structured article data.
3. **Code (JS)**: Filters results by keywords and formats them.
4. **SplitInBatches**: Processes articles in manageable chunks to avoid timeouts.
5. **If Node**: Checks whether each article already exists in the database.
6. **PostgreSQL**: Inserts only new articles into the news_updates table.
7. **Set Node**: Generates an email-friendly HTML summary.
8. **SendGrid**: Dispatches the compiled summary to compliance stakeholders.

### Set up steps

Setup Time: 15-20 minutes

1. **Install ScrapeGraphAI Node**: From n8n, go to "Settings → Community Nodes → Install", search "ScrapeGraphAI", and install.
2. **Create PostgreSQL Table**:

```sql
CREATE TABLE news_updates (
  id SERIAL PRIMARY KEY,
  title TEXT,
  url TEXT UNIQUE,
  source TEXT,
  published_at TIMESTAMP
);
```

3. **Add Credentials**: Navigate to "Credentials", add ScrapeGraphAI, SendGrid, and PostgreSQL credentials.
4. **Import Workflow**: Copy the JSON workflow, paste into "Import from Clipboard".
5. **Configure Environment Variables (optional)**: REG_NEWS_KEYWORDS, SEND_TO_EMAILS, DB_TABLE_NAME.
6. **Set Schedule**: Open the Schedule Trigger node and define your preferred cron expression.
7. **Activate Workflow**: Toggle "Active", then click "Execute Workflow" once to validate all connections.

### Node Descriptions

Core Workflow Nodes:
- **Schedule Trigger** – Initiates the workflow at the defined interval.
- **ScrapeGraphAI** – Scrapes and parses news listings into JSON.
- **Code** – Filters articles by keywords and normalizes timestamps.
- **SplitInBatches** – Prevents database overload by batching inserts.
- **If** – Determines whether an article is already stored (see the sketch at the end of this section).
- **PostgreSQL** – Executes parameterized INSERT statements.
- **Set** – Builds the HTML email body.
- **SendGrid** – Sends the daily digest email.

Data Flow: Schedule Trigger → ScrapeGraphAI → Code → SplitInBatches → If → PostgreSQL → Set → SendGrid

### Customization Examples

**Change Keyword Filtering**

```javascript
// Code Node snippet: flag items whose title mentions a tracked term
const keywords = ['GDPR', 'AML', 'SOX']; // Add or remove terms
return $input.all().map(item => {
  item.json.filtered = keywords.some(k => item.json.title.includes(k));
  return item;
});
```

**Switch to Weekly Digest**

```json
{
  "trigger": {
    "cronExpression": "0 9 * * 1"
  }
}
```

(The cron expression above fires every Monday at 09:00.)

### Data Output Format

The workflow outputs structured JSON data:

```json
{
  "title": "Data Privacy Act Amendment Passed",
  "url": "https://regulator.gov/news/1234",
  "source": "regulator.gov",
  "published_at": "2024-06-12T14:30:00Z"
}
```

### Troubleshooting

Common Issues:
- **ScrapeGraphAI node not found** – Install the community node and restart n8n.
- **Duplicate key error in PostgreSQL** – Ensure the url column is marked UNIQUE to prevent duplicates.
- **Emails not sending** – Verify the SendGrid API key and check the account's daily limit.

Performance Tips:
- Limit initial scrape URLs to fewer than 20 to reduce run time.
- Increase the SplitInBatches size only if your database can handle larger inserts.

Pro Tips:
- Use environment variables to manage sensitive credentials securely.
- Add an Error Trigger node to catch and log failures for auditing purposes.
- Combine with Slack or Microsoft Teams nodes to push instant alerts alongside email digests.
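If you prefer to deduplicate in code rather than relying on the If node alone, a Code node placed between a Postgres SELECT and the INSERT could drop already-stored URLs before they reach the database. A rough sketch, assuming a hypothetical upstream node named `Fetch Existing URLs` that returns rows with a `url` column:

```javascript
// Build a lookup of URLs already stored in news_updates.
// "Fetch Existing URLs" is an illustrative node name, not part of the template.
const existing = new Set(
  $('Fetch Existing URLs').all().map(row => row.json.url)
);

// Keep only scraped articles whose URL has not been seen before.
return $input.all().filter(item => !existing.has(item.json.url));
```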
by vinci-king-01
## Social Media Sentiment Analysis Dashboard with AI and Real-time Monitoring

### 🎯 Target Audience
- Social media managers and community managers
- Marketing teams monitoring brand reputation
- PR professionals tracking public sentiment
- Customer service teams identifying trending issues
- Business analysts measuring social media ROI
- Brand managers protecting brand reputation
- Product managers gathering user feedback

### 🚀 Problem Statement
Manual social media monitoring is overwhelming and often misses critical sentiment shifts or trending topics. This template solves the challenge of automatically collecting, analyzing, and visualizing social media sentiment data across multiple platforms to provide actionable insights for brand management and customer engagement.

### 🔧 How it Works
This workflow automatically monitors social media platforms using AI-powered sentiment analysis, processes mentions and conversations, and provides real-time insights through a comprehensive dashboard.

Key Components:
1. **Scheduled Trigger** - Runs the workflow at specified intervals to maintain real-time monitoring
2. **AI-Powered Sentiment Analysis** - Uses advanced NLP to analyze sentiment, emotions, and topics
3. **Multi-Platform Integration** - Monitors Twitter, Reddit, and other social platforms
4. **Real-time Alerting** - Sends notifications for critical sentiment changes or viral content
5. **Dashboard Integration** - Stores all data in Google Sheets for comprehensive analysis and reporting

### 📊 Google Sheets Column Specifications

The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| timestamp | DateTime | When the mention was recorded | "2024-01-15T10:30:00Z" |
| platform | String | Social media platform | "Twitter" |
| username | String | User who posted the content | "@john_doe" |
| content | String | Full text of the post/comment | "Love the new product features!" |
| sentiment_score | Number | Sentiment score (-1 to 1) | 0.85 |
| sentiment_label | String | Sentiment classification | "Positive" |
| emotion | String | Primary emotion detected | "Joy" |
| topics | Array | Key topics identified | ["product", "features"] |
| engagement | Number | Likes, shares, comments | 1250 |
| reach_estimate | Number | Estimated reach | 50000 |
| influence_score | Number | User influence metric | 0.75 |
| alert_priority | String | Alert priority level | "High" |

### 🛠️ Setup Instructions

Estimated setup time: 20-25 minutes

Prerequisites:
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access
- Slack workspace for notifications (optional)
- Social media API access (Twitter, Reddit, etc.)

Step-by-Step Configuration:

1. **Install Community Nodes**

```bash
# Install required community nodes
npm install n8n-nodes-scrapegraphai
npm install n8n-nodes-slack
```

2. **Configure ScrapeGraphAI Credentials**
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

3. **Set up Google Sheets Connection**
- Add Google Sheets OAuth2 credentials
- Grant necessary permissions for spreadsheet access
- Create a new spreadsheet for sentiment analysis data
- Configure the sheet name (default: "Sentiment Analysis")

4. **Configure Social Media Monitoring**
- Update the websiteUrl parameters in the ScrapeGraphAI nodes
- Add URLs for the social media platforms you want to monitor
- Customize the user prompt to extract specific sentiment data
- Set up keywords, hashtags, and brand mentions to track

5. **Set up Notification Channels**
- Configure Slack webhook or API credentials
- Set up email service credentials for alerts
- Define sentiment thresholds for different alert levels
- Test notification delivery

6. **Configure Schedule Trigger**
- Set monitoring frequency (every 15 minutes, hourly, etc.)
- Choose appropriate time zones for your business hours
- Consider social media platform rate limits

7. **Test and Validate**
- Run the workflow manually to verify all connections
- Check Google Sheets for proper data formatting
- Test sentiment analysis with sample content

### 🔄 Workflow Customization Options

**Modify Monitoring Targets**
- Add or remove social media platforms
- Change keywords, hashtags, or brand mentions
- Adjust monitoring frequency based on platform activity

**Extend Sentiment Analysis**
- Add more sophisticated emotion detection
- Implement topic clustering and trend analysis
- Include influencer identification and scoring

**Customize Alert System**
- Set different thresholds for different sentiment levels (a sketch of a threshold mapping appears at the end of this section)
- Create tiered alert systems (info, warning, critical)
- Add sentiment trend analysis and predictions

**Output Customization**
- Add data visualization and reporting features
- Implement sentiment trend charts and graphs
- Create executive dashboards with key metrics
- Add competitor sentiment comparison

### 📈 Use Cases
- **Brand Reputation Management**: Monitor and respond to brand mentions
- **Crisis Management**: Detect and respond to negative sentiment quickly
- **Customer Feedback Analysis**: Understand customer satisfaction and pain points
- **Product Launch Monitoring**: Track sentiment around new product releases
- **Competitor Analysis**: Monitor competitor sentiment and engagement
- **Influencer Identification**: Find and engage with influential users

### 🚨 Important Notes
- Respect social media platforms' terms of service and rate limits
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update your monitoring keywords and parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider privacy implications and data protection regulations

### 🔧 Troubleshooting

Common Issues:
- **ScrapeGraphAI connection errors**: Verify API key and account status
- **Google Sheets permission errors**: Check OAuth2 scope and permissions
- **Sentiment analysis errors**: Review the Code node's JavaScript logic
- **Rate limiting**: Adjust monitoring frequency and implement delays
- **Alert delivery failures**: Check notification service credentials

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
- Social media platform API documentation
- Sentiment analysis best practices and guidelines
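As a reference for the tiered alert system, here is a minimal sketch of how a Code node might map `sentiment_score` to the `alert_priority` column described above. The thresholds are illustrative assumptions; tune them to your own alert levels.

```javascript
// Classify each mention so downstream nodes can route alerts.
return $input.all().map(item => {
  const score = item.json.sentiment_score; // expected range: -1 to 1

  let priority = 'Low';
  if (score <= -0.6) priority = 'Critical';  // strongly negative
  else if (score <= -0.2) priority = 'High'; // negative
  else if (score < 0.2) priority = 'Medium'; // neutral or mixed

  item.json.alert_priority = priority;
  return item;
});
```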
by Daniel
Transform any website into a structured knowledge repository with this intelligent crawler that extracts hyperlinks from the homepage, intelligently filters images and content pages, and aggregates full Markdown-formatted content — perfect for fueling AI agents or building comprehensive company dossiers without manual effort.

## 📋 What This Template Does

This advanced workflow acts as a lightweight web crawler: it scrapes the homepage to discover all internal links (mimicking a sitemap extraction), deduplicates and validates them, separates image assets from textual pages, then fetches and converts non-image page content to clean Markdown. Results are seamlessly appended to Google Sheets for easy analysis, export, or integration into vector databases.

- Automatically discovers and processes subpage links from the homepage
- Filters out duplicates and non-HTTP links for efficient crawling (a sketch of this step appears at the end of this section)
- Converts scraped content to Markdown for AI-ready formatting
- Categorizes and stores images, links, and full content in a single sheet row per site

## 🔧 Prerequisites
- Google account with Sheets access for data storage
- n8n instance (cloud or self-hosted)
- Basic understanding of URLs and web links

## 🔑 Required Credentials

Google Sheets OAuth2 API Setup:
1. Go to console.cloud.google.com → APIs & Services → Credentials
2. Click "Create Credentials" → Select "OAuth client ID" → Choose "Web application"
3. Add authorized redirect URIs: https://your-n8n-instance.com/rest/oauth2-credential/callback (replace with your n8n URL)
4. Download the client ID and secret, then add to n8n as the "Google Sheets OAuth2 API" credential type
5. During setup, grant access to Google Sheets scopes (e.g., spreadsheets) and test the connection by listing a sheet

## ⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance
2. In the "Set Website" node, update the website_url value to your target site (e.g., https://example.com)
3. Assign your Google Sheets credential to the three "Add ... to Sheet" nodes
4. Update the documentId and sheetName in those nodes to your target spreadsheet ID and sheet name/ID
5. Ensure your sheet has columns: "Website", "Links", "Scraped Content", "Images"
6. Activate the workflow and trigger it manually to test scraping

## 🎯 Use Cases
- **Knowledge base creation**: Crawl a company's site to aggregate all content into Sheets, then export to Notion or a vector DB for internal wikis
- **AI agent training**: Extract structured Markdown from industry sites to fine-tune LLMs on domain-specific data like legal docs or tech blogs
- **Competitor intelligence**: Build dossiers by crawling rival websites, separating assets and text for SEO audits or market analysis
- **Content archiving**: Preserve dynamic sites (e.g., news portals) as static knowledge dumps for compliance or historical research

## ⚠️ Troubleshooting
- **No links extracted**: Verify the homepage has `<a>` tags; test with a simple site like example.com and check the HTTP response in executions
- **Sheet update fails**: Confirm column names match exactly (case-sensitive) and the credential has edit permissions; try a new blank sheet
- **Content truncated**: Google Sheets limits cells to ~50k chars — adjust the .slice(0, 50000) in "Add Scraped Content to Sheet" or split into multiple rows
- **Rate limiting errors**: Add a "Wait" node after "Scrape Links" with a 1-2s delay if the site blocks rapid requests
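For reference, the link-discovery and deduplication step boils down to pulling href values out of the homepage HTML, resolving them, and dropping non-HTTP duplicates. A rough Code-node equivalent, assuming the raw HTML arrives in a `data` field and the target site in `website_url` (adjust both to your HTTP Request and Set node outputs):

```javascript
const html = $input.first().json.data || '';
const baseUrl = $input.first().json.website_url;

// Collect every href value from anchor tags (fragment links excluded).
const hrefs = [...html.matchAll(/<a[^>]+href=["']([^"'#]+)["']/gi)]
  .map(match => match[1]);

// Resolve relative paths, drop non-HTTP links, and deduplicate.
const links = new Set();
for (const href of hrefs) {
  try {
    const url = new URL(href, baseUrl);
    if (url.protocol === 'http:' || url.protocol === 'https:') {
      links.add(url.href);
    }
  } catch (e) {
    // Ignore malformed hrefs (javascript:, mailto:, etc.)
  }
}

return [...links].map(link => ({ json: { link } }));
```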
by ConnectSafely
## Extract LinkedIn Group Members to Google Sheets - Premium & Verified Only using ConnectSafely.AI API

### Who's it for
This workflow is built for sales professionals, community managers, recruiters, and growth marketers who want to extract high-quality leads from LinkedIn groups without the manual grind. Perfect for anyone who needs to identify decision-makers, founders, and serious professionals within large LinkedIn communities. If you're running targeted outreach campaigns, building prospect lists, researching competitor communities, or looking to connect with verified industry leaders, this automation filters the noise and delivers only Premium and Verified members straight to your spreadsheet.

### How it works
The workflow automates LinkedIn group member extraction by combining pagination handling with intelligent filtering through ConnectSafely.ai's API.

The process flow:
1. Initializes pagination variables with your target group ID
2. Fetches group members in batches of 50 via the ConnectSafely.ai API
3. Filters each batch for Premium OR Verified members only
4. Extracts profile data (name, headline, follower count, profile URL, etc.)
5. Checks if more pages exist and loops back automatically
6. Once complete, splits all members into individual items
7. Appends or updates records in Google Sheets (deduplicates by Profile ID)

The pagination loop handles groups of any size — whether 500 or 50,000 members; a sketch of the pagination check appears below, after Step 2.

### Setup steps

**Step 1: Prepare Your Google Sheet**

Structure your Google Sheet with the following columns:

| Column Name | Description | Required |
|------------|-------------|----------|
| Profile ID | Unique LinkedIn profile identifier | Yes |
| First Name | Member's first name | Yes |
| Last Name | Member's last name | Yes |
| Full Name | Combined first and last name | Yes |
| Headline | Professional headline/tagline | Yes |
| Public Identifier | LinkedIn username | Yes |
| Profile URL | Direct link to LinkedIn profile | Yes |
| Follower Count | Number of followers | Yes |
| Is Premium | Premium subscription status | Yes |
| Is Verified | Verification badge status | Yes |
| Relationship Status | Connection degree (1st, 2nd, 3rd) | Yes |

Pro Tip: The workflow uses the "Append or Update" operation with Profile ID as the matching column, so running it multiple times won't create duplicates.

**Step 2: Configure ConnectSafely.ai API Credentials**

Obtain an API key:
1. Log into the ConnectSafely.ai Dashboard
2. Navigate to Settings → API Keys
3. Generate a new API key

Add a Bearer Auth credential in n8n:
1. Go to Credentials in n8n
2. Click Add Credential → Header Auth or Bearer Auth
3. Paste your ConnectSafely.ai API key
4. Save the credential

This credential is used by the "Fetch Group Members" HTTP Request node.
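As promised above, here is a simplified sketch of the pagination check. Field names like `members`, `start`, and `hasMore` follow the workflow's own description of the loop; confirm them against the actual ConnectSafely.ai API response before relying on this.

```javascript
// Read the API response from the fetch node and the current offset.
const response = $input.first().json;
const previousStart = response.start ?? 0; // assumed offset field

return [{
  json: {
    // Keep the members collected in this batch.
    members: response.members ?? [],
    // Advance by the batch size (50) for the next request.
    start: previousStart + 50,
    // Loop back to the fetch node while the API says more pages exist.
    hasMore: response.hasMore === true,
  },
}];
```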
**Step 3: Configure Google Sheets Integration**

3.1 Connect Google Sheets Account:
1. Go to Credentials → Add Credential → Google Sheets OAuth2
2. Follow the OAuth flow to connect your Google account
3. Grant access to Google Sheets

3.2 Configure the "Append to Google Sheets" Node:
1. Open the Append to Google Sheets node
2. Select your Google Sheets credential
3. Enter your Document ID (from the sheet URL)
4. Select the Sheet Name
5. Configure column mapping to match the extracted fields
6. Set Matching Column to Profile ID for deduplication

**Step 4: Set Your Target LinkedIn Group**

1. Open the Initialize Pagination node
2. Locate the groupId variable in the code
3. Replace "9357376" with your target group ID

Finding Your Group ID:
1. Go to your LinkedIn group
2. Look at the URL: linkedin.com/groups/XXXXXXX/
3. The numbers are your group ID

```javascript
// Change this value to your target group
groupId: "9357376", // Replace with your group ID
```

**Step 5: Test the Workflow**

1. Click the Start Workflow manual trigger node
2. Click Test Workflow
3. Verify:
   - The API returns member data correctly
   - Filtering captures only Premium/Verified members
   - Pagination loops for additional pages (if applicable)
   - Google Sheets populates with extracted data

### Customization

**Filter Criteria**

Edit the filter logic in the Process & Filter Members node to adjust:
- **Premium Only**: Remove the isVerified checks to capture only Premium subscribers
- **Verified Only**: Remove the isPremium checks to capture only Verified profiles
- **All Members**: Remove the filter entirely to extract everyone (modify the return statement)
- **Minimum Followers**: Add a follower count threshold for influencer targeting

```javascript
// Example: Filter for Premium members with 1000+ followers
const filteredMembers = members.filter(member => {
  const isPremium = member.isPremium === true;
  const hasMinFollowers = member.followerCount >= 1000;
  return isPremium && hasMinFollowers;
});
```

**Batch Size**
- **Default**: 50 members per API request
- **Adjust**: Modify the count value in the Initialize Pagination node
- **Note**: 50 is the maximum allowed by the API

**Additional Fields**

The API returns more fields than are extracted by default. Edit the Process & Filter Members node to include:
- creator - Whether they're a LinkedIn creator
- badges - Full list of profile badges
- fetchedAt - Timestamp of extraction

### Use Cases
- **Sales Prospecting**: Build targeted prospect lists from industry-specific groups with verified decision-makers
- **Competitor Research**: Analyze who's active in competitor communities and their professional backgrounds
- **Influencer Identification**: Find Premium creators and verified professionals for partnership opportunities
- **Recruiting**: Source passive candidates who are active in professional development groups
- **Event Marketing**: Identify engaged professionals in niche communities for webinar and conference promotion
- **Content Strategy**: Research headlines and titles to understand what resonates in your industry

### Troubleshooting

Common Issues & Solutions:
- **Empty results returned**: Verify you're a member of the target group; the API can only access groups you've joined
- **"401 Unauthorized" errors**: Check that your ConnectSafely.ai API key is valid and the Bearer Auth credential is properly configured
- **Pagination loop seems infinite**: This is expected behavior until hasMore returns false; large groups may take several minutes to fully process
- **Duplicate entries in Google Sheets**: Ensure the "Append or Update" operation is selected with Profile ID as the matching column
- **Missing data in certain columns**: Not all profiles have complete data; the workflow handles null values gracefully
- **Google Sheets not updating**: Verify OAuth credentials are valid and the sheet/document IDs are correctly configured

### Documentation & Resources

Official Documentation:
- **ConnectSafely.ai Docs**: https://connectsafely.ai/docs
- **API Reference**: Available in the ConnectSafely.ai dashboard
- **n8n Google Sheets Node**: https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.googlesheets/

Support Channels:
- **Email Support**: support@connectsafely.ai
- **Documentation**: https://connectsafely.ai/docs
- **Custom Workflows**: Contact us for custom automation

Connect With Us — stay updated with the latest automation tips, LinkedIn strategies, and platform updates:
- **LinkedIn**: linkedin.com/company/connectsafelyai
- **YouTube**: youtube.com/@ConnectSafelyAI-v2x
- **Instagram**: instagram.com/connectsafely.ai
- **Facebook**: facebook.com/connectsafelyai
- **X (Twitter)**: x.com/AiConnectsafely
- **Bluesky**: connectsafelyai.bsky.social
- **Mastodon**: mastodon.social/@connectsafely

Need Custom Workflows? Looking to build sophisticated LinkedIn automation workflows tailored to your business needs? Contact our team for custom automation development, strategy consulting, and enterprise solutions. We specialize in:
- Multi-channel engagement workflows
- AI-powered personalization at scale
- Lead scoring and qualification automation
- CRM integration and data synchronization
- Custom reporting and analytics pipelines
by isaWOW
## Description

An AI-powered content rewriter that maintains exact character counts line-by-line while rewriting web pages for SEO. Fetches reference URL content, preserves layout-critical formatting, and logs detailed comparisons to Google Sheets — perfect for agencies rewriting competitor content while maintaining design consistency.

## What this workflow does

This workflow solves a unique SEO challenge: rewriting web content while preserving exact character counts on every single line. When you need to rewrite a competitor's page or update your own content without breaking the layout, this automation fetches the reference URL, converts it to Markdown, and uses GPT-4.1 to rewrite marketing text, headings, and CTAs while maintaining the exact same character count as the original — down to the letter.

The AI intelligently decides what to keep unchanged (form labels), what to skip entirely (URLs, footers), and what to rewrite (marketing content). A second AI agent then compares the original and rewritten versions line-by-line, verifying character counts and logging everything to Google Sheets for quality control.

Perfect for SEO agencies, content teams, and web developers who need to rewrite content without disturbing page layouts, CSS styling, or design templates that depend on specific text lengths.

## Key features

- **Exact character count preservation**: Every rewritten line matches the original's character count precisely — no approximations, no +/- 1 character deviations. If the original has 47 characters, the rewrite will have exactly 47 characters.
- **Smart content classification**: The AI automatically categorizes each line into three actions: KEEP (form labels like Name, Email, Phone remain unchanged), SKIP (URLs and footers excluded entirely from output), or REWRITE (marketing content, headings, CTAs rewritten with exact length).
- **Line-by-line comparison analysis**: A second AI agent compares original vs. rewritten content, creates structured JSON showing each change, verifies character and word counts match, and flags any discrepancies.
- **Google Sheets quality tracking**: All comparisons logged to Google Sheets with columns for Old Text, AI Suggested Text, Old Text Length, and AI Suggested Text Length — enabling manual review and quality control.
- **Layout-safe rewriting**: Preserves Markdown structure, maintains spacing, keeps technical elements intact — ensures the rewritten content fits perfectly into existing page designs and CSS frameworks.
- **Form-based workflow**: Simple form interface with Client ID, Service Page Keyword, Instructions, and Reference URL — no coding needed to submit rewrite requests.
- **Dual GPT-4.1 agents**: Uses two independent AI agents (rewriter and comparator) with GPT-4.1 for maximum accuracy and quality verification at every step.

## How it works

**1. Submit rewriting request via form**

User fills a simple form with:
- **Client ID:** Project identifier for tracking
- **Service Page Keyword:** Target SEO keyword (optional)
- **Instruction:** Specific rewriting guidance (e.g., "make it more professional")
- **Reference URL:** The webpage to fetch and rewrite

**2. Fetch reference webpage content**

The workflow sends an HTTP POST request to the provided URL and retrieves the complete HTML source code of the page.

**3. Convert HTML to Markdown**

The HTML is converted to clean Markdown format, removing unnecessary tags while preserving structure, headings, lists, and text content. This makes it easier for the AI to process line-by-line.

**4. AI rewrites content with exact character matching**

The first AI Agent (powered by GPT-4.1) receives the Markdown content and processes it with ULTRA-STRICT rules:

Character counting rules:
- Counts everything: letters, spaces, numbers, symbols, punctuation
- Spaces are characters
- Line breaks don't count
- Case doesn't affect count

Decision logic for each line:
- **KEEP AS-IS:** Simple form labels (Name, Email, Phone, Message), generic system messages, technical single words
- **SKIP COMPLETELY:** All URLs (https://, http://, www.), all footers (copyright, legal links, disclaimers), navigation URLs, image paths
- **MUST REWRITE:** Headings, marketing text, CTAs, service descriptions, menu items

Character-matching techniques:
- Too short → Add words, expand contractions
- Too long → Use shorter synonyms, use contractions
- Exact match → Swap equal-length words

Verification checklist:
- Every rewritten line matches the original character count
- All simple labels kept exactly
- All URLs skipped
- All footers skipped (no rewrite or modification)
- Markdown structure preserved

If even 1 line has mismatched characters, the AI retries until it's perfect.

**5. Compare original vs. rewritten content**

A second AI Agent (also GPT-4.1) compares the original Markdown input with the rewritten output:
- Breaks down both texts into individual sentences or meaningful phrases
- Matches each line from the original with its corresponding rewritten version
- Verifies character and word counts match for each pair
- Notes any skipped content (URLs, footers) with markers like "[SKIPPED - Footer/URL]"
- Outputs structured JSON with a comparison array

**6. Parse comparison to structured format**

The Structured Output Parser ensures the comparison JSON is valid and properly formatted:

```json
{
  "comparisons": [
    {
      "old_text": "exact sentence from original",
      "ai_suggested_text": "corresponding sentence from rewritten version",
      "Old_text_count": "47",
      "ai_suggested_text_Count": "47"
    }
  ]
}
```

**7. Split comparison into individual rows**

The Split Out node takes the comparisons array and creates a separate item for each comparison, preparing it for Google Sheets insertion.

**8. Log comparison to Google Sheets**

Each comparison is written to Google Sheets as a new row with columns:
- **Old Text:** Original line
- **AI Suggested Text:** Rewritten line
- **Old Text Length:** Character count of original
- **AI Suggested Text Length:** Character count of rewritten

This creates a complete audit trail for manual review and quality verification.

## Setup requirements

Tools you'll need:
- Active n8n instance (self-hosted or n8n Cloud)
- Google Sheets with OAuth access for comparison tracking
- OpenAI API key (GPT-4.1 and GPT-4o-mini access)
- Target website URL to rewrite

Estimated setup time: 20–25 minutes

## Configuration steps

**1. Connect Google Sheets**
- In n8n: Credentials → Add credential → Google Sheets OAuth2 API
- Complete OAuth authentication
- Create a tracking Google Sheet with columns: Old Text, AI Suggested Text, Old Text Length, AI Suggested Text Length
- Open the "Log Comparison to Google Sheets" node
- Select your Google Sheet and the correct sheet tab
- Verify the column mapping matches your sheet structure

**2. Add OpenAI API credentials**
- Get an API key: https://platform.openai.com/api-keys
- In n8n: Credentials → Add credential → OpenAI API
- Paste your API key
- Configure three OpenAI Chat Model nodes:
  - "OpenAI GPT-4.1 Rewriting Model": Set to gpt-4.1, timeout 100000ms
  - "OpenAI GPT-4.1 Comparison Model": Set to gpt-4.1, timeout 100000ms
  - "OpenAI GPT-4o-mini Parser Model": Set to gpt-4o-mini
- Verify all three nodes use your OpenAI credential
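(Optional) Because LLM character counting can drift, you may want to re-verify counts deterministically before logging. A small Code node inserted between the comparison agent and the Split Out node could do this; a rough sketch, assuming the parser output shape shown in step 6 above:

```javascript
// Re-count both sides of every comparison pair in plain JavaScript,
// rather than trusting the model's own counts.
const { comparisons } = $input.first().json;

return comparisons.map(pair => {
  const oldLen = (pair.old_text || '').length;
  const newLen = (pair.ai_suggested_text || '').length;

  return {
    json: {
      ...pair,
      Old_text_count: String(oldLen),
      ai_suggested_text_Count: String(newLen),
      // Flag mismatches so they stand out in the Google Sheet.
      counts_match: oldLen === newLen,
    },
  };
});
```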
**3. Copy form URL**
- Open the "Submit Content Rewriting Request" node
- Copy the Form URL
- Share this URL with your team or clients for submitting rewrite requests

**4. Customize AI rewriting rules (optional)**

Open the "Rewrite Content with Exact Character Count" node and edit the system message to:
- Add more content types to KEEP unchanged
- Define additional SKIP rules (e.g., specific footer patterns)
- Adjust rewriting tone (formal, casual, technical)
- Modify character-matching techniques

**5. Test the workflow**
- Activate the workflow (toggle to Active)
- Open the form URL
- Fill in test data:
  - Client ID: TEST_001
  - Service Page Keyword: SEO services
  - Instruction: Make it more professional
  - Reference URL: https://example.com
- Submit the form
- Wait 1-3 minutes for processing (depends on content length)
- Check Google Sheets for comparison results
- Verify:
  - All rewritten lines have matching character counts
  - URLs and footers are skipped
  - Form labels remain unchanged
  - Marketing content is rewritten

**6. Review and refine**
- Open your Google Sheets comparison log
- Review the Old Text vs. AI Suggested Text columns
- Check that the character count columns match
- If any lines failed validation, review the AI's decision logic
- Adjust the system prompt if needed for your specific use case

## Use cases

- **SEO agencies rewriting competitor content**: Analyze top-ranking competitor pages and rewrite them for your clients while preserving the exact layout that works. Character-count matching ensures the rewritten content fits perfectly into the same design templates.
- **Web designers updating legacy sites**: Modernize old website copy without breaking existing CSS layouts that depend on specific text lengths. Maintain pixel-perfect designs while refreshing the message.
- **Content teams A/B testing variations**: Create multiple versions of the same page with different wording but identical character counts. Test messaging changes without layout shifts affecting results.
- **Translation agencies adapting content**: When translating from one language to another requires matching specific character limits (billboards, app interfaces, fixed-width layouts), this workflow helps maintain constraints.
- **E-commerce product descriptions**: Rewrite product copy for different brands or markets while keeping descriptions at exact character counts required by platform templates or PIM systems.
- **Landing page optimization**: Test different headlines, CTAs, and value propositions while ensuring each variation maintains the original's character count — preventing layout breaks on mobile or desktop.
## Customization options

**Adjust character count strictness**

If your use case allows minor variations (+/- 2-3 characters), edit the rewriting agent's system message:
- Change "EXACT SAME CHARACTER COUNT" to "within 3 characters"
- Update verification rules to accept small deviations
- Modify the comparison agent to flag only major discrepancies

**Add more content types to KEEP**

In the rewriting agent system message, expand the "KEEP AS-IS" list:
- Company names
- Product names
- Legal disclaimers
- Technical specifications
- Date formats

**Change output format**

Instead of Google Sheets, route comparison data to:
- **Notion database** (via HTTP Request to the Notion API)
- **Airtable** (via Airtable node)
- **Email report** (via Email node with HTML table)
- **Slack notification** (via Slack node with formatted message)

**Add batch processing**

Modify the form to accept multiple URLs at once:
- Add a Text Area field for a URL list (one per line)
- Insert a Split Out node after form submission
- Loop through each URL sequentially
- Aggregate all comparisons into a single Google Sheet

**Implement approval workflow**

Add human review before finalizing rewrites:
- After comparison analysis, send results to the project manager via email
- Include approve/reject buttons (using n8n Webhook URLs)
- Only log approved rewrites to the final Google Sheet
- Store rejected versions in a separate "Needs Revision" sheet

## Troubleshooting

**Character counts don't match**
- **AI struggling with specific lines:** Some sentences are difficult to rewrite at exact length. Check the Google Sheets log to identify which lines failed. A manual rewrite may be needed for complex technical content.
- **Special characters counted wrong:** Ensure the AI is counting all special characters, emojis, and Unicode symbols. Edit the system prompt to emphasize counting everything.
- **Markdown formatting interfering:** Markdown syntax (`*`, `##`, etc.) shouldn't be counted as characters. Verify the AI understands to count only visible text.

**URLs or footers appearing in rewritten output**
- **SKIP rules not working:** The AI didn't recognize a URL or footer pattern. Edit the system prompt to add specific patterns to the SKIP list (e.g., "Privacy Policy", "Terms of Service").
- **Footer detection failed:** Some footers don't have obvious markers. Add keyword patterns to the SKIP rules (e.g., "© 2024", "All rights reserved").

**Comparison agent fails to parse output**
- **Invalid JSON format:** The comparison agent must output pure JSON with no markdown. Check that the "Compare Original vs Rewritten Content" node's system message emphasizes "Output ONLY the JSON object — no markdown, no extra text."
- **Structured parser timeout:** Large content with 100+ comparison pairs may exceed the timeout. Increase the timeout in the "OpenAI GPT-4o-mini Parser Model" node or split content into smaller chunks.

**Google Sheets not updating**
- **OAuth expired:** Re-authenticate Google Sheets credentials in n8n.
- **Sheet permissions:** Verify the connected Google account has edit access.
- **Column names mismatch:** Ensure sheet column headers exactly match the node mapping (case-sensitive).
- **Row limit reached:** Google Sheets has a 10 million cell limit. Create a new sheet if approaching limits.

**Rewriting takes too long**
- **Timeout errors:** Large webpages (10,000+ words) may exceed the 100-second timeout. Increase the timeout in both GPT-4.1 model nodes or split content into sections.
- **OpenAI API rate limits:** If processing many requests simultaneously, you may hit rate limits. Add a delay between submissions or upgrade your OpenAI plan.

**Form labels being rewritten incorrectly**
- **AI not recognizing labels:** Add specific examples to the KEEP list in the system prompt (e.g., "Full Name", "Phone Number", "Email Address").
- **Context confusion:** If form labels are embedded in marketing text, the AI may rewrite them. Improve the prompt to emphasize preserving all form-related text.

## Resources
- n8n documentation
- OpenAI GPT-4 API
- Google Sheets API
- Markdown specification
- n8n Form Trigger
- n8n Structured Output Parser

## Support

Need help or custom development?

📧 Email: info@isawow.com
🌐 Website: https://isawow.com/
by Jack
## Automated real estate monitoring with MrScraper

### Who's it for
Real estate investors, agents, proptech teams, and business intelligence professionals who need continuous property price monitoring with automated data collection and AI-ready outputs for reporting and decision-making.

### What it does
This workflow automatically collects real estate listing data from Realtor, including key fields such as property price, listing title, and location. It is designed to capture the full set of available real estate data for the selected search criteria. The workflow consolidates the scraped data into a single structured dataset, removes inconsistencies, and formats the output into a clean CSV file. This makes it easy to import into spreadsheets, databases, or analytics tools for further analysis, reporting, or automation.

### How it works
- Accepts a **real estate search URL from the same domain**, allowing the workflow to be reused across different locations and filters.
- Collects and extracts **configurable real estate fields** (such as price, title, location, and other listing details) based on your data requirements.
- Automatically navigates through **all related real estate result pages** to ensure the extracted dataset is complete.
- Converts the final aggregated dataset into a **clean CSV** file for easy use in analysis, reporting, or downstream automation.

### How to set up

**1. Set up your scraper**
Create two manual scrapers on the MrScraper Platform:
- One scraper to collect the number of result pages
- One scraper to extract detailed data from each listing URL

This separation ensures the workflow can scale and be reused efficiently.

**2. Customize extracted data**
In the data extraction scraper, customize the fields according to your needs (for example: price, title, location, etc.). If you need help automating or configuring the extraction logic, you can contact the MrScraper team for assistance.

**3. Configure API credentials**
- **MrScraper**: Generate your API token from Manual API Access or your profile page. This token allows n8n to trigger and retrieve data from your scrapers.
- **Gmail OAuth2**: Required to send the extracted CSV results via email.
- **Google Drive OAuth2 (optional)**: Used to automatically upload the CSV output to Google Drive for storage and sharing.

### Requirements
- MrScraper account to create and manage the scrapers
- Gmail account to receive the extracted results via email
- Google Drive account to store the extracted results (optional)

### How to customize the workflow
- **Change scraper results** – Customize the data extraction results in the scraper you have created.
- **Output file** – Can be exported to CSV, JSON, XLSX, or other formats.
- **Integrate with other programs** – Add nodes that connect to the programs you need.
by Fahmi Fahreza
## End-to-End Local Business Lead Generation, Enrichment and Outreach Pipeline with Decodo

This workflow finds local business leads on Google Maps, enriches them with website and AI analysis, and sends personalized cold emails to qualified prospects automatically.

### Who's it for?
This template is for agencies, freelancers, B2B growth teams, and sales operators who want a repeatable outbound lead generation system for local business prospecting.

### How it works
1. The workflow starts on a schedule and loads the search, scoring, and sender settings from one configuration node.
2. Decodo scrapes Google Maps search results for businesses that match your niche and city.
3. The workflow extracts lead details such as business name, address, phone number, website, and rating, then stores each lead in Google Sheets.
4. Each discovered lead is checked for a website. If no website is available, the lead is marked and skipped from enrichment.
5. For leads with a website, Decodo scrapes the site and OpenAI scores the business, summarizes it, and suggests enrichment fields such as business type, pain points, and contact email. The workflow updates the lead in Google Sheets with the enrichment data.
6. Enriched leads are filtered by score threshold, website presence, email availability, and contact status (a sketch of this filter appears below).
7. For each qualified lead, the workflow scrapes more website content and uses OpenAI to write a personalized outreach email.
8. Gmail sends the email, and the lead is marked as contacted in Google Sheets.

### How to set up
1. Add credentials for Decodo, OpenAI, Google Sheets, and Gmail.
2. Replace the placeholder Google Sheet ID in the configuration node.
3. Create a Leads sheet with all required columns used in the workflow.
4. Update the niche, city, geo, max leads, score threshold, and sender fields.

This template uses the Decodo community node, so it is intended for self-hosted n8n only.
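As a reference for step 6 above, the qualification gate can be expressed as a simple filter in a Code node. A sketch of the logic, assuming enrichment wrote `score`, `website`, `email`, and `contacted` fields to each lead (the field names and the threshold value are illustrative, not taken from the template):

```javascript
// Threshold normally comes from the configuration node; 7 is a placeholder.
const SCORE_THRESHOLD = 7;

return $input.all().filter(item => {
  const lead = item.json;
  return (
    Number(lead.score) >= SCORE_THRESHOLD && // passes AI scoring
    Boolean(lead.website) &&                 // has a site to reference
    Boolean(lead.email) &&                   // has somewhere to send mail
    lead.contacted !== true                  // not already reached out to
  );
});
```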
by BHSoft
## Customer Visit Notification

This workflow monitors Google Calendar for events indicating that a customer will visit the company today or the next day, retrieves the required details, and sends reminder notifications to the relevant stakeholders. It also posts a company-wide announcement to ensure proper preparation and a professional reception for the customer.

### 📌 Who is this for?
- Reception / Administration team
- Sales / Account owners in charge of customers
- Management / Related team leaders
- Security / IT / Logistics (for meeting room, equipment, and check-in preparation)

### 📌 The problem
- Customer visit information is usually shared manually and can be easily missed.
- Related staff are not informed in time to prepare.
- This causes last-minute preparation for reception, meeting rooms, documents, and support.
- It affects the customer experience and the company's professional image.

### 📌 How it works
1. When a customer meeting is scheduled, the system records the information (time, customer name, company, person in charge).
2. The system automatically sends notifications to related groups based on the timeline (see the sketch below):
   - Notify the whole office 1 hour before the visit.
   - Notify related members 24 hours in advance.
3. Notifications can be sent via Email / Slack / Internal chat.

### 📌 Quick setup
Required information:
- n8n version 2.4.6
- Google Calendar OAuth2 API: Client ID, Client Secret
- Google Sheets OAuth2 API: Client ID, Client Secret
- Slack App: Bot User OAuth Token

Google Sheets will be used to log all notified events.

### 📌 Results
- Everyone is aware of the customer visit schedule.
- Teams can proactively prepare meeting rooms, documents, and manpower.
- Reduces mistakes and missed communication.
- Improves customer experience and company professionalism.

### 📌 Take it further
- Customer check-in using QR code / Visitor form
- Send reminders to prepare documents
- Store visit history
- Monthly/quarterly reports for the number of customer visits

### 📌 Need help customizing?
Contact me for consulting and support: LinkedIn / Website
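For reference, the timeline routing ("1 hour before" vs "24 hours in advance") reduces to comparing each event's start time with now. A sketch of the Code-node logic, assuming the Google Calendar node exposes the usual `start.dateTime` field (the window widths are illustrative):

```javascript
// Decide which notification window each upcoming visit falls into.
const now = new Date();

return $input.all().map(item => {
  const start = new Date(item.json.start.dateTime);
  const hoursUntil = (start - now) / (1000 * 60 * 60);

  // Fire the office-wide alert within the final hour before the visit.
  item.json.notifyOffice = hoursUntil > 0 && hoursUntil <= 1;
  // Fire the team reminder roughly a day ahead (23-25h tolerance window).
  item.json.notifyTeam = hoursUntil > 23 && hoursUntil <= 25;
  return item;
});
```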
by Rahul Joshi
## 📊 Description

Stop finding out you're out of stock after a customer already tried to buy. This workflow monitors your entire product inventory daily, calculates how fast each SKU is selling, and automatically raises purchase orders to your supplier before you hit zero — all without you opening a spreadsheet. Built for D2C brands and ops teams who are tired of manual stock checks, surprise stockouts, and dead inventory eating up cash. The AI layer adds demand forecasting and tells you exactly what to bundle, discount, or kill every week.

## What This Workflow Does

- ⏰ Triggers every morning at 8AM to fetch fresh product and stock data automatically
- 🧮 Calculates sales velocity per SKU using a 7-day rolling average from your Sales Log
- 🚦 Flags every SKU as 🔴 Stockout Risk, 🟡 Dead Stock, or 🟢 Healthy and updates Google Sheets
- 📧 Sends instant email alerts when a SKU is about to run out or has been sitting unsold
- 📋 Automatically calculates reorder quantity using a lead time and buffer stock formula
- 🏭 Emails a formatted Purchase Order directly to your supplier — no manual drafting
- 🤖 Sends all dead/slow SKUs to GPT-4o every Sunday for actionable recommendations — bundle, discount, or kill
- 📊 Delivers a 30-day demand forecast every Sunday based on 90 days of sales history and seasonal context

## Key Benefits

- ✅ Never get caught off guard by a stockout again
- ✅ POs go to suppliers automatically — zero manual work
- ✅ AI tells you what to do with slow-moving inventory every week
- ✅ 30-day demand forecast keeps you buying ahead, not reacting
- ✅ Everything logged in Google Sheets — full visibility at all times
- ✅ Works with any product catalog — just plug in your Sheet

## How It Works

The workflow runs in 5 stages, each timed 5 minutes apart every morning so data flows cleanly from one stage to the next.

**Stage 1 — Data Sync (8:00 AM daily)**
Pulls product data from the DummyJSON API (swap with your Shopify or WooCommerce endpoint when going live), simulates daily sales per SKU, and writes everything into your Google Sheets inventory log.

**Stage 2 — Stock Health Check (8:05 AM daily)**
Reads the last 7 days of sales, calculates average daily velocity per SKU, and assigns a flag. Anything running out in under 14 days gets flagged 🔴. Anything with zero sales in 7 days gets flagged 🟡. Everything else is 🟢. Flags are written back to your sheet and email alerts fire immediately for anything critical.

**Stage 3 — Auto Reorder Engine (8:10 AM daily)**
Picks up every 🔴 SKU and runs it through a reorder formula — (avg daily sales × lead time) + 7-day buffer stock (see the sketch after the stages below). A properly formatted Purchase Order email goes straight to your supplier. Every PO is logged in the PO Tracker sheet with status, quantity, and date.

**Stage 4 — Dead Inventory AI Digest (Every Sunday 9:00 AM)**
All 🟡 SKUs get sent to GPT-4o with your stock levels and sales data. The AI comes back with a recommendation for each one — bundle it, run a discount (with exact percentage), or kill the SKU entirely. You get a clean email grouped by urgency: high, medium, low.

**Stage 5 — 30-Day Demand Forecast (Every Sunday 9:30 AM)**
Aggregates the last 90 days of sales per SKU and sends it to GPT-4o with the current month for seasonality context. GPT predicts how many units you'll need in the next 30 days, calculates the stock gap, and tells you what to reorder now, stock up on, or reduce. Delivered as a clean email report every Sunday morning.
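As a reference for Stage 3, here is a minimal sketch of the reorder formula in a Code node. It applies "(avg daily sales × lead time) + 7-day buffer stock" exactly as described above; the field names (`avgDailySales`, `leadTimeDays`) are illustrative assumptions, so map them to your Inventory Master columns.

```javascript
// Compute the reorder quantity for every SKU flagged as 🔴 stockout risk.
return $input.all().map(item => {
  const sku = item.json;
  const avgDaily = Number(sku.avgDailySales) || 0; // 7-day rolling average
  const leadTime = Number(sku.leadTimeDays) || 0;  // supplier lead time

  // (avg daily sales x lead time) + 7-day buffer stock
  const reorderQty = Math.ceil(avgDaily * leadTime + avgDaily * 7);

  return { json: { ...sku, reorderQty } };
});
```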
## Features

- Cron-based daily and weekly automation triggers
- Sales velocity calculation with 7-day and 90-day rolling windows
- 3-tier SKU health flagging system (🔴 🟡 🟢)
- Reorder quantity formula with lead time + buffer stock logic
- Automated PO generation and supplier email dispatch
- GPT-4o dead inventory strategist with discount recommendations
- GPT-4o 30-day demand forecasting with seasonality awareness
- Full Google Sheets logging across Inventory Master, Sales Log, and PO Tracker
- Gmail-based alerting and weekly digest reports
- Modular 5-stage architecture — easy to swap data sources

## Requirements

- OpenAI API key (GPT-4o access)
- Google Sheets OAuth2 connection
- Gmail OAuth2 connection
- A configured Google Sheet with 3 sheets: Inventory Master, Sales Log, PO Tracker
- DummyJSON API — free, no key needed (replace with Shopify/WooCommerce for production)

## Setup Steps

1. Copy the Google Sheet template and grab your Sheet ID from the URL
2. Paste the Sheet ID into all Google Sheets nodes (there are 8 of them)
3. Connect your Google Sheets OAuth2 credentials
4. Connect your Gmail OAuth2 credentials
5. Add your OpenAI API key to both GPT-4o nodes
6. Replace the email address in all Gmail nodes with your own
7. Activate all triggers — the system runs itself from here

## Target Audience

- 🛒 D2C brand owners managing inventory without a dedicated ops team
- 📦 E-commerce operators tired of manual stock checks and surprise stockouts
- 💼 Operations managers who want a real-time view of inventory health
- 🤖 Automation agencies building supply chain solutions for e-commerce clients
by Avkash Kakdiya
How it works

This workflow monitors new orders in a Postgres database and sends a confirmation email instantly. It then waits until the expected delivery time and continuously checks the delivery status. Once delivered, it uses AI to generate product usage tips and emails them to the customer. After two weeks, it sends personalized complementary product recommendations to drive repeat purchases.

Step-by-step

**Trigger new orders from database**
- Schedule Trigger – Runs every 2 minutes to check for new orders.
- Execute a SQL query – Fetches recently created orders from Postgres (a sketch of such a query appears at the end of this description).
- Order Placed Ack. – Sends an order confirmation email via Gmail.

**Track delivery status dynamically**
- Wait until product get deliver – Pauses the workflow until the estimated delivery time.
- Select rows from a table – Retrieves the latest order status.
- If – Checks whether the order is delivered.
- Wait for a day – Rechecks daily until delivery is confirmed.

**Send AI-powered product usage tips**
- Get Product Usage Tips – Uses an AI agent to generate helpful tips.
- Groq Chat Model – Provides LLM capability for content generation.
- Format AI response – Converts AI output into a clean HTML list format.
- Send Tips to User – Emails the tips to the customer.

**Upsell complementary products after delay**
- Wait for 2 weeks – Delays the follow-up communication.
- Get Complementary Products – Generates related product suggestions.
- Groq Chat Model1 – Powers recommendation generation.
- Code in JavaScript – Formats recommendations into HTML.
- Send Tips to User1 – Sends the upsell email with recommendations.

Why use this?

- Automates the full post-purchase lifecycle without manual intervention
- Improves customer experience with timely and helpful communication
- Increases repeat purchases through personalized upsell emails
- Reduces support queries by proactively sending usage guidance
- Scales easily for growing eCommerce operations
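As referenced in the first step group, a Postgres query along these lines could drive the "Execute a SQL query" node. The table and column names (orders, status, created_at, and so on) are hypothetical assumptions; adapt them to your own schema.

```sql
-- Hypothetical query for the "Execute a SQL query" node.
-- Table and column names are illustrative; adjust to your schema.
-- Fetches orders placed within the last polling window
-- (the Schedule Trigger runs every 2 minutes).
SELECT id,
       customer_email,
       product_name,
       estimated_delivery_at
FROM   orders
WHERE  status = 'placed'
  AND  created_at >= NOW() - INTERVAL '2 minutes'
ORDER  BY created_at;
```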
by isaWOW
Automatically scan every PBN site in your Google Sheet, check whether any removed client domain is still linked in the live HTML, and log all matches back into your tracking sheet — row by row, hands-free.

What this workflow does

Managing a network of PBN sites becomes risky the moment a client project gets offboarded. If the old link still exists on a PBN page, it can trigger a Google penalty, waste crawl budget, or simply point to a dead destination. Manually checking each PBN every time a project is removed is slow and error-prone.

This workflow solves that completely. It reads your full PBN list from Google Sheets, skips any site you have already checked, then loops through each remaining PBN one at a time. For every site, it fetches the live HTML using an HTTP request, pulls your current list of offboarded project domains from a separate sheet, and runs a domain-matching check directly against that HTML. If a match is found, the workflow writes the detected domain into the "Offboarded Links" column automatically. It then pauses briefly and moves on to the next PBN — no manual work required at any stage.

Perfect for SEO agencies and link builders who manage multiple PBN networks and need a reliable, automatic way to keep their offboarding records up to date.

Key features

- **Automatic HTML scanning**: Fetches the live page of every PBN site in real time using HTTP requests, so you are always checking the current version of the page — not a cached or outdated copy.
- **Smart row filtering**: Skips all PBN rows that have already been processed. The workflow only picks up rows where the "Offboarded Links" column is still empty, saving time and avoiding duplicate checks.
- **Domain matching against offboarded projects**: Pulls the complete list of removed client domains from your "offboard projects" sheet and checks each one against the fetched HTML. If any domain appears anywhere in the page source, it is flagged immediately.
- **One-by-one loop with pause**: Processes each PBN individually with a short pause between iterations. This keeps the workflow stable, avoids rate-limit issues, and makes it easy to monitor progress in real time.
- **Auto-update in Google Sheets**: Writes the matched domain directly into your PBNs tracking sheet as soon as it is detected. No copy-paste, no manual entry — your records stay current automatically.

How it works

1. Trigger the workflow manually — Click the execute button to start the scan whenever you need to run a check across your PBN network.
2. Read all PBN sites from Google Sheets — The workflow pulls every row from the "PBNs" sheet, which contains your site URLs, row numbers, and the "Offboarded Links" column.
3. Filter out already-processed rows — A code node scans through all rows and removes any row where "Offboarded Links" is already filled in. Only unprocessed PBNs move forward.
4. Loop through each PBN one by one — The remaining rows enter a loop. Each PBN is handled individually, one after the other, to keep the process clean and trackable.
5. Fetch the live HTML of the PBN site — An HTTP Request node sends a GET call to the PBN's URL and retrieves the full page HTML. Retries are enabled in case of temporary failures.
6. Read the offboarded project domain list — A separate Google Sheets node pulls all domains from Column A of the "offboard projects" sheet. This is your master list of removed client websites.
7. Match domains against the fetched HTML — A code node compares every domain from the offboarded list against the HTML content. If any domain is found, it is returned as a match; if none are found, the result is set to zero. (A minimal sketch of this matching logic follows this list.)
8. Prepare the data for the sheet update — A Set node organizes the matched domain, the row number, and the HTML into a clean payload that is ready to write back into Google Sheets.
9. Write the matched domain into the PBNs sheet — The workflow updates the correct row in the "PBNs" sheet by matching the row number and filling in the "Offboarded Links" column with the detected domain.
10. Pause before the next iteration — A short wait is added after each update. This gives the system time to settle and prevents any issues before the loop continues to the next PBN.
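As referenced in step 7, here is a minimal sketch of how the matching code node might work. It assumes the HTTP Request output exposes the page source as an `html` field and that each row of the "offboard projects" sheet exposes a `domain` field; both names are hypothetical and should be adjusted to your actual node outputs.

```javascript
// Hypothetical sketch of the domain-matching Code node.
// Assumed field names: `html` (fetched page source) and `domain`
// (Column A of the "offboard projects" sheet).
const html = String($json.html || '').toLowerCase();

// Pull every offboarded domain from the sheet-reading node
const domains = $('Read Offboarded Project Domains')
  .all()
  .map((row) => String(row.json.domain || ''));

// Normalise: drop the protocol, strip "www.", trim trailing slashes
const normalise = (d) =>
  d.toLowerCase()
    .replace(/^https?:\/\//, '')
    .replace(/^www\./, '')
    .replace(/\/+$/, '');

// First offboarded domain found anywhere in the page source, or 0 if none
const match = domains.map(normalise).find((d) => d && html.includes(d));

return [{ json: { matchedDomain: match || 0 } }];
```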
Setup requirements

Tools you will need:

- Active n8n instance (self-hosted or n8n Cloud)
- Google Sheets with OAuth access for reading and updating PBN data
- A "PBNs" sheet containing your site URLs and tracking columns
- A separate "offboard projects" sheet with removed client domains in Column A

Estimated setup time: 10–15 minutes

Configuration steps

1. Add credentials in n8n: Google Sheets OAuth API — used for reading the PBNs sheet and the offboard projects sheet, and for writing matched results back.

2. Set up the PBNs sheet: Create a Google Sheet with the following columns:
- **Site URL** — The full domain of each PBN site (e.g., example.com)
- **Offboarded Links** — Leave this blank initially. The workflow will auto-fill this column when a match is found.
- **row_number** — A unique number for each row. This is used by the workflow to identify and update the correct row. You can add this manually or use a simple formula.

3. Set up the offboard projects sheet: Create a second sheet (or a new tab) named "offboard projects" with the following structure:
- **Column A** — List every client domain that has been offboarded or removed. Include the full URL format (e.g., https://clientsite.com). The workflow reads this column directly.

4. Update the Google Sheet URL in the workflow:
- Open the "Read PBN Sites from Sheet" node and paste your Google Sheet URL.
- Open the "Read Offboarded Project Domains" node and make sure the sheet tab is set to your "offboard projects" sheet.
- Open the "Write Matched Domain to PBNs Sheet" node and confirm it is pointing to the correct PBNs sheet and tab.

5. Run the workflow: Click the manual trigger to start, watch the loop process each PBN one by one, then check your PBNs sheet — any matched offboarded domains will appear in the "Offboarded Links" column automatically.

Use cases

- **SEO agencies**: Quickly audit the entire PBN network after a client project is offboarded. Instead of checking 50+ sites manually, run this workflow once and get a complete report in your Google Sheet within minutes.
- **Link builders**: Keep your PBN health records accurate at all times. Every time a project is removed, run this workflow to confirm whether any PBN still carries the old link before it causes issues.
- **Freelance SEO consultants**: Offer a professional offboarding audit as part of your service. This workflow handles the technical scanning, and you just need to review the results and take action on any flagged links.
- **In-house SEO teams**: Automate a task that used to take hours of manual checking. Run the scan on a regular schedule or whenever a project goes offline, and trust that your tracking sheet stays up to date without any extra effort.

Customization options

- **Add more domains to check**: Simply add new rows to your "offboard projects" sheet in Column A. The workflow will automatically pick them up the next time it runs — no changes needed inside n8n.
- **Run on a schedule instead of manually**: Replace the manual trigger with a Schedule Trigger node. Set it to run daily, weekly, or at any interval that fits your workflow.
- **Add error notifications**: Connect a notification node (such as Slack, Email, or Discord) after the loop to alert you when the workflow finishes or when a match is detected. This way you do not need to check the sheet yourself.
- **Reset and re-scan**: If you need to re-check all PBN sites from scratch, simply clear the "Offboarded Links" column in your PBNs sheet. The filter node will treat all rows as unprocessed and scan them again on the next run.

Troubleshooting

- **HTTP request fails or returns no data**: Check that the PBN site URL in your sheet is correct and the site is currently live. The workflow has retries enabled, but if the site is down or blocked, the request will still fail. You can skip those rows manually or add an error-handling branch.
- **No matches found even though a link exists**: Make sure the domain in your "offboard projects" sheet matches what appears in the PBN's HTML. If the HTML contains "www.clientsite.com" but your sheet only has "clientsite.com", the match will still work because the code strips "www." — but double-check for any spelling differences or trailing slashes.
- **Offboarded Links column not updating**: Verify that the row_number in your PBNs sheet is correct and unique for each row. The update node uses row_number to find and write to the right row. If row_number is missing or duplicated, the update will fail silently.
- **Workflow stops mid-loop**: This can happen if Google Sheets returns a rate-limit error. The pause node helps reduce this, but if it still occurs, increase the wait time in the "Pause Before Next Iteration" node or reduce the batch size.
- **Filter skips rows that should be processed**: Check whether the "Offboarded Links" column has any hidden spaces or formatting. Even an empty-looking cell with a space character will be treated as filled. Clear those cells completely and run the workflow again.

Important notes

- This workflow is designed to be run manually or on a schedule. It does not trigger automatically when a new project is offboarded — you need to initiate the scan yourself.
- The HTTP request fetches the live HTML of each PBN site. Make sure you have permission or ownership of the sites you are scanning to avoid any access issues.
- The domain matching is based on simple text inclusion in the HTML source. It will detect domains in links, text content, meta tags, or anywhere else they appear on the page. If you need stricter matching (e.g., only inside anchor href attributes), the code node can be updated accordingly.
- Keep your "offboard projects" sheet updated regularly. The workflow only checks domains that are listed there at the time of the scan.

Support

Need help or custom development?
📧 Email: info@isawow.com
🌐 Website: https://isawow.com/