by emmanuelchilaka779
Gather leads into Mailchimp and automate your marketing and sales process.
by ZeroBounce
Form.io to Pipedrive: Advanced ZeroBounce Lead Vetting

This workflow automates the transition of new Form.io submissions into Pipedrive, using ZeroBounce for multi-layer validation (Validation + AI Scoring). Results are also output to Google Sheets for review.

🚀 How it Works

⚡ **Trigger & Verification:** Activates on new Form.io submissions and first checks whether an email address is provided.
- Email present: proceeds to credit check and validation.
- Email missing: customer details are added to the Rejected output for review with reason "Email missing".

💳 **Credit Management:** Before each ZeroBounce call, the workflow checks your account for sufficient credits to prevent node failures.
- Success: proceeds to Stage 1 (Validation).
- Failure: customer details are added to the Rejected output for retry with reason "Not enough credits".

✔️ **Stage 1: Email Validation:** Validates the email address with ZeroBounce.
- Valid: adds the result to the Accepted output and creates a person in Pipedrive.
- Invalid: logs the validation results and a rejection reason to a Google Sheet for review.
- Catch-all: proceeds to Stage 2 (Scoring).

🎯 **Stage 2: AI Email Scoring:** For "Catch-all" emails, the workflow requests ZeroBounce AI Scoring.
- High score (>= 9): syncs to Pipedrive.
- Medium/low score: suppressed and added to a Google Sheet with the assigned score for review.

📤 **Output Results:**
- Accepted: validation and scoring results go to the Accepted output and Pipedrive.
- Rejected: validation and scoring results go to the Rejected output.

💡 Key Benefits

- ✨ **Zero-Waste Sync:** Only "Valid" or "High-Scoring" leads reach your CRM.
- 🛡️ **Credit Safety:** Internal checks ensure you never trigger an API call without credits.
- 📊 **Detailed Suppressions:** Every rejected lead is categorized by reason (e.g. Email Missing, Invalid, Low Score, or Insufficient Credits).

🧩 Nodes used in this workflow

- ZeroBounce
- Form.io Trigger
- Pipedrive
- Google Sheets (or an alternative, e.g. Data Table)

📋 Setup Requirements

- **Form.io:** Connect via credentials to receive "Form Submission" events.
- **ZeroBounce:** Connect via API Key. Create one here.
- **Pipedrive:** Connect via API Key to create/update people for high-quality leads.
- **Google Sheets:** Connect via OAuth2 to append/update rows in a Google Sheets worksheet. Alternatively, swap these nodes out for any other data storage node, e.g. *n8n Data Table* or *Microsoft Excel*.

The sheets/tables can be created with these headers:

Accepted columns: Email,Name,Accepted At,Accepted Reason,ZB Status,ZB Sub Status,ZB Free Email,ZB Did You Mean,ZB Account,ZB Domain,ZB Domain Age Days,ZB SMTP Provider,ZB MX Found,ZB MX Record,ZB Score

Rejected columns: Email,Name,Rejected At,Rejected Reason,ZB Status,ZB Sub Status,ZB Free Email,ZB Did You Mean,ZB Account,ZB Domain,ZB Domain Age Days,ZB SMTP Provider,ZB MX Found,ZB MX Record,ZB Score
by ZeroBounce
Shopify to HubSpot: Advanced ZeroBounce Lead Vetting

This workflow automates the transition of new Shopify customers into HubSpot, using ZeroBounce for multi-layer validation (Validation + AI Scoring). Results are also output to Google Sheets for review.

🧩 Nodes used in this workflow

- ZeroBounce
- Shopify
- HubSpot
- Google Sheets (or an alternative, e.g. Data Table)

💡 Key Benefits

- ✨ **Zero-Waste Sync:** Only "Valid" or "High-Scoring" leads reach your CRM.
- 🛡️ **Credit Safety:** Internal checks ensure you never trigger an API call without credits.
- 📊 **Detailed Suppressions:** Every rejected lead is categorized by reason (e.g. Email Missing, Invalid, Low Score, or Insufficient Credits).

🚀 How it Works

⚡ **Trigger & Verification:** Activates on new Shopify customers and first checks whether an email address is provided.
- Email present: proceeds to credit check and validation.
- Email missing: customer details are added to the Rejected output for review with reason "Email missing".

💳 **Credit Management:** Before each ZeroBounce call, the workflow checks your account for sufficient credits to prevent node failures.
- Success: proceeds to Stage 1 (Validation).
- Failure: customer details are added to the Rejected output for retry with reason "Not enough credits".

✔️ **Stage 1: Email Validation:** Validates the email address with ZeroBounce.
- Valid: adds the result to the Accepted output and creates a contact in HubSpot.
- Invalid: logs the validation results and a rejection reason to an n8n Data Table for review.
- Catch-all: proceeds to Stage 2 (Scoring).

🎯 **Stage 2: AI Email Scoring:** For "Catch-all" emails, the workflow requests ZeroBounce AI Scoring.
- High score (>= 9): syncs to HubSpot.
- Medium/low score: suppressed and added to an n8n Data Table with the assigned score for review.

📤 **Output Results:**
- Accepted: validation and scoring results go to the Accepted output and HubSpot.
- Rejected: validation and scoring results go to the Rejected output.
📋 Setup Requirements

- **Shopify:** Connect via OAuth2 to watch for "Customer Created" events (Topic: customers/create).
- **ZeroBounce:** Connect via API Key. Create one here.
- **HubSpot:** Connect via OAuth2 to create/update contacts for high-quality leads.
- **Google Sheets:** Connect via OAuth2 to append/update rows in a Google Sheets worksheet. Alternatively, swap these nodes out for any other data storage node, e.g. *n8n Data Table* or *Microsoft Excel*.

The sheets/tables can be created with these headers:

Accepted columns: ID,Email,First Name,Last Name,Accepted At,Accepted Reason,ZB Status,ZB Sub Status,ZB Free Email,ZB Did You Mean,ZB Account,ZB Domain,ZB Domain Age Days,ZB SMTP Provider,ZB MX Found,ZB MX Record,ZB Score

Rejected columns: ID,Email,First Name,Last Name,Rejected At,Rejected Reason,ZB Status,ZB Sub Status,ZB Free Email,ZB Did You Mean,ZB Account,ZB Domain,ZB Domain Age Days,ZB SMTP Provider,ZB MX Found,ZB MX Record,ZB Score
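As an illustration of the column mapping above, one Accepted row could be assembled from a Shopify customer object and a ZeroBounce result like this. The Shopify field names (id, email, first_name, last_name) follow Shopify's customer payload; the helper itself is hypothetical and covers only a subset of the columns:

```javascript
// Build one row matching the Accepted column headers listed above
// (subset shown; extend with the remaining ZB fields as needed).
function toAcceptedRow(customer, zb) {
  return {
    'ID': customer.id,
    'Email': customer.email,
    'First Name': customer.first_name,
    'Last Name': customer.last_name,
    'Accepted At': new Date().toISOString(),
    'Accepted Reason': zb.status === 'valid' ? 'Valid email' : 'High AI score',
    'ZB Status': zb.status,
    'ZB Sub Status': zb.sub_status,
    'ZB Free Email': zb.free_email,
    'ZB Domain': zb.domain,
    'ZB Score': zb.score,
  };
}

const row = toAcceptedRow(
  { id: 1001, email: 'jane@example.com', first_name: 'Jane', last_name: 'Doe' },
  { status: 'valid', sub_status: '', free_email: false, domain: 'example.com', score: 10 }
);
console.log(row['Accepted Reason']); // "Valid email"
```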
by Asraful Attare
**Who's it for**
This workflow is built for sales and marketing teams who collect Facebook Lead Ads into Google Sheets and want to automatically sync those leads into Perfex CRM without manual data entry or duplicate records.

**What it does**
The workflow checks Google Sheets for new rows where lead_status is marked as CREATED. For each lead, it searches Perfex CRM by email address via the Rest API module for the Perfex CRM plugin. If the lead already exists, the workflow updates the sheet with a clickable CRM link and marks the row as ADDED. If the lead does not exist, it creates a new lead in Perfex CRM through the REST API and then updates the sheet accordingly. This keeps your CRM up to date while preventing duplicate lead creation.

**How it works**
1. A scheduled trigger runs the workflow.
2. Google Sheets retrieves leads with status CREATED.
3. The workflow searches Perfex CRM using the REST API.
4. If found: update the sheet.
5. If not found: create the lead in the CRM.
6. The sheet is updated with the CRM record link.

**Requirements**
- Google Sheets account
- Perfex CRM with the Rest API module plugin enabled
- API token configured in n8n credentials (do not hardcode it)

**How to customize**
You can modify the lead field mappings, assign leads to different staff members, add tags, or adjust the schedule interval and batch size depending on your lead volume.

**Support / Contact**
If you need help setting up or customizing this workflow, feel free to reach out:
- Email: asrafulattare@aftie.eu
- WhatsApp: +1 (760) 933-7005 (WhatsApp only)
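The find-or-create branching at the heart of this workflow can be sketched as follows. This is an illustration of the duplicate check only; in the real workflow the lookup happens through Perfex's REST API and an If node, not an in-memory list:

```javascript
// Decide whether a sheet lead needs a new CRM record or only a
// link back to an existing one (this is what prevents duplicates).
function planSync(lead, existingLeads) {
  const match = existingLeads.find(
    (l) => (l.email || '').toLowerCase() === lead.email.toLowerCase()
  );
  return match
    ? { action: 'update_sheet_with_link', crmId: match.id, lead_status: 'ADDED' }
    : { action: 'create_crm_lead', lead_status: 'ADDED' };
}

const existing = [{ id: 42, email: 'ann@example.com' }];
console.log(planSync({ email: 'Ann@Example.com' }, existing).action); // "update_sheet_with_link"
console.log(planSync({ email: 'new@example.com' }, existing).action); // "create_crm_lead"
```

Matching case-insensitively on email is the simplest dedup key; swap in phone or lead ID if your sheet carries one.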
by Marth
How It Works: The 5-Node Security Flow

This workflow performs a scheduled file integrity audit.

**1. Scheduled Check (Cron Node)**
This is the workflow's trigger. It schedules the workflow to run at a specific, regular interval.
- **Function:** Runs on a set schedule, for example daily at 3:00 AM.
- **Process:** The Cron node automatically initiates the workflow on its schedule, ensuring consistent file integrity checks without manual intervention.

**2. List Files & Checksums (Code Node)**
This node acts as your static database, defining which files to monitor and their known-good checksums.
- **Function:** Stores the file paths and their verified checksums in a single, easy-to-update array.
- **Process:** It defines the file paths and their valid checksums, which are then passed on to subsequent nodes for processing.

**3. Get Remote File Checksum (SSH Node)**
This node connects to your remote server to get the current checksum of the file being monitored.
- **Function:** Executes a command on your server via SSH.
- **Process:** It runs a command like `sha256sum /path/to/file` on the server. The current checksum is captured and passed to the next node for comparison.

**4. Checksums Match? (If Node)**
This is the core detection logic. It compares the newly retrieved checksum from the server with the known-good checksum you stored.
- **Function:** Compares the two checksum values.
- **Process:** If the checksums **do not match**, the file has changed and the workflow is routed to the notification node. If they match, the workflow ends safely.

**5. Send Alert (Slack Node) / End Workflow (No-Op Node)**
These nodes represent the final action of the workflow.
- **Function:** Responds to a detected file change.
- **Process:** If the checksums don't match, the **Slack** node sends a detailed alert with the modified file, the expected checksum, and the detected checksum. If they match, the **No-Op** node ends the workflow without any notification.
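A minimal sketch of steps 2-4: the static file list, parsing the sha256sum output returned over SSH, and the comparison the If node performs. The path and checksum below are placeholders (the checksum shown is the well-known SHA-256 of empty input):

```javascript
// Known-good reference list, as kept in the "List Files & Checksums"
// Code node. Replace with the values you generated on your own server.
const filesToCheck = [
  { path: '/var/www/html/index.php',
    checksum: 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855' },
];

// sha256sum prints "<checksum>  <path>"; keep only the checksum field.
function parseSha256sum(stdout) {
  return stdout.trim().split(/\s+/)[0];
}

// The comparison performed by the "Checksums Match?" If node.
function checksumsMatch(expected, actual) {
  return expected.toLowerCase() === actual.toLowerCase();
}

const remote = parseSha256sum(
  'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  /var/www/html/index.php'
);
console.log(checksumsMatch(filesToCheck[0].checksum, remote)); // true, so no alert
```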
How to Set Up

Implementing this cybersecurity monitor in your n8n instance is quick and straightforward.

**1. Prepare Your Credentials & Server**
Before building the workflow, ensure all necessary accounts are set up and their credentials are ready.
- **SSH Credential:** Set up an SSH credential in n8n with your server's hostname, port, and authentication method (e.g., private key or password). The SSH user must have permission to run `sha256sum` on the files you want to monitor.
- **Slack Credential:** Set up a Slack credential in n8n and note the Channel ID of your security alert channel (e.g., #security-alerts).
- **Get Checksums:** This is a critical step. Manually run `sha256sum [file_path]` on your server for each file you want to monitor. Copy and save the generated checksum values; these are the "known-good" checksums you will use as your reference.

**2. Import the Workflow JSON**
In your n8n instance, navigate to the "Workflows" section. Click the "New" or "+" icon, then select "Import from JSON." Paste the provided JSON code into the import dialog and import the workflow.

**3. Configure the Nodes**
Customize the imported workflow to fit your specific monitoring needs.
- **Scheduled Check (Cron):** Set the schedule to your preference (e.g., daily at 3:00 AM).
- **List Files & Checksums (Code):** Open this node and edit the filesToCheck array. Enter your actual server file paths and paste the "known-good" checksums you obtained in step 1.
- **Get Remote File Checksum (SSH):** Select your SSH credential.
- **Send Alert (Slack):** Select your Slack credential and replace YOUR_SECURITY_ALERT_CHANNEL_ID with your actual Channel ID.

**4. Test and Activate**
Verify that your workflow is working correctly before setting it live.
- **Manual Test:** Run the workflow manually. Verify that it connects to the server and checks the files without sending an alert (assuming the files haven't changed).
- **Verify:** To test the alert, manually change one of the files on your server and run the workflow again. Check your Slack channel to confirm the alert is sent correctly.
- **Activate:** Once you're confident in its function, activate the workflow. n8n will then automatically audit the integrity of your critical files on the schedule you set.
by Monfort N. Brian | 宁俊
Monitor npm package downloads from Telegram with commands, weekly digests, and milestone alerts

Track npm package adoption directly from Telegram. This workflow provides on-demand download stats, automated weekly and monthly reports, growth trends, and milestone alerts using the public npm Downloads API.

**What it does**
The workflow combines Telegram commands with scheduled reports to provide quick insights into package usage.

**Commands**
- /downloads - all-time totals, sorted highest first
- /weekly - last 7 days per package
- /status - weekly and all-time combined
- /trending - this week vs last week
- /help - available commands

**Automated reports**
- Weekly digest with package performance
- Monthly summary comparing usage trends
- Daily milestone checker that sends alerts when packages cross download thresholds

**Smart defaults**
- Typo-tolerant command matching via Levenshtein distance
- Slash optional, case-insensitive
- Auto-discovers packages from the npm registry; new packages appear without code changes
- Milestone check is silent on days with no crossings, so there is zero noise
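The typo-tolerant matching can be sketched with the classic dynamic-programming Levenshtein distance. This is an illustrative implementation with an assumed 2-edit tolerance, not the workflow's exact code:

```javascript
// Classic dynamic-programming edit distance between two strings.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Match user input to the closest known command: slash optional,
// case-insensitive, tolerating up to 2 edits.
const COMMANDS = ['downloads', 'weekly', 'status', 'trending', 'help'];
function matchCommand(input, maxDistance = 2) {
  const text = input.trim().toLowerCase().replace(/^\//, '');
  let best = null;
  for (const cmd of COMMANDS) {
    const d = levenshtein(text, cmd);
    if (d <= maxDistance && (best === null || d < best.distance)) {
      best = { command: cmd, distance: d };
    }
  }
  return best; // null when nothing is close enough
}

console.log(matchCommand('/downlaods').command); // "downloads" (transposition = 2 edits)
console.log(matchCommand('Weekly').command);     // "weekly"
```

Returning null for unrecognized input is what lets the bot fall back to the /help text instead of guessing wildly.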
by amudhan
Companion workflow for Switch node docs
by Kidlat
This workflow automates the process of extracting and qualifying leads from LinkedIn post comments based on your Ideal Customer Profile (ICP) criteria. It turns LinkedIn engagement into a structured, downloadable list of qualified leads, without manual review.

**Who's this for**
- Sales and business development teams generating outbound lead lists
- Marketing teams running LinkedIn engagement campaigns
- Recruiters sourcing candidates with specific job titles
- Operators who want to convert LinkedIn comments into actionable data

**What problem does this solve**
Manually reviewing LinkedIn post comments to identify relevant prospects is slow, repetitive, and error-prone. This workflow automates the entire process, from scraping comments to enriching profiles and filtering by ICP, saving hours of manual work and ensuring consistent results.

**What this workflow does**
1. Collects a LinkedIn post URL and ICP criteria via a form
2. Scrapes post comments using Apify (supports up to 1,000 comments)
3. Deduplicates commenters and enriches profiles with LinkedIn data
4. Filters profiles by selected job titles and countries
5. Exports matched leads as a downloadable CSV file

**How to set up**
1. Create an Apify account and generate an API key
2. Add your Apify credentials in n8n (Settings → Credentials → Apify API)
3. Execute the workflow and submit a LinkedIn post URL and ICP criteria

**Requirements**
- Apify account with API access. Apify offers a free tier with $5 in monthly credits, which is enough to test this workflow on smaller LinkedIn posts.

**How to customize the workflow**
- Update job titles and target countries in the Form Trigger
- Increase pagination limits to support larger posts
- Replace the CSV export with a CRM, Google Sheets, or database integration
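Steps 3-4 (deduplication and ICP filtering) can be sketched as below. The field names (profileUrl, jobTitle, country) are assumptions about the enriched profile shape, not the scraper's actual output schema:

```javascript
// Deduplicate commenters by profile URL, then keep only profiles whose
// job title matches one of the ICP titles and whose country is targeted.
function filterByIcp(commenters, icp) {
  const seen = new Set();
  return commenters
    .filter((c) => {
      const key = c.profileUrl.toLowerCase();
      if (seen.has(key)) return false; // drop repeat commenters
      seen.add(key);
      return true;
    })
    .filter(
      (c) =>
        icp.jobTitles.some((t) =>
          c.jobTitle.toLowerCase().includes(t.toLowerCase())
        ) && icp.countries.includes(c.country)
    );
}

const leads = filterByIcp(
  [
    { profileUrl: 'https://linkedin.com/in/a', jobTitle: 'Head of Growth', country: 'Germany' },
    { profileUrl: 'https://linkedin.com/in/a', jobTitle: 'Head of Growth', country: 'Germany' }, // duplicate
    { profileUrl: 'https://linkedin.com/in/b', jobTitle: 'Accountant', country: 'Germany' },
  ],
  { jobTitles: ['growth', 'marketing'], countries: ['Germany'] }
);
console.log(leads.length); // 1
```

Substring matching on titles is what lets "Head of Growth" satisfy an ICP entry of "growth" even though the exact titles differ.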
by Robert Breen
🧑💻 Description

This workflow checks a Monday.com board/group for items with Status = "Stuck" and sends a Slack alert (e.g., to a user or channel). Great for nudging owners on blocked work without manual chasing.

⚙️ Setup Instructions

**1️⃣ Connect Monday.com Node**
- In Monday.com, go to Admin → API and copy your Personal API Token (docs: Generate Monday API Token).
- In n8n → Credentials → New → Monday.com API, paste your token and save.
- Open the **Get many items** node, choose your credential, and set your Board ID and Group ID (these must match where your items live).

**2️⃣ Connect Slack API**
- Create an app at https://api.slack.com/apps
- Under OAuth & Permissions, add scopes: chat:write (send messages); channels:read, groups:read, users:read (to look up channels and users)
- Install the app to your workspace and copy the Bot User OAuth Token
- In n8n → Credentials → New → Slack OAuth2 API, paste the token and save
- In the Slack node ("Alert Team"), select your Slack credential and pick a user or channel

🧠 How it works
- **Get many items** (Monday.com): pulls items from your board/group
- **Set Columns:** maps item fields (Name, Status, Due Date)
- **Filter for Stuck Items:** keeps only items where Status = "Stuck"
- **Alert Team** (Slack): posts a message like "<Item Name> task is stuck"

✅ Tips
- Adjust the Status column index/field mapping if your board uses a different column order or a custom status label.
- Point the Slack node to a channel (for team visibility) or a user (for direct nudges).
- Add a Schedule Trigger if you want automatic daily/weekly checks.

📬 Contact
Need help mapping custom columns or routing alerts by owner?
- 📧 robert@ynteractive.com
- 🔗 Robert Breen
- 🌐 ynteractive.com
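The Filter and message-building steps can be sketched as a small function. The item shape here is illustrative; adjust the field names to however your Set Columns node maps the Monday.com item:

```javascript
// Keep only items whose Status column equals "Stuck", then build the
// Slack alert text posted by the "Alert Team" node for each of them.
function stuckAlerts(items) {
  return items
    .filter((item) => item.status === 'Stuck')
    .map((item) => `${item.name} task is stuck`);
}

const alerts = stuckAlerts([
  { name: 'Draft proposal', status: 'Stuck', dueDate: '2024-06-01' },
  { name: 'Ship release', status: 'Working on it', dueDate: '2024-06-03' },
]);
console.log(alerts); // [ 'Draft proposal task is stuck' ]
```

If your board uses a custom label instead of "Stuck", change the comparison string (or pass it in as a parameter) to match.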
by Antonio Gasso
Overview

Stop manually creating folder structures for every new client or project. This workflow provides a simple form where users enter a name, and it automatically duplicates your template folder structure in Google Drive, replacing all placeholders with the submitted name.

**What This Workflow Does**
- Displays a form where users enter a name (client, project, event, etc.)
- Creates a new main folder in Google Drive
- Calls a Google Apps Script to duplicate your entire template structure
- Replaces all {{NAME}} placeholders in file and folder names

**Key Features**
- **Simple form interface** - no technical knowledge required to use
- **Recursive duplication** - copies all subfolders and files
- **Smart placeholders** - automatically replaces {{NAME}} everywhere
- **Production-ready** - works immediately after setup

**Prerequisites**
- Google Drive account with OAuth2 credentials in n8n
- Google Apps Script deployment (code below)
- Template folder in Drive using {{NAME}} as a placeholder

**Setup**

Step 1: Create your template folder

```
📁 {{NAME}} - Project Files
├── 📁 01. {{NAME}} - Documents
├── 📁 02. {{NAME}} - Assets
├── 📁 03. Deliverables
└── 📄 {{NAME}} - Brief.gdoc
```

Step 2: Deploy the Apps Script
1. Go to script.google.com
2. Create a new project and paste the code below
3. Deploy → New deployment → Web app
4. Execute as: Me | Access: Anyone
5. Copy the deployment URL

Step 3: Configure the workflow
Replace these placeholders:
- DESTINATION_PARENT_FOLDER_ID - where new folders are created
- YOUR_APPS_SCRIPT_URL - the URL from Step 2
- YOUR_TEMPLATE_FOLDER_ID - the folder to duplicate

Step 4: Test
Activate the workflow, open the form URL, submit a name, and check Drive!
Apps Script Code

```javascript
function doPost(e) {
  try {
    var params = e.parameter;
    var templateFolderId = params.templateFolderId;
    var name = params.name;
    var destinationFolderId = params.destinationFolderId;

    if (!templateFolderId || !name) {
      return jsonResponse({
        success: false,
        error: 'Missing required parameters: templateFolderId and name are required'
      });
    }

    var templateFolder = DriveApp.getFolderById(templateFolderId);

    if (destinationFolderId) {
      // Copy template contents into an existing destination folder
      var destinationFolder = DriveApp.getFolderById(destinationFolderId);
      copyContentsRecursively(templateFolder, destinationFolder, name);
      return jsonResponse({
        success: true,
        id: destinationFolder.getId(),
        url: destinationFolder.getUrl(),
        name: destinationFolder.getName(),
        mode: 'copied_to_existing',
        timestamp: new Date().toISOString()
      });
    } else {
      // Create a sibling folder of the template and copy into it
      var parentFolder = templateFolder.getParents().next();
      var newFolderName = replacePlaceholders(templateFolder.getName(), name);
      var newFolder = parentFolder.createFolder(newFolderName);
      copyContentsRecursively(templateFolder, newFolder, name);
      return jsonResponse({
        success: true,
        id: newFolder.getId(),
        url: newFolder.getUrl(),
        name: newFolder.getName(),
        mode: 'created_new',
        timestamp: new Date().toISOString()
      });
    }
  } catch (error) {
    return jsonResponse({ success: false, error: error.toString() });
  }
}

function replacePlaceholders(text, name) {
  var result = text;
  result = result.replace(/\{\{NAME\}\}/g, name);
  result = result.replace(/\{\{name\}\}/g, name.toLowerCase());
  result = result.replace(/\{\{Name\}\}/g, name);
  return result;
}

function copyContentsRecursively(sourceFolder, destinationFolder, name) {
  var files = sourceFolder.getFiles();
  while (files.hasNext()) {
    try {
      var file = files.next();
      var newFileName = replacePlaceholders(file.getName(), name);
      file.makeCopy(newFileName, destinationFolder);
      Utilities.sleep(150); // throttle to avoid Drive rate limits
    } catch (error) {
      Logger.log('Error copying file: ' + error.toString());
    }
  }
  var subfolders = sourceFolder.getFolders();
  while (subfolders.hasNext()) {
    try {
      var subfolder = subfolders.next();
      var newSubfolderName = replacePlaceholders(subfolder.getName(), name);
      var newSubfolder = destinationFolder.createFolder(newSubfolderName);
      Utilities.sleep(200);
      copyContentsRecursively(subfolder, newSubfolder, name);
    } catch (error) {
      Logger.log('Error copying subfolder: ' + error.toString());
    }
  }
}

function jsonResponse(data) {
  return ContentService
    .createTextOutput(JSON.stringify(data))
    .setMimeType(ContentService.MimeType.JSON);
}
```

**Use Cases**
- **Agencies** - client folder structure on new signup
- **Freelancers** - project folders from an intake form
- **HR Teams** - employee onboarding folders
- **Schools** - student portfolio folders
- **Event Planners** - event documentation folders

**Notes**
- Apps Script may take 60+ seconds for large structures
- The timeout is set to 5 minutes for complex templates
- Your Google account needs edit access to both the template and destination folders
by Nalin
Discover relevant contacts from target accounts using Octave intelligent prospecting

**Who is this for?**
Sales development teams, account-based marketing professionals, and RevOps teams who are tired of generic job title filtering that misses the real decision makers. Built for teams that need to find the right people based on actual responsibilities and business context, not just titles on LinkedIn.

**What problem does this solve?**
Most prospecting tools are flying blind when it comes to finding the right contacts. You search for "VP of Engineering" but miss the "Head of Platform" who actually owns your use case. You filter by "Marketing Director" but the "Growth Lead" is the real buyer. Traditional prospecting relies on job title matching, but titles vary wildly across companies. This workflow uses Octave's context engine to find contacts based on who actually does the work your solution impacts, regardless of their specific job title.

**What this workflow does**

Target Account Processing:
- Reads target account lists from Airtable (or other data sources)
- Processes company domains for intelligent contact discovery
- Handles batch processing for multiple target accounts

Context-Aware Contact Discovery:
- Uses Octave's prospector agent to find relevant stakeholders within target organizations
- Leverages your defined personas to identify the right people based on responsibilities, not just titles
- Analyzes organizational structure, role responsibilities, and KPIs to match contacts to your solution
- Discovers decision makers and influencers who might be missed by traditional job title searches

Structured Contact Output:
- Returns discovered contacts with complete profile information
- Includes LinkedIn profiles, contact details, and role context
- Organizes contacts by relevance and decision-making authority
- Exports contact lists back to Airtable for sales team action

**Setup**

Required credentials:
- Octave API key and workspace access
- Airtable API credentials (or your preferred contact management platform)
- Access to your target account list

Step-by-step configuration:
1. Set up the target account source: add your Airtable credentials to n8n, replace your-airtable-base-id and your-accounts-table-id with your actual account list, ensure your account list includes company domains for prospecting, and configure the trigger method (manual, scheduled, or webhook-based).
2. Configure the Octave prospector agent: add your Octave API credentials in n8n, replace your-octave-prospector-agent-id with your actual prospector agent ID, configure your prospector with relevant personas and role definitions, and test prospecting with sample companies to verify contact quality.
3. Set up the contact output destination: replace your-contacts-table-id with your target contact list table, configure field mapping between Octave output and your contact database, set up data validation and deduplication rules, and test contact creation and data formatting.
4. Customize contact selection: configure which personas to prioritize in your prospector agent, set relevance thresholds for contact discovery, define organizational levels to target (individual contributors vs. management), and adjust contact volume per account based on your outreach capacity.

Required account list format. Your Airtable (or data source) should include:
- Company Name
- Company Domain (required for prospecting)
- Account status/priority (optional)
- Target personas (optional)

**How to customize**

Prospector configuration, for contact discovery in your Octave prospector agent:
- **Persona Targeting:** define which of your Library personas to prioritize when prospecting
- **Role Responsibilities:** configure the specific responsibilities and KPIs that indicate a good fit
- **Organizational Level:** target specific levels (IC, manager, director, VP, C-level) based on your solution
- **Company Size Adaptation:** adjust the prospecting approach based on organization size and structure

Contact selection criteria, to refine who gets discovered:
- **Decision-Making Authority:** prioritize contacts with budget authority or implementation influence
- **Problem Ownership:** focus on roles that directly experience the pain points your solution solves
- **Technical Influence:** target contacts who influence technical decisions if relevant to your offering
- **Process Ownership:** identify people who own the processes your solution improves

Data integration, to adapt for different contact management systems:
- Replace Airtable with your CRM, database, or spreadsheet system
- Modify field mapping to match your contact database schema
- Add data enrichment steps for additional contact information
- Integrate with email platforms for immediate outreach

Batch processing, to configure for scale:
- Adjust processing volume based on API limits and prospecting quotas
- Add scheduling for regular account list updates
- Implement error handling for accounts that can't be prospected
- Set up monitoring of prospecting success rates

**Use Cases**
- Account-based marketing contact list generation
- Sales territory planning and contact mapping
- Competitive displacement campaign targeting
- Product expansion within existing customer accounts
- Event-based prospecting for specific personas
- Market research and competitive intelligence gathering
by Avkash Kakdiya
**How it works**
This workflow automatically scrapes LinkedIn job postings for a list of target companies and organizes the results in Google Sheets. Every Monday morning, it checks your company list, runs a LinkedIn job scrape using Phantombuster, waits for the data to be ready, and then fetches the results. Finally, it formats the job postings into a clean structure and saves them into a results sheet for easy analysis.

**Step-by-step**
1. **Start with the scheduled trigger.** The workflow runs automatically at 9:00 AM every Monday. It reads your "Companies Sheet" in Google Sheets and filters only rows marked with Status = Pending.
2. **Scrape LinkedIn jobs.** The workflow launches your Phantombuster agent with the LinkedIn profile URLs from the sheet, waits 3 minutes for the scraper to finish, then fetches the output CSV link containing the job posting results.
3. **Format the data.** The scraped data is cleaned and structured into fields: Company Name, Job Title, Job Description, Job Link, Date Posted, Location, and Employment Type.
4. **Save everything in Google Sheets.** The formatted job data is appended to your "Job Results" Google Sheet. Each entry includes a scrape date so you can track when the data was collected.

**Why use this?**
- Automates job market research and competitive hiring analysis.
- Collects structured job posting data from multiple companies at scale.
- Saves time by running on a schedule with no manual effort.
- Keeps all results organized in Google Sheets for easy review and sharing.
- Helps HR and recruitment teams stay ahead of competitors' hiring activity.
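The formatting step might look like this in a Code node. The raw field names (companyName, jobUrl, etc.) are assumptions about the Phantombuster CSV columns, not the agent's guaranteed schema; adjust them to your actual output:

```javascript
// Map one raw scraper row onto the sheet's columns and stamp the
// scrape date so you can track when the data was collected.
function formatJob(raw, scrapeDate) {
  return {
    'Company Name': raw.companyName || '',
    'Job Title': raw.jobTitle || '',
    'Job Description': raw.description || '',
    'Job Link': raw.jobUrl || '',
    'Date Posted': raw.postedDate || '',
    'Location': raw.location || '',
    'Employment Type': raw.employmentType || '',
    'Scrape Date': scrapeDate || new Date().toISOString().slice(0, 10),
  };
}

const row = formatJob(
  { companyName: 'Acme', jobTitle: 'Data Engineer', jobUrl: 'https://linkedin.com/jobs/1' },
  '2024-06-03'
);
console.log(row['Scrape Date']); // "2024-06-03"
```

Defaulting missing fields to empty strings keeps the appended sheet rows aligned even when the scraper omits a column.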