by Mezie
## What it does
Receives contact details via form, routes to appropriate Wiza API endpoints (email, phone, LinkedIn, or all), enriches data with verification, calculates quality scores (0-100), and stores results in both Airtable and HubSpot.

## Who's it for
Sales teams, recruiters, and marketing ops professionals who need to transform minimal contact info into complete, verified profiles at scale.

## Requirements
- n8n (self-hosted or cloud)
- Wiza API Key (with Email/Phone/LinkedIn Finder access)
- Airtable API Key (optional)
- HubSpot API Key (optional)

## How to set up
1. Import the workflow JSON into n8n
2. Configure Wiza, Airtable, and HubSpot credentials
3. Set up an Airtable base with the required columns (Full Name, Email, Phone, LinkedIn, Data Quality Score)
4. Activate the workflow and share the form URL

## How to customize
- Adjust quality scoring weights in the Code node (see the sketch below)
- Add custom fields to the form trigger
- Modify Airtable/HubSpot field mappings
- Change deduplication logic for emails
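For orientation, a minimal sketch of what the scoring logic in the Code node could look like is shown below. The weights and field names here are illustrative assumptions, not the template's actual values:

```javascript
// Hypothetical quality-score sketch for an n8n Code node.
// Weights and input field names are assumptions, not the template's values.
const weights = { email: 40, phone: 30, linkedin: 20, name: 10 };

return $input.all().map((item) => {
  const d = item.json;
  let score = 0;
  if (d.email && d.email_status === 'valid') score += weights.email; // verified email
  if (d.phone) score += weights.phone;
  if (d.linkedin) score += weights.linkedin;
  if (d.full_name) score += weights.name;
  item.json.data_quality_score = score; // written to the "Data Quality Score" column
  return item;
});
```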
by Dart
Automatically generate a meeting summary from your meetings through Fathom, save it to a Dart document, and create a review task with the Fathom link attached.

## What it does
This workflow activates when a Fathom meeting ends (via a webhook). It uses an AI model to generate a structured summary of the meeting. The workflow then saves the summary to a Dart document and creates a review task with the Fathom link attached.

## Who's it for
- Teams or individuals needing automatic meeting notes.
- Project managers tracking reviews and actions from meetings.
- Users of Fathom and Dart who want to streamline their documentation and follow-up process.

## How to set up
1. Import the workflow into n8n.
2. Connect your Dart account (it will need workspace and folder access).
3. Add your PROD webhook link from the webhook node to your Fathom API settings.
4. Replace the dummy Folder ID and Dartboard ID with your actual target IDs.
5. Choose your preferred AI model for generating the summaries.

## Requirements
- n8n account
- Connected Dart account
- Connected Fathom account (with access to API webhooks)

## How to customize the workflow
- Edit the AI prompt to adjust the tone, style, or format of the meeting summaries.
- Add, remove, or change the summary sections to match your needs (e.g., Key Takeaways, Action Items, Next Steps).
by Omer Fayyaz
This n8n template implements a Calendly Booking & Cancellation Automation Hub that automatically processes Calendly webhook events, logs data to Google Sheets, and sends intelligent Slack notifications.

## Who's it for
This template is designed for professionals, teams, and businesses who use Calendly for scheduling and want to automate their booking management workflow. It's perfect for:
- **Sales teams** who need instant notifications about new bookings and cancellations
- **Service providers** (consultants, coaches, therapists) who want to track appointments automatically
- **Businesses** that need centralized logging of all booking events for analytics
- **Teams** that want smart categorization of urgent bookings and last-minute cancellations
- **Organizations** requiring automated follow-up workflows based on booking status

## How it works / What it does
This workflow creates a comprehensive Calendly automation system that automatically processes booking confirmations and cancellations. The system:

**Listens for Calendly events** via webhook trigger for:
- invitee.created - New booking confirmations
- invitee.canceled - Booking cancellations

**Routes events intelligently** using a Switch node to separate booking and cancellation processing.

**For Bookings:**
- Extracts and transforms all booking data (invitee info, event details, timing, location, guests)
- Calculates computed fields (formatted dates/times, duration, days until event, urgency flags)
- Detects urgent bookings (same-day or next-day appointments) for priority handling
- Logs complete booking information to Google Sheets
- Sends formatted Slack notifications with meeting links and reschedule/cancel options

**For Cancellations:**
- Extracts cancellation details (reason, who canceled, timing)
- Categorizes cancellations into three types (a sketch of this logic appears at the end of this listing):
  - Last Minute (within 24 hours of event) - High priority follow-up
  - Standard (upcoming events) - Normal priority
  - Past Event (already occurred) - Low priority
- Calculates hours before event for timing analysis
- Logs cancellation data to Google Sheets
- Sends categorized Slack alerts with follow-up priority indicators

**Data Management:**
- Stores all bookings in a dedicated Google Sheets tab
- Stores all cancellations in a separate Google Sheets tab
- Maintains complete event history for analytics and reporting

## How to set up
### 1. Configure Calendly Webhook Trigger
- Go to developer.calendly.com
- Create an OAuth2 application or use a Personal Access Token
- In n8n, add Calendly OAuth2 credentials
- The workflow automatically registers webhooks for invitee.created and invitee.canceled events
- Ensure your Calendly account has the necessary permissions

### 2. Set up Google Sheets
- Create a Google Sheets spreadsheet with two tabs:
  - Bookings - For logging new booking confirmations
  - Cancellations - For logging cancelled appointments
- Configure Google Sheets OAuth2 credentials in n8n
- Update the document ID in both Google Sheets nodes:
  - "Log to Bookings Sheet1" node
  - "Log to Cancellations Sheet" node
- The workflow uses auto-mapping, so ensure your sheet headers match the data fields

### 3. Configure Slack Notifications
- Create a Slack app at api.slack.com
- Add Bot Token Scopes: chat:write, channels:read
- Install the app to your workspace
- Add Slack OAuth2 credentials in n8n
- Update the channel name in both Slack nodes (default: "general")
- Customize notification messages if needed

### 4. Test the Workflow
- Activate the workflow in n8n
- Create a test booking in Calendly
- Verify that:
  - Data appears in Google Sheets
  - The Slack notification is received
  - All fields are correctly populated
- Test the cancellation flow by canceling a booking

### 5. Customize (Optional)
- Adjust urgency detection logic (currently same-day or next-day)
- Modify Slack notification formatting
- Add email notifications using Email nodes
- Integrate with CRM systems (HubSpot, Salesforce, etc.)
- Add follow-up email automation

## Requirements
- **Calendly account** with active scheduling links
- **Google Sheets account** with a spreadsheet set up
- **Slack workspace** with app installation permissions
- **n8n instance** (self-hosted or cloud)
- **OAuth2 credentials** for Calendly, Google Sheets, and Slack

## How to customize the workflow
**Modify Urgency Detection**
- Edit the "Check Urgency" IF node to change what constitutes an urgent booking
- Currently flags same-day or next-day bookings
- Adjust the days_until_event threshold as needed

**Enhance Slack Notifications**
- Customize message formatting in Slack nodes
- Add emoji or formatting to match your team's style
- Include additional fields from the booking data
- Add @mentions for urgent bookings

**Add Email Notifications**
- Insert Email nodes after Slack notifications
- Send confirmation emails to invitees
- Notify team members via email
- Create email templates for different event types

**Integrate with CRM**
- Add HTTP Request nodes to sync bookings to your CRM
- Update contact records when bookings are created
- Create opportunities or deals from booking data
- Sync cancellation reasons for analysis

**Add Analytics**
- Create additional Google Sheets tabs for analytics
- Use formulas to calculate booking rates and cancellation rates
- Track popular time slots and event types
- Monitor team member availability

**Customize Data Fields**
- Modify the "Transform Booking Data" and "Transform Cancellation Data" Set nodes
- Add custom fields based on your Calendly form questions
- Extract additional metadata from the webhook payload
- Calculate business-specific metrics

## Key Features
- **Automatic event processing** - No manual intervention required
- **Smart urgency detection** - Identifies same-day and next-day bookings automatically
- **Intelligent cancellation categorization** - Classifies cancellations by timing and priority
- **Comprehensive data extraction** - Captures all booking details including guests, questions, and metadata
- **Dual logging system** - Separate sheets for bookings and cancellations
- **Rich Slack notifications** - Formatted messages with meeting links and action buttons
- **Computed fields** - Automatically calculates duration, days until event, formatted dates/times
- **Error handling** - Nodes configured with continueRegularOutput to prevent workflow failures
- **Scalable architecture** - Handles high-volume booking scenarios

## Use Cases
- **Sales team automation** - Instant notifications when prospects book demos
- **Consultant scheduling** - Track all client appointments in one place
- **Service business management** - Monitor bookings and cancellations for service providers
- **Team calendar coordination** - Keep team members informed about schedule changes
- **Analytics and reporting** - Build dashboards from logged booking data
- **Customer relationship management** - Sync booking data with CRM systems
- **Follow-up automation** - Trigger email sequences based on booking status
- **Resource planning** - Analyze booking patterns to optimize scheduling

## Data Fields Captured
**Booking Data**
- Event ID, invitee name, email, first name
- Event name, start/end times (ISO format)
- Formatted date and time (human-readable)
- Timezone, duration in minutes
- Meeting URL (Google Meet, Zoom, etc.)
- Reschedule and cancel URLs
- Location type (virtual, in-person, etc.)
- Guest count and guest emails
- Questions and answers (JSON format)
- Days until event, same-day flag
- Urgency status and label
- Processing timestamp

**Cancellation Data**
- Event ID, invitee name, email
- Original scheduled date and time
- Cancellation reason
- Who canceled (invitee/host)
- Canceler type
- Hours before event
- Last-minute flag (< 24 hours)
- Cancellation category and priority
- Cancellation timestamp

## Workflow Architecture
The workflow uses a routing pattern to handle different event types:
1. Calendly Webhook Trigger → Receives all events
2. Route Event Type (Switch) → Separates bookings from cancellations
3. Parallel Processing → Each path processes independently
4. Data Transformation → Set nodes extract and format data
5. Intelligent Routing → IF/Switch nodes categorize by urgency/type
6. Data Logging → Google Sheets stores all events
7. Notifications → Slack alerts team members

## Example Scenarios
**Scenario 1: New Booking**
- Customer books a 30-minute consultation for tomorrow
- Workflow detects it's a next-day booking (urgent)
- Data logged to "Bookings" sheet with urgency flag
- Slack notification sent with 🚨 URGENT label
- Team member receives instant alert

**Scenario 2: Last-Minute Cancellation**
- Customer cancels meeting 2 hours before scheduled time
- Workflow categorizes it as a "last-minute" cancellation
- Data logged to "Cancellations" sheet with high priority
- Slack alert sent with 🚨 LAST MINUTE label
- Team can immediately follow up or fill the slot

**Scenario 3: Standard Cancellation**
- Customer cancels meeting 3 days in advance
- Workflow categorizes it as a "standard" cancellation
- Data logged with normal priority
- Slack notification sent with standard formatting
- Team can plan accordingly

This template transforms your Calendly scheduling into a fully automated booking management system, ensuring no booking goes unnoticed and providing valuable insights into your scheduling patterns and customer behavior.
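For reference, the cancellation categorization described above could be expressed roughly as follows in a Code node. The payload field names are assumptions based on the data fields listed, not the template's exact implementation:

```javascript
// Hypothetical cancellation-categorization sketch (field names are assumptions).
const eventStart = new Date($json.start_time);
const canceledAt = new Date($json.canceled_at);
const hoursBeforeEvent = (eventStart - canceledAt) / (1000 * 60 * 60);

let category, priority;
if (hoursBeforeEvent < 0) {
  category = 'Past Event';   // event already occurred
  priority = 'low';
} else if (hoursBeforeEvent < 24) {
  category = 'Last Minute';  // within 24 hours of the event
  priority = 'high';
} else {
  category = 'Standard';     // upcoming event, canceled in advance
  priority = 'normal';
}

return { json: { ...$json, hours_before_event: hoursBeforeEvent, category, priority } };
```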
by 21CEL
## How it works
This workflow runs a spider job in the background via Scrapyd, using a YAML config that defines selectors and parsing rules. When triggered, it schedules the spider with parameters (query, project ID, page limits, etc.). The workflow polls Scrapyd until the job finishes. Once complete, it fetches the output items, enriches them (parse JSONL, deduplicate, extract ID/part number/make/model/part name, normalize price), sorts results, and returns structured JSON. Optional debug outputs such as logs, HTML dumps, and screenshots are also collected.

## How to use
- Use the manual trigger for testing, or replace it with webhook, schedule, or other triggers.
- Adjust the runtime parameters (q, project_id, pages, etc.) directly in the workflow when running.
- The background spider config (YAML and spider code) must be updated separately; this workflow only orchestrates and enriches results, it does not define scraping logic.

## Requirements
- Scrapyd service for job scheduling & status tracking
- A deployed spider with a valid YAML config (adjust selectors there)
- JSON Lines output (items.jl) from the spider
- Endpoints for optional artifacts (logs, HTML, screenshots)
- n8n with HTTP, Wait, Code, and Aggregate nodes enabled

## Customising this workflow
- Update the YAML config if the target website structure changes
- Modify the enrichment code to extract different fields (e.g., categories, ratings); see the sketch below
- Adjust deduplication (cheapest, newest, or other logic)
- Toggle debug retrieval depending on performance/storage needs
- Extend the webhook response to integrate with databases, APIs, or downstream workflows
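As a rough illustration of the enrichment step, a Code node might parse the JSONL output and deduplicate by part number, keeping the cheapest offer. The field names (part_number, price) are assumptions about the spider's output schema:

```javascript
// Hypothetical enrichment sketch; field names are assumptions.
const lines = $json.data.split('\n').filter((l) => l.trim()); // items.jl fetched as text
const items = lines.map((l) => JSON.parse(l));

// Naive price normalization, assuming US-style strings like "$1,299.00".
const toPrice = (p) => parseFloat(String(p).replace(/[^0-9.]/g, ''));

// Deduplicate by part number, keeping the cheapest offer.
const best = {};
for (const it of items) {
  const price = toPrice(it.price);
  if (!best[it.part_number] || price < best[it.part_number].price) {
    best[it.part_number] = { ...it, price };
  }
}

return Object.values(best)
  .sort((a, b) => a.price - b.price)
  .map((json) => ({ json }));
```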
by Ruthwik
# n8n Google Sheets Monthly Order Logger
This n8n template records incoming e-commerce orders into Google Sheets, auto-creates a monthly sub-sheet, and adds a "Status" dropdown so your team can track fulfillment at a glance.

## Use cases
Centralize order logs, coordinate shipping across months, trigger customer updates (e.g., WhatsApp/Email) from status changes, and build lightweight ops dashboards.

## Good to know
- The Google Sheet ID is the part of the URL between /d/ and the next slash: https://docs.google.com/spreadsheets/d/<sheetId>/.
- A new sub-sheet is created every month (sheet name = current month, e.g., "September 2025"). If it already exists, the workflow appends to it.
- The Status column uses data validation with these options: Not Shipped, Pickup Scheduled, Shipped, InTransit, Delivered, Cancelled.
- Make sure the Google credential in n8n has edit access to the spreadsheet.
- The Webhook URL must be added in your Shopify Settings → Notifications → Webhooks page with the required Order events (e.g., Order creation, Order update, Order fulfillment). Reference: Shopify Webhooks Guide

## How it works
1. Order created (Webhook/Trigger): Receives a new order payload from your store/stack.
2. Config (set spreadsheetId): Stores the target Google Sheets spreadsheetId (copied from the URL).
3. Get Order Sheets metadata: Lists existing tabs to see if the tab for the current month already exists.
4. Generate Sheet Name: Computes the sheet name like {{ $now.format('MMMM YYYY') }} (a sketch of this check appears at the end of this listing).
5. If (sheet exists?):
   - True → Google Sheets Row values (existing): Prepares the row for append using the month tab. → Append to Existing Orders Sheet: Appends the order as a new row.
   - False → Set Sheet Starting row/col: Sets the starting cell (e.g., A1) for a brand-new month tab. → Create Month Sheet: Creates a new tab named for the current month. → Write Headers (A1:…): Writes the column headers. → Google Sheets Row values: Maps payload fields into the header order and applies validation to Status. → Append to Orders Sheet: Appends the first row into the newly created month tab.

## How to use
1. In Config, paste your spreadsheetId from the sheet URL and confirm your Google credential has edit access.
2. (Optional) Adjust the month-tab naming format to match your preference.
3. In Shopify → Settings → Notifications → Webhooks, add your n8n webhook URL and select the Order events (Order creation, Order update, Order fulfillment, etc.) you want to capture.
4. Deploy the workflow and send a sample order to the trigger; a new month tab will be created automatically on the first order of each month.

## Requirements
- n8n instance with the Google Sheets node credential configured.
- A Google Spreadsheet you own or can edit.
- A Shopify store with webhook events enabled (see Shopify Webhooks Guide).

## Customising this workflow
- Add/remove columns (e.g., taxes, discounts, warehouse notes).
- Change the Status list or add conditional formatting (e.g., green = Delivered).
- Chain automations: on Status update → send tracking links, COD confirmation, or delivery feedback forms.
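A minimal sketch of the month-tab existence check in a Code node might look like this. It assumes the metadata node returns the spreadsheet's sheet list; note that in Code nodes, Luxon's toFormat replaces the expression-level format() shown above:

```javascript
// Hypothetical sketch: decide whether the current month's tab already exists.
const sheetName = $now.toFormat('MMMM yyyy'); // e.g., "September 2025"
const titles = ($json.sheets || []).map((s) => s.properties.title);

return {
  json: {
    sheetName,
    sheetExists: titles.includes(sheetName),
  },
};
```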
by vinci-king-01
# Smart Blockchain Monitor with ScrapeGraphAI Risk Detection and Instant Alerts

## 🎯 Target Audience
- Cryptocurrency traders and investors
- DeFi protocol managers and developers
- Blockchain security analysts
- Financial compliance officers
- Crypto fund managers and institutions
- Risk management teams
- Blockchain developers monitoring smart contracts
- Digital asset custodians

## 🚀 Problem Statement
Manual blockchain monitoring is time-consuming and prone to missing critical events, often leading to delayed responses to high-value transactions, security threats, or unusual network activity. This template solves the challenge of real-time blockchain surveillance by automatically detecting, analyzing, and alerting on significant blockchain events using AI-powered intelligence and instant notifications.

## 🔧 How it Works
This workflow automatically monitors blockchain activity in real-time, uses ScrapeGraphAI to intelligently extract transaction data from explorer pages, performs sophisticated risk analysis, and instantly alerts your team about significant events across multiple blockchains.

### Key Components
1. **Blockchain Webhook** - Real-time trigger that activates when new blocks are detected
2. **Data Normalizer** - Standardizes blockchain data across different networks
3. **ScrapeGraphAI Extractor** - AI-powered transaction data extraction from blockchain explorers
4. **Risk Analyzer** - Advanced risk scoring based on transaction patterns and values
5. **Smart Filter** - Intelligently routes only significant events for alerts
6. **Slack Alert System** - Instant formatted notifications to your team

## 📊 Risk Analysis Specifications
The template performs comprehensive risk analysis with the following parameters:

| Risk Factor | Threshold | Score Impact | Description |
|-------------|-----------|--------------|-------------|
| High-Value Transactions | >$10,000 USD | +15 per transaction | Individual transactions exceeding threshold |
| Block Volume | >$1M USD | +20 points | Total block transaction volume |
| Block Volume | >$100K USD | +10 points | Moderate block transaction volume |
| Failure Rate | >10% | +15 points | Percentage of failed transactions in block |
| Multiple High-Value | >3 transactions | Alert trigger | Multiple large transactions in single block |
| Critical Failure Rate | >20% | Alert trigger | Extremely high failure rate indicator |

**Risk Levels** (a sketch of this scoring appears at the end of this listing):
- **High Risk**: Score ≥ 50 (Immediate alerts)
- **Medium Risk**: Score ≥ 25 (Standard alerts)
- **Low Risk**: Score < 25 (No alerts)

## 🌐 Supported Blockchains

| Blockchain | Explorer | Native Support | Transaction Detection |
|------------|----------|----------------|----------------------|
| Ethereum | Etherscan | ✅ Full | High-value, DeFi, NFT |
| Bitcoin | Blockchair | ✅ Full | Large transfers, institutional |
| Binance Smart Chain | BscScan | ✅ Full | DeFi, high-frequency trading |
| Polygon | PolygonScan | ✅ Full | Layer 2 activity monitoring |

## 🛠️ Setup Instructions
**Estimated setup time: 15-20 minutes**

### Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Slack workspace with webhook or bot token
- Blockchain data source (Moralis, Alchemy, or direct node access)
- Basic understanding of blockchain explorers

### Step-by-Step Configuration
#### 1. Install Community Nodes
```
# Install required community nodes
npm install n8n-nodes-scrapegraphai
```

#### 2. Configure ScrapeGraphAI Credentials
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure proper functionality

#### 3. Set up Slack Integration
- Add Slack OAuth2 or webhook credentials
- Configure your target channel for blockchain alerts
- Test message delivery to ensure notifications work
- Customize alert formatting preferences

#### 4. Configure Blockchain Webhook
- Set up the webhook endpoint for blockchain data
- Configure your blockchain data provider (Moralis, Alchemy, etc.)
- Ensure the webhook payload includes the block number and blockchain identifier
- Test webhook connectivity with sample data

#### 5. Customize Risk Parameters
- Adjust the high-value transaction threshold (default: $10,000)
- Modify risk scoring weights based on your needs
- Configure blockchain-specific risk factors
- Set failure rate thresholds for your use case

#### 6. Test and Validate
- Send test blockchain data to trigger the workflow
- Verify ScrapeGraphAI extraction accuracy
- Check risk scoring calculations
- Confirm Slack alerts are properly formatted and delivered

## 🔄 Workflow Customization Options
**Modify Risk Analysis**
- Adjust high-value transaction thresholds per blockchain
- Add custom risk factors (contract interactions, specific addresses)
- Implement whitelist/blacklist address filtering
- Configure time-based risk adjustments

**Extend Blockchain Support**
- Add support for additional blockchains (Solana, Cardano, etc.)
- Customize explorer URL patterns
- Implement chain-specific transaction analysis
- Add specialized DeFi protocol monitoring

**Enhance Alert System**
- Add email notifications alongside Slack
- Implement severity-based alert routing
- Create custom alert templates
- Add alert escalation rules

**Advanced Analytics**
- Add transaction pattern recognition
- Implement anomaly detection algorithms
- Create blockchain activity dashboards
- Add historical trend analysis

## 📈 Use Cases
- **Crypto Trading**: Monitor large market movements and whale activity
- **DeFi Security**: Track protocol interactions and unusual contract activity
- **Compliance Monitoring**: Detect suspicious transaction patterns
- **Institutional Custody**: Alert on high-value transfers and security events
- **Smart Contract Monitoring**: Track contract interactions and state changes
- **Market Intelligence**: Analyze blockchain activity for trading insights

## 🚨 Important Notes
- Respect ScrapeGraphAI API rate limits and terms of service
- Implement appropriate delays to avoid overwhelming blockchain explorers
- Keep your API credentials secure and rotate them regularly
- Monitor API usage to manage costs effectively
- Consider blockchain explorer rate limits for high-frequency monitoring
- Ensure compliance with relevant financial regulations
- Regularly update risk parameters based on market conditions

## 🔧 Troubleshooting
**Common Issues:**
- ScrapeGraphAI extraction errors: Check API key and account status
- Webhook trigger failures: Verify webhook URL and payload format
- Slack notification failures: Check bot permissions and channel access
- False positive alerts: Adjust risk scoring thresholds
- Missing transaction data: Verify blockchain explorer accessibility
- Rate limit errors: Implement delays and monitor API usage

**Support Resources:**
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Blockchain explorer API documentation
- Slack API documentation for advanced configurations
- Cryptocurrency compliance and regulatory guidelines
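To make the risk-scoring table concrete, the Risk Analyzer step could be sketched as follows in a Code node. The payload field names (transactions, value_usd, failed) are assumptions; only the thresholds come from the table above:

```javascript
// Hypothetical risk-scoring sketch mirroring the thresholds in the table above.
const txs = $json.transactions || [];
const highValue = txs.filter((t) => t.value_usd > 10000);
const blockVolume = txs.reduce((sum, t) => sum + t.value_usd, 0);
const failureRate = txs.length ? txs.filter((t) => t.failed).length / txs.length : 0;

let score = highValue.length * 15;           // +15 per high-value transaction
if (blockVolume > 1_000_000) score += 20;    // block volume > $1M
else if (blockVolume > 100_000) score += 10; // block volume > $100K
if (failureRate > 0.1) score += 15;          // failure rate > 10%

const riskLevel = score >= 50 ? 'high' : score >= 25 ? 'medium' : 'low';
// Explicit alert triggers from the table, plus any medium/high risk level.
const alert = highValue.length > 3 || failureRate > 0.2 || riskLevel !== 'low';

return { json: { ...$json, score, riskLevel, alert } };
```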
by vinci-king-01
# Copyright Infringement Detector with ScrapeGraphAI Analysis and Legal Action Automation

## 🎯 Target Audience
- Intellectual property lawyers and legal teams
- Brand protection specialists
- Content creators and publishers
- Marketing and brand managers
- Digital rights management teams
- Copyright enforcement agencies
- Media companies and publishers
- E-commerce businesses with proprietary content
- Software and technology companies
- Creative agencies protecting client work

## 🚀 Problem Statement
Manual monitoring for copyright infringement is time-consuming, often reactive rather than proactive, and can miss critical violations that damage brand reputation and revenue. This template solves the challenge of automatically detecting copyright violations, analyzing infringement patterns, and providing immediate legal action recommendations using AI-powered web scraping and automated legal workflows.

## 🔧 How it Works
This workflow automatically scans the web for potential copyright violations using ScrapeGraphAI, analyzes content similarity, determines legal action requirements, and provides automated alerts for immediate response to protect intellectual property rights.

### Key Components
1. **Schedule Trigger** - Runs automatically every 24 hours to monitor for new infringements
2. **ScrapeGraphAI Web Search** - Uses AI to search for potential copyright violations across the web
3. **Content Comparer** - Analyzes potential infringements and calculates similarity scores
4. **Infringement Detector** - Determines the legal action required and creates case reports
5. **Legal Action Trigger** - Routes cases based on severity and urgency
6. **Brand Protection Alert** - Sends urgent alerts for high-priority violations
7. **Monitoring Alert** - Tracks medium-risk cases for ongoing monitoring

## 📊 Detection and Analysis Specifications
The template monitors and analyzes the following infringement types (a sketch of this classification appears at the end of this listing):

| Infringement Type | Detection Method | Risk Level | Action Required |
|-------------------|------------------|------------|-----------------|
| Exact Text Match | High similarity score (>80%) | High | Immediate cease & desist |
| Paraphrased Content | Moderate similarity (50-80%) | Medium | Monitoring & evidence collection |
| Unauthorized Brand Usage | Brand name detection in content | High | Legal consultation |
| Competitor Usage | Known competitor domain detection | High | DMCA takedown |
| Image/Video Theft | Visual content analysis | High | Immediate action |
| Domain Infringement | Suspicious domain patterns | Medium | Investigation |

## 🛠️ Setup Instructions
**Estimated setup time: 30-35 minutes**

### Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Telegram or other notification service credentials
- Legal team contact information
- Copyrighted content database

### Step-by-Step Configuration
#### 1. Install Community Nodes
```
# Install required community nodes
npm install n8n-nodes-scrapegraphai
```

#### 2. Configure ScrapeGraphAI Credentials
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

#### 3. Set up Schedule Trigger
- Configure the monitoring frequency (default: every 24 hours)
- Adjust timing to match your business hours
- Set the appropriate timezone for your legal team

#### 4. Configure Copyrighted Content Database
- Update the Content Comparer node with your protected content
- Add brand names, slogans, and unique phrases
- Include competitor and suspicious domain lists
- Set similarity thresholds for different content types

#### 5. Customize Legal Action Rules
- Update the Infringement Detector node with your legal thresholds
- Configure action plans for different infringement types
- Set up case priority levels and response timelines
- Define evidence collection requirements

#### 6. Set up Alert System
- Configure a Telegram bot or other notification service
- Set up different alert types for different severity levels
- Configure legal team contact information
- Test alert delivery and formatting

#### 7. Test and Validate
- Run the workflow manually with test search terms
- Verify all detection steps complete successfully
- Test the alert system with sample infringement data
- Validate legal action recommendations

## 🔄 Workflow Customization Options
**Modify Detection Parameters**
- Adjust similarity thresholds for different content types
- Add more sophisticated text analysis algorithms
- Include image and video content detection
- Customize brand name detection patterns

**Extend Legal Action Framework**
- Add more detailed legal action plans
- Implement automated cease and desist generation
- Include DMCA takedown automation
- Add court filing preparation workflows

**Customize Alert System**
- Add integration with legal case management systems
- Implement tiered alert systems (urgent, high, medium, low)
- Add automated evidence collection and documentation
- Include reporting and analytics dashboards

**Output Customization**
- Add integration with legal databases
- Implement automated case tracking
- Create compliance reporting systems
- Add trend analysis and pattern recognition

## 📈 Use Cases
- **Brand Protection**: Monitor unauthorized use of brand names and logos
- **Content Protection**: Detect plagiarism and content theft
- **Legal Enforcement**: Automate initial legal action processes
- **Competitive Intelligence**: Monitor competitor content usage
- **Compliance Monitoring**: Ensure proper attribution and licensing
- **Evidence Collection**: Automatically document violations for legal proceedings

## 🚨 Important Notes
- Respect website terms of service and robots.txt files
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update the copyrighted content database
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Ensure compliance with local copyright laws and regulations
- Consult with legal professionals before taking automated legal action
- Maintain proper documentation for all detected violations

## 🔧 Troubleshooting
**Common Issues:**
- ScrapeGraphAI connection errors: Verify API key and account status
- False positive detections: Adjust similarity thresholds and detection parameters
- Alert delivery failures: Check notification service credentials
- Legal action errors: Verify legal team contact information
- Schedule trigger failures: Check timezone and interval settings
- Content analysis errors: Review the Code node's JavaScript logic

**Support Resources:**
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Copyright law resources and best practices
- Legal automation and compliance guidelines
- Brand protection and intellectual property resources
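A rough sketch of the classification logic behind the detection table might look like this. The similarity score is assumed to come from an upstream text-comparison step, and all inputs here are illustrative placeholders:

```javascript
// Hypothetical sketch of the Content Comparer / Infringement Detector logic.
// Thresholds mirror the detection table above; inputs are placeholders.
function classify(similarity, pageText, brandNames, competitorDomains, sourceDomain) {
  if (similarity > 0.8) {
    return { type: 'Exact Text Match', risk: 'high', action: 'Immediate cease & desist' };
  }
  if (competitorDomains.some((d) => sourceDomain.endsWith(d))) {
    return { type: 'Competitor Usage', risk: 'high', action: 'DMCA takedown' };
  }
  if (brandNames.some((b) => pageText.toLowerCase().includes(b.toLowerCase()))) {
    return { type: 'Unauthorized Brand Usage', risk: 'high', action: 'Legal consultation' };
  }
  if (similarity >= 0.5) {
    return { type: 'Paraphrased Content', risk: 'medium', action: 'Monitoring & evidence collection' };
  }
  return null; // below all thresholds: no case opened
}

// Example with placeholder inputs:
const verdict = classify(0.65, 'scraped page text', ['Acme'], ['rival.com'], 'blog.rival.com');
return { json: { verdict } };
```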
by phil
Generate royalty-free sound effects for all your projects: ASMR, YouTube videos, podcasts, and more.

This workflow generates unique AI-powered sound effects using the ElevenLabs Sound Effects API. Enter a text description of the sound you envision, and the workflow will generate it, save the MP3 file to your Google Drive, and instantly provide a link to listen to your creation. It is a powerful tool for quickly producing unique ASMR triggers, ambient sounds, or specific audio textures without any complex software.

## Who's it for
This template is ideal for:
- **Content Creators**: Generate royalty-free sound effects for videos, podcasts, and games on the fly.
- **Sound Designers & Foley Artists**: Quickly prototype and generate specific audio clips for projects from a simple text prompt.
- **Developers & Hobbyists**: Integrate AI sound effect generation into projects or simply experiment with the capabilities of the ElevenLabs API.

## How to set up
1. Configure API Key: Sign up for an ElevenLabs account and get your API key. In the "ElevenLabs API" node, create new credentials and add your ElevenLabs API key.
2. Connect Google Drive: Select the "Upload mp3" node. Create new credentials to connect your Google Drive account.
3. Activate the Workflow: Save and activate the workflow. Use the Form Trigger's production URL to access the AI ASMR Sound Generator web form.

## Requirements
- An active n8n instance.
- An ElevenLabs account for the API key.
- A Google Drive account.

## How to customize this workflow
- **Change Storage**: Replace the Google Drive node with another storage service node like Dropbox, AWS S3, or an FTP server to save your sound effects elsewhere.
- **Modify Sound Parameters**: In the "elevenlabs_api" node, you can adjust the JSON body to control the output (see the example below). Key parameters include:
  - loop (boolean, optional, default: false): Creates a sound effect that loops smoothly. Note: Only available for the 'eleven_text_to_sound_v2' model.
  - duration_seconds (number, optional, default: auto): Sets the sound's duration in seconds (from 0.5 to 30). If not set, the AI infers the optimal duration from the prompt.
  - prompt_influence (number, optional, default: 0.3): A value between 0 and 1 that controls how strictly the generation follows the prompt. Higher values result in less variation.
- **Customize Confirmation Page**: Edit the "prepare reponse" node to change the design and text of the final page shown to the user.

Phil | Inforeole | LinkedIn 🇫🇷 Contact us to automate your processes
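For reference, a request body combining the parameters above might look like the following; the text prompt is just an illustrative placeholder:

```json
{
  "text": "gentle rain tapping on a tin roof, soft and steady",
  "loop": false,
  "duration_seconds": 10,
  "prompt_influence": 0.3
}
```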
by Hassan
# AI-Powered Personalized Cold Email Icebreaker Generator

## Overview
This intelligent automation system transforms generic cold outreach into highly personalized email campaigns by automatically scraping prospect websites, analyzing their content with AI, and generating unique, conversational icebreakers that reference specific, non-obvious details about each business. The workflow integrates seamlessly with Instantly.ai to deliver campaigns that achieve significantly higher response rates than traditional cold email approaches.

The system processes leads from your n8n data table, validates contact information, scrapes multiple pages from each prospect's website, uses GPT-4.1 to synthesize insights, and crafts personalized openers that make recipients believe you've done deep research on their business, all without manual intervention.

## Key Benefits
- 🎯 **Hyper-Personalization at Scale**: Generate unique icebreakers for 30+ leads per execution that reference specific details about each prospect's business, creating the impression of manual research while automating 100% of the process.
- 💰 **Dramatically Higher Response Rates**: Personalized cold emails using this system typically achieve 4-5% response rates, directly translating to more booked meetings and closed deals.
- ⏱️ **Massive Time Savings**: What would take 10-15 minutes of manual research per prospect (website review, note-taking, icebreaker writing) now happens in 30-45 seconds automatically, freeing your team to focus on conversations instead of research.
- 🧠 **AI-Powered Intelligence**: A dual GPT model approach uses GPT-4.1-mini for efficient content summarization and GPT-4.1 for creative icebreaker generation, ensuring both cost efficiency and high-quality output with a distinctive "spartan" tone that converts.
- 🔄 **Built-In Error Handling**: Comprehensive retry logic (5 attempts with 5-second delays) and graceful failure management ensure the workflow continues processing even when websites are down or inaccessible, automatically removing problem records from your queue.
- 🗃️ **Clean Data Management**: Automatically removes processed leads from your database after successful campaign addition, preventing duplicate outreach and maintaining organized lead lists for future campaigns.
- 📊 **Batch Processing Control**: Processes leads in configurable batches (default 30) to manage API costs and rate limits while maintaining efficiency, with easy scaling for larger lists.
- 🔌 **Instantly.ai Integration**: Direct API integration pushes leads with custom variables into your campaigns automatically, supporting skip_if_in_campaign logic to prevent duplicate additions and maintain clean campaign lists.

## How It Works
### Stage 1: Lead Acquisition & Validation
The workflow begins with a manual trigger, allowing you to control when processing starts. It queries your n8n data table and retrieves up to 30 records filtered by Email_Status. The Limit node caps this at 30 items to control processing costs and API usage. Records then pass through the "Only Websites & Emails" filter, which uses strict validation to ensure both the organization_website_url and email fields exist and contain data, eliminating invalid records before expensive AI processing occurs.

### Stage 2: Intelligent Web Scraping
Valid leads enter the Loop Over Items batch processor, which handles them sequentially to manage API rate limits. For each lead, the workflow fetches their website homepage using the HTTP Request node with retry logic (5 attempts, 5-second waits) and "always output data" enabled to capture even failed requests. The If node checks response names for error indicators; if errors are detected, the problematic record is immediately deleted from the database via Delete row(s) to prevent future processing waste. Successfully scraped HTML content passes through the Markdown converter, which transforms it into clean markdown format that AI models can analyze more effectively.

### Stage 3: AI Content Analysis
The markdown content flows into the first AI node, "Summarize Website Page," which uses GPT-4.1-mini (cost-efficient for summarization tasks) with a specialized system prompt. The AI reads the scraped content and generates a comprehensive two-paragraph abstract, similar in detail to an academic paper abstract, focusing on what the business does, their projects, services, and unique differentiators. The output is structured JSON with an "abstract" field. Multiple page summaries (if the workflow is extended to scrape additional pages) are collected by the Aggregate node, which combines all abstracts into a single array for comprehensive analysis.

### Stage 4: Personalized Icebreaker Generation
The aggregated summaries, along with prospect profile data (name, headline, company), flow into the "Generate Multiline Icebreaker" node powered by GPT-4.1 (higher intelligence for creative writing). This node uses an advanced system prompt with specific rules: write in a spartan/laconic tone, avoid special characters and hyphens, use the format "Really Loved {thing}, especially how you're {doing/managing/handling} {otherThing}," reference small non-obvious details (never generic compliments like "Love your website!"), and shorten company names and locations naturally. The prompt includes a few-shot example teaching the AI the exact style and depth expected. Temperature is set to 0.5 for creative but consistent output.

### Stage 5: Campaign Deployment & Cleanup
The generated icebreaker is formatted into Instantly.ai's API structure and sent via HTTP POST to the "Sending ice breaker to instantly" node (a sketch of this payload appears below). The payload includes the lead's email, first name, last name, company name, the personalized icebreaker as the "personalization" field, and the website URL, and it supports custom_variables for additional personalization fields. The API call uses skip_if_in_campaign: true to prevent duplicate additions. After successful campaign addition, the Delete row(s)1 node removes the processed record from your data table, maintaining a clean queue. The Loop Over Items node then processes the next lead until all 30 are complete.
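Based on the field mappings described above (and listed again in Step 6 below), the POST body sent to Instantly.ai might look roughly like this; the exact endpoint and schema should be verified against Instantly's API documentation:

```javascript
// Hypothetical payload sketch for the "Sending ice breaker to instantly" node,
// following the field mappings described in Stage 5 (verify against Instantly.ai docs).
const payload = {
  api_key: 'YOUR_INSTANTLY_API_KEY',
  campaign_id: '00000000-0000-0000-0000-000000000000', // placeholder from the template
  skip_if_in_campaign: true,
  leads: [
    {
      email: $('Loop Over Items').item.json.email,
      first_name: $('Loop Over Items').item.json.first_name,
      last_name: $('Loop Over Items').item.json.last_name,
      company_name: $('Loop Over Items').item.json.Headline, // or a dedicated column
      website: $('Loop Over Items').item.json.organization_website_url,
      personalization: $json.message.content.icebreaker,
    },
  ],
};
return { json: payload };
```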
## Required Setup & Database Structure
**n8n Data Table Requirements:**
- Table Name: Configurable (default "Real estate")
- Required Columns:
  - id (unique identifier for each record)
  - first_name (prospect's first name)
  - last_name (prospect's last name)
  - email (valid email address)
  - organization_website_url (full URL with https://)
  - Headline (job title/company descriptor)
  - Email_Status (filter field for processing control)

**API Credentials:**
- OpenAI API Key (connected as "Sycorda" credential)
  - Access to the GPT-4.1-mini model
  - Access to the GPT-4.1 model
  - Sufficient credits for batch processing (approximately $0.01-0.03 per lead)
- Instantly.ai API Key
  - Campaign ID (replace the placeholder "00000000-0000-0000-0000-000000000000")
  - Active campaign with proper email accounts configured

**Environment Setup:**
- n8n instance with the @n8n/n8n-nodes-langchain package installed
- Stable internet connection for web scraping
- Adequate execution timeout limits (5+ minutes recommended for 30 leads)

## Business Use Cases
- **B2B Service Providers**: Agencies, consultancies, and professional services firms can personalize outreach by referencing a prospect's specific service offerings, client types, or operational approach to book discovery calls and consultations.
- **SaaS Companies**: Software vendors across any vertical can demonstrate product value through highly relevant cold outreach that references prospect pain points, tech stack, or business model visible on their websites.
- **Marketing & Creative Agencies**: Agencies offering design, content creation, SEO, or digital marketing services can personalize outreach by referencing prospects' current marketing approach, website quality, or brand positioning.
- **E-commerce & Retail**: Online retailers and D2C brands can reach potential wholesale partners, distributors, or B2B clients by mentioning their product lines, target markets, or unique value propositions.
- **Financial Services**: Fintech companies, accounting firms, and financial advisors can personalize cold outreach by referencing a prospect's business size, industry focus, or financial complexity to offer relevant solutions.
- **Recruitment & Staffing**: Agencies can reach potential clients by mentioning their hiring needs, company growth, team structure, or industry specialization visible on career pages and about sections.
- **Technology & Development**: Software development agencies, IT consultancies, and tech vendors can reference a prospect's current technology stack, digital transformation initiatives, or technical challenges to position relevant solutions.
- **Education & Training**: Corporate training providers, coaching services, and educational platforms can personalize outreach by mentioning company culture, team development focus, or learning initiatives referenced on websites.

## Revenue Potential
The same icebreaker approach used by leading cold email experts delivers 4-5% higher reply rates than generic outreach templates. By investing approximately $0.11-0.18 per personalized lead (AI processing + email sending costs), businesses achieve response rates of 4-5% versus the industry standard for non-personalized campaigns.

**Scalability**: Process 30 leads per run (or any batch size you want; just replace the number 30 with your own) in minutes with minimal manual oversight, allowing sales teams to maintain high personalization quality while reaching hundreds of prospects weekly. The automation handles the research-intensive work, letting your team focus on high-value conversations with engaged prospects.
## Difficulty Level & Build Time
- **Difficulty**: Intermediate
- **Estimated Build Time**: 2-3 hours for complete setup

**Technical Requirements:**
- Familiarity with n8n node configuration
- Basic understanding of API integrations
- JSON data structure knowledge
- OpenAI prompt engineering basics

**Setup Complexity Breakdown:**
- Data table creation and population: 30 minutes
- Workflow node configuration: 45 minutes
- OpenAI credential setup and testing: 20 minutes
- Instantly.ai API integration: 25 minutes
- Prompt optimization and testing: 45 minutes
- Error handling verification: 15 minutes

**Maintenance Requirements**: Minimal once configured. Monthly tasks include monitoring OpenAI costs, updating prompts based on performance data, and refilling the data table with new leads.

## Detailed Setup Steps
### Step 1: Create Your Data Table
1. In n8n, navigate to your project
2. Create a new data table with a name relevant to your industry
3. Add columns: id (auto), first_name (text), last_name (text), email (text), organization_website_url (text), Headline (text), Email_Status (text)
4. Import your lead list via CSV or manual entry
5. Set Email_Status to blank or a specific value you'll filter by

### Step 2: Configure OpenAI Credentials
1. Obtain an OpenAI API key from platform.openai.com
2. In n8n, go to Credentials → Add Credential → OpenAI
3. Name it "Sycorda" (or update all OpenAI nodes with your credential name)
4. Paste your API key and test the connection
5. Ensure your OpenAI account has access to GPT-4.1 models

### Step 3: Import and Configure the Workflow
1. Copy the provided workflow JSON
2. In n8n, create a new workflow and paste the JSON
3. Update the "Get row(s)" node:
   - Select your data table
   - Configure the Email_Status filter condition
   - Adjust the limit if needed (default 30)
4. Verify the "Loop Over Items" node has reset: false

### Step 4: Configure Website Scraping
1. In the "Request web page for URL1" node, verify:
   - The URL expression references the correct field: {{ $('Get row(s)').item.json.organization_website_url }}
   - Retry settings: 5 attempts, 5000ms wait
   - "Always Output Data" is enabled
2. Test with a single lead to verify HTML retrieval

### Step 5: Customize AI Prompts for Your Industry
1. In the "Summarize Website Page" node:
   - Review the system prompt
   - Adjust the abstract detail level if needed
   - Keep JSON output enabled
2. In the "Generate Multiline Icebreaker" node:
   - CRITICAL: Update the few-shot example with your target industry specifics
   - Customize the tone guidance to match your brand voice
   - Modify the icebreaker format template if desired
   - Adjust temperature (0.5 default; lower for consistency, higher for variety)
   - Update the profile format to match your industry (change "Property Manager or Real estate" references)

### Step 6: Set Up Instantly.ai Integration
1. Log into your Instantly.ai account
2. Navigate to Settings → API Key and copy your key
3. Create or select the campaign where leads will be added
4. Copy the Campaign ID from the URL (format: 00000000-0000-0000-0000-000000000000)
5. In the "Sending ice breaker to instantly" node:
   - Update the JSON body with your api_key
   - Replace the campaign_id placeholder
   - Adjust the skip_if_in_workspace and skip_if_in_campaign flags
   - Map the lead fields correctly:
     - email: {{ $('Loop Over Items').item.json.email }}
     - first_name: {{ $('Loop Over Items').item.json.first_name }}
     - last_name: {{ $('Loop Over Items').item.json.last_name }}
     - personalization: {{ $json.message.content.icebreaker }}
     - company_name: Extract from Headline or add to data table
     - website: {{ $('Loop Over Items').item.json.organization_website_url }}

### Step 7: Test and Validate
1. Start with 3-5 test leads in your data table
2. Execute the workflow manually
3. Verify each stage:
   - Data retrieval from the table
   - Website scraping success
   - AI summary generation
   - Icebreaker quality and format
   - Instantly.ai lead addition
   - Database cleanup
4. Check your Instantly.ai campaign to confirm leads appear with custom variables
5. Review error handling by including one lead with an invalid website

### Step 8: Scale and Monitor
1. Increase the batch size in the Limit node (30 → 50+ if needed)
2. Add more leads to your data table
3. Set up execution logs to monitor costs
4. Track response rates in Instantly.ai
5. A/B test prompt variations to optimize icebreaker performance
6. Consider scheduling automatic execution with n8n's Schedule Trigger

## Advanced Customization Options
- **Multi-Page Scraping**: Extend the workflow to scrape additional pages (about, services, portfolio) by adding multiple HTTP Request nodes after the first scrape, then modify the Aggregate node to combine all page summaries before icebreaker generation.
- **Industry-Specific Prompts**: Create separate workflow versions with customized prompts for different verticals or buyer personas to maximize relevance and response rates for each segment.
- **Dynamic Campaign Routing**: Add Switch or If nodes after icebreaker generation to route leads to different Instantly.ai campaigns based on company size, location, or detected business focus from the AI analysis.
- **Sentiment Analysis**: Insert an additional OpenAI node after summarization to analyze the prospect's website tone and adjust your icebreaker style accordingly (formal vs. casual, technical vs. conversational).
- **CRM Integration**: Replace or supplement the data table with direct CRM integration (HubSpot, Salesforce, Pipedrive) to pull leads and push results back, creating a fully automated lead enrichment pipeline.
- **Competitor Mention Detection**: Add a specialized prompt to the summarization phase that identifies whether prospects mention competitors or specific pain points, then use this intelligence in the icebreaker for even higher relevance.
- **LinkedIn Profile Enrichment**: Add Clay or Clearbit integration before the workflow to enrich email lists with LinkedIn profile data, then reference recent posts or career changes in the icebreaker alongside website insights.
- **A/B Testing Framework**: Duplicate the "Generate Multiline Icebreaker" node with different prompt variations and use a randomizer to split leads between versions, then track performance in Instantly.ai to identify the highest-converting approach.
- **Webhook Trigger**: Replace the manual trigger with a webhook that fires when new leads are added to your data table or CRM, creating a fully automated lead-to-campaign pipeline that requires zero manual intervention.
- **Cost Optimization**: Replace GPT-4.1 models with GPT-4o-mini or Claude models for cost savings if response quality remains acceptable, or implement a tiered approach where only high-value leads get premium model processing.
by Adrian
## 📘 Overview
This workflow automates end-to-end social media publishing powered by the Late API. It generates text content with Google Gemini, creates branded visuals with Kie.ai, uploads media to Late, and publishes across multiple platforms (Facebook, Instagram, LinkedIn, TikTok).

It's a production-ready automation for marketing teams who want to save hours of work by letting AI handle both copywriting and design, all inside n8n.

## ⚙️ How it works
1. Generate text content → Google Gemini produces platform-optimized copy (tone & length adapted to each network).
2. Generate visuals → Kie.ai Seedream v4 creates branded 1080x1080 images.
3. Upload to Late → media is stored using Late's upload API (small & large file handling).
4. Publish → posts are created via the Late API on enabled platforms with the correct { platform, accountId } mapping (see the sketch below).
5. Notify → success logs are sent via Slack, Discord, Email, and Webhook.

## 🛠 Setup Steps
**Time to set up:** ~10-15 minutes

**Steps:**
1. Add your API keys in n8n Credentials:
   - Google Gemini API (PaLM)
   - Kie.ai (Seedream)
   - Late API
2. Insert your Account IDs (Facebook, Instagram, LinkedIn, TikTok) into the Default Settings node.
3. Choose which platforms to enable (ENABLE_FACEBOOK, ENABLE_INSTAGRAM, etc.).
4. Set your Business Type and Content Topic (e.g., "a tech company" / "new product launch").
5. Execute the workflow.

## 📝 Notes
- **Sticky Notes** are included in the workflow to guide each section: Overview, Prerequisites, Default Settings, Content Generation, Image Generation, Media Upload, Publishing Logic, Notifications, Error Handling.
- All API keys are handled via Credentials (no hardcoding).
- Fallback content is included in case Gemini fails to parse.
- Large image files (>4MB) are handled with Late's multipart upload flow.

## 💸 Cost per Flow (Estimated)
- **Late API**: $0.00 within Free/Unlimited plans, or ≈ $0.11/post on the Build plan ($13/120 posts).
- **Google Gemini**: ~$0.0001-$0.0004 per post (≈200 tokens in/out).
- **Kie.ai (Seedream)**: ≈ $0.01-$0.02 per generated image.

➡️ Total: ~$0.01-$0.12 per post, depending mainly on your Late plan & Kie.ai credits.

## 🎯 Use cases
- Marketing teams automating cross-platform campaigns.
- Solo founders posting content daily without design/copy effort.
- Agencies scaling social media management with AI + automation.

## 📢 Credits
Built by Adrian (RoboMarketing) for the n8n Arena Challenge – September 2025.
Powered by: Gemini API, Kie.ai Seedream, Late API
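As an illustration of the publishing step, the per-platform payload assembly could look roughly like this in a Code node. Only the { platform, accountId } mapping and the ENABLE_* flags come from the description; the remaining field names are assumptions to be checked against Late's API docs:

```javascript
// Hypothetical sketch of the publishing payload assembly (step 4).
// Setting keys and payload fields beyond { platform, accountId } are assumptions.
const settings = $('Default Settings').item.json;
const platforms = [
  { platform: 'facebook', accountId: settings.FACEBOOK_ACCOUNT_ID, enabled: settings.ENABLE_FACEBOOK },
  { platform: 'instagram', accountId: settings.INSTAGRAM_ACCOUNT_ID, enabled: settings.ENABLE_INSTAGRAM },
  { platform: 'linkedin', accountId: settings.LINKEDIN_ACCOUNT_ID, enabled: settings.ENABLE_LINKEDIN },
  { platform: 'tiktok', accountId: settings.TIKTOK_ACCOUNT_ID, enabled: settings.ENABLE_TIKTOK },
];

return platforms
  .filter((p) => p.enabled)
  .map(({ platform, accountId }) => ({
    json: {
      platform,
      accountId,
      content: $json.copy[platform], // Gemini copy, adapted per network (assumed shape)
      mediaId: $json.lateMediaId,    // ID returned by Late's upload API (assumed field)
    },
  }));
```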
by vinci-king-01
# Carbon Footprint Tracker with ScrapeGraphAI Analysis and ESG Reporting Automation

## 🎯 Target Audience
- Sustainability managers and ESG officers
- Environmental compliance teams
- Corporate social responsibility (CSR) managers
- Energy and facilities managers
- Supply chain sustainability coordinators
- Environmental consultants
- Green building certification teams
- Climate action plan coordinators
- Regulatory compliance officers
- Corporate reporting and disclosure teams

## 🚀 Problem Statement
Manual carbon footprint calculation and ESG reporting is complex, time-consuming, and often inaccurate due to fragmented data sources and outdated emission factors. This template solves the challenge of automatically collecting environmental data, calculating accurate carbon footprints, identifying reduction opportunities, and generating comprehensive ESG reports using AI-powered data collection and automated sustainability workflows.

## 🔧 How it Works
This workflow automatically collects energy and transportation data using ScrapeGraphAI, calculates comprehensive carbon footprints across all three scopes, identifies reduction opportunities, and generates automated ESG reports for sustainability compliance and reporting.

### Key Components
1. **Schedule Trigger** - Runs automatically every day at 8:00 AM to collect environmental data
2. **Energy Data Scraper** - Uses ScrapeGraphAI to extract energy consumption data and emission factors
3. **Transport Data Scraper** - Collects transportation emission factors and fuel efficiency data
4. **Footprint Calculator** - Calculates the comprehensive carbon footprint across Scope 1, 2, and 3 emissions
5. **Reduction Opportunity Finder** - Identifies cost-effective carbon reduction opportunities
6. **Sustainability Dashboard** - Creates comprehensive sustainability metrics and KPIs
7. **ESG Report Generator** - Automatically generates ESG compliance reports
8. **Create Reports Folder** - Organizes reports in Google Drive
9. **Save Report to Drive** - Stores final reports for stakeholder access

## 📊 Carbon Footprint Analysis Specifications
The template calculates and tracks the following emission categories (a sketch of these calculations appears at the end of this listing):

| Emission Scope | Category | Data Sources | Calculation Method | Example Output |
|----------------|----------|--------------|-------------------|----------------|
| Scope 1 (Direct) | Natural Gas | EPA emission factors | Consumption × 11.7 lbs CO2/therm | 23,400 lbs CO2 |
| Scope 1 (Direct) | Fleet Fuel | EPA fuel economy data | Miles ÷ MPG × 19.6 lbs CO2/gallon | 11,574 lbs CO2 |
| Scope 2 (Electricity) | Grid Electricity | EPA emission factors | kWh × 0.92 lbs CO2/kWh | 46,000 lbs CO2 |
| Scope 3 (Indirect) | Employee Commute | EPA transportation data | Miles × 0.77 lbs CO2/mile | 19,250 lbs CO2 |
| Scope 3 (Indirect) | Air Travel | EPA aviation factors | Miles × 0.53 lbs CO2/mile | 26,500 lbs CO2 |
| Scope 3 (Indirect) | Supply Chain | Estimated factors | Electricity × 0.1 multiplier | 4,600 lbs CO2 |

## 🛠️ Setup Instructions
**Estimated setup time: 25-30 minutes**

### Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Drive API access for report storage
- Organizational energy and transportation data
- ESG reporting requirements and standards

### Step-by-Step Configuration
#### 1. Install Community Nodes
```
# Install required community nodes
npm install n8n-nodes-scrapegraphai
```

#### 2. Configure ScrapeGraphAI Credentials
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

#### 3. Set up Schedule Trigger
- Configure the daily schedule (default: 8:00 AM UTC)
- Adjust timezone to match your business hours
- Set the appropriate frequency for your reporting needs

#### 4. Configure Data Sources
- Update the Energy Data Scraper with your energy consumption sources
- Configure the Transport Data Scraper with your transportation data
- Set up organizational data inputs (employees, consumption, etc.)
- Customize emission factors for your region and industry

#### 5. Customize Carbon Calculations
- Update the Footprint Calculator with your organizational data
- Configure scope boundaries and calculation methodologies
- Set up industry-specific emission factors
- Adjust for renewable energy and offset programs

#### 6. Configure Reduction Analysis
- Update the Reduction Opportunity Finder with your investment criteria
- Set up cost-benefit analysis parameters
- Configure priority scoring algorithms
- Define implementation timelines and effort levels

#### 7. Set up Report Generation
- Configure Google Drive integration for report storage
- Set up ESG report templates and formats
- Define stakeholder access and permissions
- Test report generation and delivery

#### 8. Test and Validate
- Run the workflow manually with test data
- Verify all calculation steps complete successfully
- Check data accuracy and emission factor validity
- Validate ESG report compliance and formatting

## 🔄 Workflow Customization Options
**Modify Data Collection**
- Add more energy data sources (renewables, waste, etc.)
- Include additional transportation modes (rail, shipping, etc.)
- Integrate with building management systems
- Add real-time monitoring and IoT data sources

**Extend Calculation Framework**
- Add more Scope 3 categories (waste, water, etc.)
- Implement industry-specific calculation methodologies
- Include carbon offset and credit tracking
- Add lifecycle assessment (LCA) capabilities

**Customize Reduction Analysis**
- Add more sophisticated ROI calculations
- Implement scenario modeling and forecasting
- Include regulatory compliance tracking
- Add stakeholder engagement metrics

**Output Customization**
- Add integration with sustainability reporting platforms
- Implement automated stakeholder notifications
- Create executive dashboards and visualizations
- Add compliance monitoring and alert systems

## 📈 Use Cases
- **ESG Compliance Reporting**: Automate sustainability disclosure requirements
- **Carbon Reduction Planning**: Identify and prioritize reduction opportunities
- **Regulatory Compliance**: Meet environmental reporting mandates
- **Stakeholder Communication**: Generate transparent sustainability reports
- **Investment Due Diligence**: Provide ESG data for investors and lenders
- **Supply Chain Sustainability**: Track and report on Scope 3 emissions

## 🚨 Important Notes
- Respect data source terms of service and rate limits
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update emission factors for accuracy
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Ensure compliance with local environmental reporting regulations
- Validate calculations against industry standards and methodologies
- Maintain proper documentation for audit and verification purposes

## 🔧 Troubleshooting
**Common Issues:**
- ScrapeGraphAI connection errors: Verify API key and account status
- Data source access issues: Check website accessibility and rate limits
- Calculation errors: Verify emission factors and organizational data
- Report generation failures: Check Google Drive permissions and quotas
- Schedule trigger failures: Check timezone and cron expression
- Data accuracy issues: Validate against manual calculations and industry benchmarks

**Support Resources:**
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- EPA emission factor databases and methodologies
- GHG Protocol standards and calculation guidelines
- ESG reporting frameworks and compliance requirements
- Sustainability reporting best practices and standards
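Using the emission factors from the table above, the Footprint Calculator step could be sketched as follows in a Code node. The input field names are assumptions about how the organizational data arrives; the factors themselves come from the table:

```javascript
// Hypothetical Footprint Calculator sketch using the table's emission factors.
// Input field names are assumptions about the organizational data payload.
const d = $json; // e.g., { gasTherms, fleetMiles, fleetMpg, electricityKwh, commuteMiles, airMiles }

const scope1 =
  d.gasTherms * 11.7 +                      // natural gas: 11.7 lbs CO2/therm
  (d.fleetMiles / d.fleetMpg) * 19.6;       // fleet fuel: 19.6 lbs CO2/gallon

const scope2 = d.electricityKwh * 0.92;     // grid electricity: 0.92 lbs CO2/kWh

const scope3 =
  d.commuteMiles * 0.77 +                   // employee commute: 0.77 lbs CO2/mile
  d.airMiles * 0.53 +                       // air travel: 0.53 lbs CO2/mile
  scope2 * 0.1;                             // supply chain: 0.1 × electricity estimate

return {
  json: {
    scope1_lbs: scope1,
    scope2_lbs: scope2,
    scope3_lbs: scope3,
    total_lbs: scope1 + scope2 + scope3,
    total_metric_tons: (scope1 + scope2 + scope3) / 2204.62, // lbs → metric tons
  },
};
```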
by CompanyEnrich
This workflow automates enriching company profiles by taking a domain name from a Google Sheet, fetching firmographic data via the CompanyEnrich API, and updating the sheet with the results.

## Who is this for?
- **Sales Teams**: To enrich lead lists with better data
- **Marketing Professionals**: To segment potential accounts based on industry or location
- **Recruiters**: To gather background information on target companies
- **Data Analysts**: To rapidly clean and populate missing firmographic datasets

## What it does
1. The workflow pulls rows from a specified Google Sheet.
2. It checks a "Status" column to ensure it only processes rows that haven't been completed yet (skips rows marked "Done").
3. Using the company Domain column, it queries the CompanyEnrich API.
4. A custom code node flattens the JSON response and automatically matches the API data to the columns currently existing in your Google Sheet (see the sketch below).
5. It writes the enriched data back to the row and marks the status as "Done".

## Requirements
- A Google account with access to Sheets.
- An API key from CompanyEnrich.

## How to set up
1. Prepare your Google Sheet: Create a sheet with the following mandatory headers: Domain, Status, and Last Updated.
2. Add Data Columns: Add headers for the data you want to fetch (e.g., revenue, employees, location_city_name, socials_linkedin_url).
3. Configure Credentials: Connect your Google Sheets account in the "Get row(s)" and "Update row" nodes.
4. Select the Sheet: Update the Document and Sheet Name in both Google Sheets nodes to point to your specific file.
5. Add API Key: Open the "Fetch Company Data" node and replace the placeholder in the Authorization header with your actual API key (format: Bearer YOUR_API_KEY).

## How to customize
- **Fetch Specific Data**: Because of the dynamic JavaScript logic, you do not need to edit the workflow to get different data. Simply add a new column header to your Google Sheet that matches the API field name (e.g., adding a column named industries will automatically fetch and fill that data).
- **Adjust Throttling**: If you have a large dataset, you may need to adjust the "SplitInBatches" node to process fewer items at once to avoid API rate limits.
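A minimal sketch of the flattening-and-matching logic the code node might use is shown below. The underscore-joined key convention (e.g., location_city_name) is inferred from the example column names above and may differ from the template's actual implementation:

```javascript
// Hypothetical sketch of the flatten-and-match step; the underscore-joined
// key convention is inferred from the example headers, not confirmed.
function flatten(obj, prefix = '', out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}_${key}` : key;
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      flatten(value, path, out);
    } else {
      out[path] = value;
    }
  }
  return out;
}

const flat = flatten($json);                      // CompanyEnrich API response
const headers = $('Get row(s)').item.json;        // existing sheet columns for this row
const row = {};
for (const column of Object.keys(headers)) {
  if (column in flat) row[column] = flat[column]; // fill only columns that exist
}
row.Status = 'Done';
row['Last Updated'] = new Date().toISOString();
return { json: row };
```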