by Shahzaib Anwar
## Overview

This workflow automatically processes incoming Shopify/Gmail leads and pushes them into HubSpot as both Contacts and Deals. It helps sales and marketing teams capture leads instantly, enrich CRM data, and avoid missed opportunities.

## How it works

1. **Trigger:** Watches for new emails in Gmail.
2. **Extract Data:** Parses the email body (Name, Email, City, Phone, Message, Product URL/Title).
3. **Condition:** Checks whether the sender is Shopify before processing.
4. **HubSpot:** Creates/updates a Contact with customer details, then creates a Deal associated with that Contact.

## Benefits

- Automates lead capture into your CRM
- Eliminates manual copy-paste from Gmail
- Real-time sync between Gmail and HubSpot
- Improves sales follow-up speed and accuracy

## Setup Steps

1. Import this workflow into your n8n instance.
2. Connect your Gmail and HubSpot credentials.
3. Replace the HubSpot Deal Stage ID with your own pipeline stage.
4. (Optional) Adjust the Code Node regex if your email format differs.
5. Activate the workflow and test with a sample lead email.

## Example Email Format

- Name: John Doe
- Email: john@example.com
- City: London
- Phone: +44 7000 000000
- Body: Interested in product
- Product Url: https://example.com/product
- Product Title: Sample Product

## Sticky Notes

- **Gmail Trigger** - Watches for new emails in Gmail. Polls every minute and passes email data into the flow.
- **Get a Message** - Fetches the full Gmail message content (body + metadata) for parsing.
- **Extract From Email** - Extracts the sender's email address from Gmail to identify the source.
- **If Sender is Shopify** - Condition node that ensures only Shopify-originated emails/leads are processed.
- **Code Node (Regex Parser)** - Parses the email body using regex to extract Name, Email, City, Phone, Message, Product URL, and Title.
- **Edit Fields (Set Node)** - Cleans and structures the extracted fields into proper JSON before sending to HubSpot.
- **HubSpot - Create/Update Contact** - Creates or updates a HubSpot Contact with the extracted lead details.
- **HubSpot - Create Deal** - Creates a HubSpot Deal linked to the Contact, including campaign/product information.
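The Code Node's actual regex isn't included in this description. As a rough sketch of what a parser for the labeled `Key: value` email format shown above might look like (the helper name and exact patterns are illustrative, not the template's code):

```javascript
// Hypothetical sketch of the Code Node regex parser for the sample email format.
// Field labels mirror the example email above; adjust the patterns if yours differ.
function parseLeadEmail(body) {
  const fields = {
    name: /Name:\s*(.+)/i,
    email: /Email:\s*(\S+@\S+)/i,
    city: /City:\s*(.+)/i,
    phone: /Phone:\s*([+\d][\d\s-]+)/i,
    message: /Body:\s*(.+)/i,
    productUrl: /Product Url:\s*(\S+)/i,
    productTitle: /Product Title:\s*(.+)/i,
  };
  const lead = {};
  for (const [key, pattern] of Object.entries(fields)) {
    const match = body.match(pattern);
    lead[key] = match ? match[1].trim() : null; // null marks a missing field
  }
  return lead;
}
```

Missing fields come back as `null`, which makes it easy to add an IF node downstream that skips incomplete leads.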
by Khairul Muhtadin
Decodo Amazon Product Recommender delivers instant, AI-powered shopping recommendations directly through Telegram. Send any product name and receive an Amazon product analysis featuring price comparisons, ratings, sales data, and categorized recommendations (budget, premium, best value) in under 40 seconds, eliminating hours of manual research.

## Why Use This Workflow?

- **Time Savings:** Reduce product research from 45+ minutes to under 30 seconds
- **Decision Quality:** Compare 20+ products automatically with AI-curated recommendations
- **Zero Manual Work:** Complete automation from message input to formatted recommendations

## Ideal For

- **E-commerce Entrepreneurs:** Quickly research competitor products, pricing strategies, and market trends for inventory decisions
- **Smart Shoppers & Deal Hunters:** Get instant product comparisons with sales volume data and discount tracking before purchasing
- **Product Managers & Researchers:** Analyze Amazon marketplace positioning, customer sentiment, and pricing ranges for competitive intelligence

## How It Works

1. **Trigger:** User sends a product name via Telegram (e.g., "iPhone 15 Pro Max case")
2. **AI Validation:** Gemini 2.5 Flash extracts core product keywords and validates input authenticity
3. **Data Collection:** The Decodo API scrapes Amazon search results, extracting prices, ratings, reviews, sales volume, and product URLs
4. **Processing:** A JavaScript node cleans data, removes duplicates, calculates value scores, and categorizes products (top picks, budget, premium, best value, most popular)
5. **Intelligence Layer:** AI generates personalized recommendations with Telegram-optimized markdown formatting, shortened product names, and clean Amazon URLs
6. **Output & Delivery:** Formatted recommendations are sent to the user with categorized options and direct purchase links
7. **Error Handling:** Admin notifications via a separate Telegram channel for workflow monitoring

## Setup Guide

### Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Decodo Account | Essential | Amazon product data scraping |
| Telegram Bot Token | Essential | Chat interface for user interactions |
| Google Gemini API | Essential | AI-powered product validation and recommendations |
| Telegram Account | Optional | Admin error notifications |

### Installation Steps

1. Import the JSON file into your n8n instance
2. Configure credentials:
   - **Decodo API:** Sign up at decodo.com → Dashboard → Scraping APIs → Web Advanced → copy the BASIC AUTH TOKEN
   - **Telegram Bot:** Message @BotFather on Telegram → /newbot → copy the HTTP API token (format: 123456789:ABCdefGHI...)
   - **Google Gemini:** Obtain an API key from Google AI Studio for the Gemini 2.5 Flash model
3. Update environment-specific values:
   - Replace YOUR-CHAT-ID in the "Notify Admin" node with your Telegram chat ID for error notifications
   - Verify Telegram webhook IDs are properly configured
4. Customize settings:
   - Adjust the AI prompt in the "Generate Recommendations" node for different output formats
   - Set character limits (default: 2500) for Telegram message length
5. Test execution:
   - Send a test message to your Telegram bot: "iPhone 15 Pro"
   - Verify processing status messages appear
   - Confirm recommendations arrive with properly formatted links

## Customization Options

**Basic Adjustments:**

- **Character Limit:** Modify 2500 in the AI prompt to adjust response length (Telegram max: 4096)

**Advanced Enhancements:**

- **Multi-language Support:** Add language detection and translation nodes for international users
- **Price Tracking:** Integrate Google Sheets to log historical prices and trigger alerts on drops
- **Image Support:** Enable Telegram photo messages with product images from scraping results

## Troubleshooting

| Problem | Cause | Solution |
|---------|-------|----------|
| "No product detected" for valid inputs | AI validation too strict or ambiguous query | Add specific product details (model number, brand) to the user input |
| Empty recommendations returned | Decodo API rate limit or Amazon blocking | Wait 60 seconds between requests; verify Decodo account status |
| Telegram message formatting broken | Special characters in product names | Ensure Telegram markdown mode is set to "Markdown" (legacy), not "MarkdownV2" |

## Use Case Examples

**Scenario 1: E-commerce Store Owner**

- **Challenge:** Needs to quickly assess competitor pricing and product positioning for new inventory decisions without spending hours browsing Amazon
- **Solution:** Sends "wireless earbuds" to the bot, receives a categorized analysis of 20+ products with price ranges ($15-$250), top sellers, and discount opportunities
- **Result:** Identifies a $35-$50 price gap in the market, sources a comparable product, achieves a 40% profit margin

**Scenario 2: Smart Shopping Enthusiast**

- **Challenge:** Wants to buy a laptop backpack but is overwhelmed by 200+ Amazon options with varying prices and unclear value propositions
- **Solution:** Messages "laptop backpack" to the bot, gets AI recommendations sorted by budget ($30), premium ($50+), best value (highest discount + good ratings), and most popular (by sales volume)
- **Result:** Purchases the "Best Value" recommendation at a 35% discount, saving $18 and 45 minutes of research time

Created by: Khaisa Studio
Category: AI | Productivity | E-commerce
Tags: amazon, telegram, ai, product-research, shopping, automation, gemini
Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
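The description mentions that the processing node "removes duplicates, calculates value scores, and categorizes products," but doesn't show the formula. A plausible minimal sketch, where the dedupe key, weights, and field names are all assumptions rather than the template's actual code:

```javascript
// Illustrative sketch of the processing step: dedupe, score, categorize.
// The real workflow's weighting isn't documented; these weights are assumptions.
function categorizeProducts(products) {
  // Remove duplicate listings, keyed by product URL
  const unique = [...new Map(products.map(p => [p.url, p])).values()];
  // Value score: favor high rating, many reviews, and deep discounts
  for (const p of unique) {
    const discount = p.listPrice ? (p.listPrice - p.price) / p.listPrice : 0;
    p.valueScore = p.rating * 10 + Math.log10(1 + p.reviews) * 5 + discount * 50;
  }
  const byPrice = [...unique].sort((a, b) => a.price - b.price);
  const byScore = [...unique].sort((a, b) => b.valueScore - a.valueScore);
  return {
    budget: byPrice.slice(0, 3),                // three cheapest
    premium: byPrice.slice(-3).reverse(),       // three most expensive
    bestValue: byScore.slice(0, 3),             // highest value score
  };
}
```

Using a `Map` keyed by URL is a cheap way to deduplicate scraped results, since Amazon search pages often repeat sponsored listings.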
by vinci-king-01
## How it works

This workflow automatically analyzes website visitors in real-time, enriches their data with company intelligence, and provides lead scoring and sales alerts.

### Key Steps

1. **Webhook Trigger** - Receives visitor data from your website tracking system.
2. **AI-Powered Company Intelligence** - Uses ScrapeGraphAI to extract comprehensive company information from visitor domains.
3. **Visitor Enrichment** - Combines visitor behavior data with company intelligence to create detailed visitor profiles.
4. **Lead Scoring** - Automatically scores leads based on company size, industry, engagement, and intent signals.
5. **CRM Integration** - Updates your CRM with enriched visitor data and lead scores.
6. **Sales Alerts** - Sends real-time notifications to your sales team for high-priority leads.

## Set up steps

Setup time: 10-15 minutes

1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key for company intelligence gathering.
2. **Set up HubSpot connection** - Connect your HubSpot CRM to automatically update contact records.
3. **Configure Slack integration** - Set up your Slack workspace and specify the sales alert channel.
4. **Customize lead scoring criteria** - Adjust the scoring algorithm to match your target customer profile.
5. **Set up website tracking** - Configure your website to send visitor data to the webhook endpoint.
6. **Test the workflow** - Verify all integrations are working correctly with a test visitor.

## Key Features

- **Real-time visitor analysis** with company intelligence enrichment
- **Automated lead scoring** based on multiple factors (company size, industry, engagement)
- **Intent signal detection** (pricing interest, demo requests, contact intent)
- **Priority-based sales alerts** with recommended actions
- **CRM integration** for seamless lead management
- **Deal size estimation** based on company characteristics
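The scoring algorithm lives in the workflow itself and should be tuned to your target profile. As a hedged illustration of what scoring on "company size, industry, engagement, and intent signals" could look like (all thresholds, field names, and weights here are invented for the example):

```javascript
// Hypothetical lead-scoring sketch; adapt weights to your ideal customer profile.
function scoreLead(visitor) {
  let score = 0;
  if (visitor.companySize >= 200) score += 30;       // large company (assumed cutoff)
  else if (visitor.companySize >= 50) score += 20;
  if (['saas', 'fintech'].includes(visitor.industry)) score += 20; // target industries (assumed)
  score += Math.min(visitor.pagesViewed * 2, 20);    // engagement, capped
  if (visitor.visitedPricing) score += 20;           // intent signal
  if (visitor.requestedDemo) score += 30;            // strongest intent signal
  const priority = score >= 70 ? 'high' : score >= 40 ? 'medium' : 'low';
  return { score, priority };
}
```

The `priority` field is what would drive the Slack alert routing: only `high` leads need an immediate ping, while `medium` leads can go to a digest.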
by Ranjan Dailata
## Who this is for

This workflow is designed for:

- Automation engineers building AI-powered data pipelines
- Product managers & analysts needing structured insights from web pages
- Researchers & content teams extracting summaries from documentation or articles
- HR, compliance, and knowledge teams converting unstructured web content into structured records
- n8n self-hosted users leveraging advanced scraping and LLM enrichment

It is ideal for anyone who wants to transform any public URL into structured data plus clean summaries automatically.

## What problem this workflow solves

Web content is often unstructured, verbose, and inconsistent, making it difficult to:

- Extract structured fields reliably
- Generate consistent summaries
- Reuse data across spreadsheets, dashboards, or databases
- Eliminate manual copy-paste and interpretation

This workflow solves the problem of turning arbitrary web pages into machine-readable JSON and human-readable summaries, without custom scrapers or manual parsing logic.

## What this workflow does

The workflow integrates Decodo, Google Gemini, and Google Sheets to perform automated extraction of structured data. Here's how it works, step by step:

1. **Input Setup** - The workflow begins when the user executes it manually or passes a valid URL. The input includes `url`.
2. **Extraction with Decodo** - Accepts any valid URL as input and scrapes the page content using Decodo. Google Gemini then extracts structured data in JSON format and generates a concise, factual summary.
3. **JSON Parsing & Merging** - The Code Node cleans and parses the AI-generated JSON safely for reliable downstream use; the Merge Node combines the structured data with the AI-generated summary.
4. **Data Storage in Google Sheets** - The Google Sheets Node appends or updates the record, storing the structured JSON and summary in a connected spreadsheet for reporting or downstream automation.
5. **End Output** - Unified, machine-readable JSON plus an executive-level summary suitable for data analysis or downstream automation.

## Setup Instructions

### Prerequisites

- **n8n account** with workflow editor access
- **Decodo API credentials** - register, log in, and obtain the Basic Authentication Token via the Decodo Dashboard
- **Google Gemini (PaLM) API access**
- **Google Sheets OAuth credentials**

### Setup Steps

1. Import the workflow into your n8n instance.
2. Configure credentials:
   - Add your Decodo API credentials in the Decodo node.
   - Connect your Google Gemini (PaLM) credentials for both AI nodes.
   - Authenticate your Google Sheets account.
3. Edit the input node: in the **Set the Input Fields** node, replace the default URL with your desired profile or dynamic data source.
4. Run the workflow: trigger it manually or via webhook integration for automation, then verify that the structured data and summary are written to the linked Google Sheet.

## How to customize this workflow to your needs

You can easily extend or adapt this workflow:

- **Modify structured output:** Change the Gemini extraction prompt to match your own JSON schema; add required fields such as authors, dates, entities, or metadata.
- **Improve summarization:** Adjust summary length or tone (technical, executive, simplified); add multi-language summarization using Gemini.
- **Change output destination:** Replace Google Sheets with databases (Postgres, MySQL), Notion, Slack/email, or file storage (JSON, CSV).
- **Add validation or filtering:** Insert IF nodes to reject incomplete data, detect errors or hallucinated output, and trigger alerts for malformed JSON.
- **Scale the workflow:** Replace the manual trigger with a webhook, a scheduled trigger, or batch URL processing.

## Summary

This workflow provides a powerful, generic solution for converting unstructured web pages into structured, AI-enriched datasets.
By combining Decodo for scraping, Google Gemini for intelligence, and Google Sheets for persistence, it enables repeatable, scalable, and production-ready data extraction without custom scrapers or brittle parsing logic.
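The description says the Code Node "cleans and parses AI-generated JSON safely" but doesn't show how. A minimal sketch of that kind of cleanup, assuming the common failure modes where the LLM wraps JSON in markdown fences or adds prose around it (the helper name is illustrative):

```javascript
// Sketch of a "clean and parse AI JSON" Code Node. LLMs sometimes wrap JSON in
// ```json fences or surround it with commentary, so strip those before parsing.
function parseAiJson(raw) {
  let text = raw.trim();
  // Drop a ```json ... ``` fence if present
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (fenced) text = fenced[1];
  // Fall back to the outermost {...} span if there is leading/trailing prose
  const start = text.indexOf('{');
  const end = text.lastIndexOf('}');
  if (start === -1 || end === -1) return { ok: false, error: 'no JSON object found' };
  try {
    return { ok: true, data: JSON.parse(text.slice(start, end + 1)) };
  } catch (e) {
    return { ok: false, error: e.message };
  }
}
```

Returning `{ ok: false, error }` instead of throwing lets a downstream IF node route malformed output to an alert, as suggested in the customization section.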
by Trung Tran
Automated AWS IAM Compliance Workflow for MFA Enforcement and Access Key Deactivation

> This workflow leverages AWS IAM APIs and n8n automation to ensure strict security compliance by continuously monitoring IAM users for MFA (Multi-Factor Authentication) enforcement.

## Who's it for

This workflow is designed for DevOps, Security, or Cloud Engineers responsible for maintaining IAM security compliance in AWS accounts. It's ideal for teams who want to enforce MFA usage and automatically disable access for non-compliant IAM users.

## How it works / What it does

This automated workflow performs a daily check to detect IAM users without an MFA device and deactivates their access keys.

Step by step:

1. **Daily scheduler:** Triggers the workflow once a day.
2. **Get many users:** Retrieves a list of all IAM users in the account.
3. **Get IAM User MFA Devices:** Calls the AWS API to get MFA device info for each user.
4. **Filter out IAM users with MFA:** Keeps only users without any MFA device.
5. **Send warning message(s):** Sends Slack alerts for users who do not have MFA enabled.
6. **Get User Access Key(s):** Fetches access keys for each non-MFA user.
7. **Parse the list of user access key(s):** Extracts and flattens key information such as AccessKeyId, Status, and UserName.
8. **Filter out inactive keys:** Keeps only active access keys for further action.
9. **Deactivate Access Key(s):** Calls the AWS API to deactivate each active key belonging to a non-MFA user.

## How to set up

1. Configure AWS credentials in your environment (IAM role or AWS access key with the required permissions).
2. Connect Slack via the Slack node for alerting (set the channel and credentials).
3. Set the scheduler to your preferred frequency (e.g., daily at 9 AM).
4. Adjust the Slack message template or filtering conditions as needed.

## Requirements

- IAM user or role credentials with the following AWS IAM permissions:
  - iam:ListUsers
  - iam:ListMFADevices
  - iam:ListAccessKeys
  - iam:UpdateAccessKey
- Slack credentials (bot token with the chat:write permission).
- n8n environment with:
  - Slack integration
  - AWS credentials (set via environment or the credentials manager)

## How to customize the workflow

- **Alert threshold:** Instead of immediate deactivation, delay the action (e.g., alert first, wait 24 hours, then disable).
- **Change notification channel:** Modify the Slack node to send alerts to a different channel, or add email integration.
- **Whitelist exceptions:** Add a Set or IF node to exclude specific usernames (e.g., service accounts).
- **Add audit logging:** Use Google Sheets, Airtable, or a database to log which users were flagged or had access disabled.
- **Extend access checks:** Include a console password check (GetLoginProfile) if needed.
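The "parse" and "filter" steps above can be sketched roughly as follows. This is a simplified stand-in for the workflow's nodes, not its actual code; the input shape assumes each item carries the user's MFA device list and the AccessKeyMetadata fields (`UserName`, `AccessKeyId`, `Status`) that `iam:ListAccessKeys` returns:

```javascript
// Rough stand-in for "filter out IAM users with MFA", "parse the list of user
// access key(s)", and "filter out inactive keys", collapsed into one function.
function activeKeysForNonMfaUsers(users) {
  return users
    .filter(u => u.mfaDevices.length === 0)       // keep only users with no MFA device
    .flatMap(u => u.accessKeys)                   // flatten each user's key metadata
    .filter(k => k.Status === 'Active')           // only active keys need deactivating
    .map(k => ({ UserName: k.UserName, AccessKeyId: k.AccessKeyId }));
}
```

Each returned object is exactly what a subsequent `iam:UpdateAccessKey` call needs (plus `Status: 'Inactive'`) to disable the key.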
by Abdullah Alshiekh
## What Problem Does It Solve?

We've all been there: you want to check whether a product is cheaper on Amazon or Jumia, but opening a dozen tabs is a pain. Building a bot to do this usually fails because big e-commerce sites love to block scrapers with CAPTCHAs.

This workflow fixes that headache by:

- Taking a product name from a chat message.
- Using Decodo to handle the hard part: searching Google and scraping the product pages without getting blocked.
- Using AI to read the messy HTML and pull out just the price and product name.
- Sending a clean "Best Price" summary back to the user instantly.

## How to Configure It

### Telegram Setup

Create a bot with BotFather and paste your token into the Telegram node. Make sure your webhook is set up so the bot actually "hears" the messages.

### Decodo

This is the engine that makes the workflow reliable. You'll need to add your Decodo API key in the credentials. We used Decodo here specifically because it handles the proxies and browser fingerprinting for you, so your Amazon requests actually go through instead of failing.

### AI Setup

Plug in your OpenAI API key (or swap the node for Claude/Gemini if you prefer). The system prompt is already set up to ignore ads and find the real price, but feel free to tweak the tone.

## How It Works

1. **Trigger:** You text the bot a product name (e.g., "Sony XM5").
2. **Search:** The workflow asks Decodo to Google that specific term on sites like Amazon.eg.
3. **Scrape:** It grabs the URLs and passes them back to Decodo to fetch the page content safely.
4. **Extract:** The AI reads through the text, finds the lowest price, and ignores the clutter.
5. **Reply:** The bot texts you back with the best deal found.

## Customization Ideas

- **Go wider:** Edit the search query to check other stores like Noon or Carrefour.
- **Track trends:** Connect a Google Sheet to log what people are searching for, which is great for market research.

## If you need any help

Get In Touch
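Once the AI has extracted an offer from each scraped page, picking the winner is a simple comparison. A minimal sketch of that final step, assuming the AI returns one `{store, product, price}` object per page (this shape is an assumption, not the template's actual output schema):

```javascript
// Minimal sketch of the "best price" comparison after AI extraction.
// Assumes one {store, product, price} object per scraped page.
function bestPrice(offers) {
  // Ignore pages where the AI couldn't find a usable numeric price
  const valid = offers.filter(o => typeof o.price === 'number' && o.price > 0);
  if (valid.length === 0) return null;
  const best = valid.reduce((a, b) => (b.price < a.price ? b : a));
  return `Best price: ${best.product} at ${best.store} for ${best.price}`;
}
```

Filtering out non-numeric prices first matters here, because AI extraction against messy store HTML occasionally returns nothing for a page.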
by Incrementors
## Description

Submit your page URL, a competitor's page URL, and a target keyword using a simple form. The workflow automatically scrapes both pages, strips all HTML, and sends the full comparison to GPT-4o-mini for analysis. Within seconds, a structured 6-section content gap report lands in your Slack channel, ready to act on. Built for SEO teams, content strategists, and agency analysts who need fast, repeatable competitor insights.

## What This Workflow Does

- **Parallel page scraping** - Fetches your page and the competitor's page simultaneously, so you get results faster, not one site at a time
- **HTML cleaning** - Strips all scripts, ads, and navigation clutter from both pages, leaving only the actual content GPT-4o-mini needs to compare
- **Content gap identification** - AI pinpoints exactly which topics, subtopics, and questions your page is missing that the competitor already covers
- **Competitive advantage mapping** - Surfaces what your page has that the competitor lacks, so you know what to protect and promote
- **Priority action list** - Delivers 5 concrete, ranked improvements specific to your page, not generic SEO advice
- **Token-efficient processing** - Caps each page at 8,000 characters so every run stays fast and API costs stay predictable
- **Slack report delivery** - Posts the full 6-section analysis with business name, keyword, both URLs, and run date directly to your team channel, ready to act on or forward to a client

## Setup Requirements

**Tools Needed:**

- n8n instance (self-hosted or cloud)
- OpenAI account with GPT-4o-mini API access
- Slack workspace with an OAuth2 app configured

**Estimated Setup Time:** 10-15 minutes

## Step-by-Step Setup

1. **Import the workflow** - Open n8n → Workflows → Import from JSON → paste the workflow JSON → click Import
2. **Connect your OpenAI credential** - Go to node *10. OpenAI - GPT-4o-mini Model* → click the credential dropdown → add your OpenAI API key → test the connection
3. **Connect your Slack credential** - Go to node *12. Slack - Send Gap Report* → click the credential dropdown → select OAuth2 → follow the Slack OAuth flow to connect your workspace
4. **Set your Slack channel** - In node *12. Slack - Send Gap Report*, set the channel field to the channel name where reports should be posted (e.g. #seo-reports)
5. **Activate the workflow** - Toggle the workflow to Active → copy the Form URL from node *1. Form - Submit Page URLs* → open it in a browser to test

> **Bot-protected sites:** Some sites return a 403 Forbidden error when scraped. If this happens, open nodes *3. HTTP - Scrape Your Page* and *4. HTTP - Scrape Competitor Page* and add a header with Name = User-Agent and Value = Mozilla/5.0 (compatible; n8n-bot/1.0) in both nodes.

## How It Works (Step by Step)

**Step 1 - Form: Submit Page URLs.** You open the form URL in a browser and fill in four fields: your page URL, the competitor's page URL, the target keyword, and your business name. Submitting the form kicks off the entire workflow automatically.

**Step 2 - Set: Extract Form Fields.** All four form inputs are mapped to clean named variables. A run timestamp is automatically added so every report is dated. These variables flow into every downstream step.

**Step 3 - HTTP: Scrape Your Page (parallel).** An HTTP request fetches the full HTML content of your page. This step runs at the same time as Step 4, so both pages are retrieved simultaneously without waiting.

**Step 4 - HTTP: Scrape Competitor Page (parallel).** An identical HTTP request fetches the competitor's page in parallel with Step 3. Both pages are ready at the same time.

**Step 5 - Code: Clean Your Page HTML.** A code step removes all script tags, style tags, and HTML markup from your page. The result is plain readable text, trimmed to 8,000 characters to keep AI costs low and responses fast.

**Step 6 - Code: Clean Competitor Page HTML.** The same cleaning process runs on the competitor's page. This step also carries forward all the form variables (keyword, URLs, business name, run date) so nothing is lost in the merge.

**Step 7 - Merge: Combine Both Pages.** Both cleaned page texts, yours and the competitor's, flow into a merge step that combines them into a single pipeline for the next step.

**Step 8 - Code: Combine Page Data.** A code step safely joins both items into one clean object. If either page failed to scrape, it uses a fallback message instead of crashing the workflow.

**Step 9 - AI Agent: Gap Analyzer.** GPT-4o-mini receives both page texts, the target keyword, business name, and both URLs. It produces a plain-text 6-section analysis: keyword usage comparison, topics your page is missing, topics you have that the competitor lacks, content depth and quality comparison, five priority actions ranked by impact, and a quick 3-sentence verdict.

**Step 10 - OpenAI: GPT-4o-mini Model.** This is the language model powering the AI Agent. It is configured with a temperature of 0.4 for consistent, factual analysis and a max token limit of 1,500 to keep reports concise.

**Step 11 - Set: Prepare Slack Message.** All report fields are assembled into a single clean object: the AI analysis, both URLs, target keyword, business name, and run date. This is the complete payload that goes to Slack.

**Step 12 - Slack: Send Gap Report.** The full report is posted to your Slack channel in a formatted message. It includes the business name, keyword, run date, both URLs, the full 6-section AI analysis, and a footer noting the report was generated by n8n + GPT-4o-mini.
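The HTML-cleaning steps (5 and 6) can be sketched as follows. This mirrors the behavior described above (strip scripts, styles, and tags, then cap at 8,000 characters), though the template's actual node code may differ in detail:

```javascript
// Sketch of the HTML-cleaning code step: strip scripts, styles, and tags,
// collapse whitespace, and cap the result at 8,000 characters.
function cleanHtml(html, limit = 8000) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, ' ')  // drop script blocks and contents
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')    // drop style blocks and contents
    .replace(/<[^>]+>/g, ' ')                     // drop remaining tags
    .replace(/\s+/g, ' ')                         // collapse whitespace
    .trim()
    .substring(0, limit);                         // token budget control
}
```

This `.substring(0, 8000)` cap is the value the customisation section suggests raising (e.g. to 12000) for deeper analysis of long-form pages, at the cost of more GPT tokens.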
## Key Features

- **Parallel scraping** - Both pages are fetched at the same time, not one after the other, saving you time on every run
- **Auto HTML stripping** - Scripts, styles, and all tags are removed automatically; no manual cleanup needed
- **Token budget control** - Each page is hard-capped at 8,000 characters so API costs stay predictable
- **Fallback handling** - If a page fails to scrape, the workflow continues and notes the failure rather than crashing
- **6-section structured report** - Every report follows the same format so results are easy to compare across competitors and dates
- **Slack delivery with metadata** - Reports arrive with business name, keyword, run date, and both URLs for full context
- **Plain text output** - No markdown symbols in the AI analysis, making it easy to paste directly into a doc or client report
- **One-form trigger** - The whole workflow starts with a single form submission; no coding, no manual steps

## Customisation Options

- **Change the text limit per page** - In nodes *5. Code - Clean Your Page HTML* and *6. Code - Clean Competitor Page HTML*, change `.substring(0, 8000)` to a higher number (e.g. 12000) if you want deeper analysis on long-form pages. Note this will increase GPT token usage.
- **Add email delivery** - After node *11. Set - Prepare Slack Message*, add a Gmail or SMTP node to also send the report by email. Use the same `gapReport` variable for the email body.
- **Save reports to Google Sheets** - Add a Google Sheets node after the Slack node to log every run: business name, keyword, date, competitor URL, and a summary of the verdict section.
- **Schedule weekly competitor checks** - Replace the form trigger with a Schedule trigger and a Set node with hardcoded URLs and keywords to automatically run gap analysis every Monday morning.
- **Expand the AI report sections** - In node *9. AI Agent - Gap Analyzer*, edit the prompt to add a Section 7 covering suggested internal links, or a Section 8 comparing schema markup signals.
## Troubleshooting

**OpenAI credential not working:**

- Confirm you added the API key in node *10. OpenAI - GPT-4o-mini Model*, not elsewhere
- Check that your OpenAI account has available credits
- Make sure you are using a key with access to GPT-4o-mini (not a restricted key)

**Scraping returns a 403 or empty result:**

- Add a User-Agent header to both *3. HTTP - Scrape Your Page* and *4. HTTP - Scrape Competitor Page*
- Header Name: User-Agent, Value: Mozilla/5.0 (compatible; n8n-bot/1.0)
- Some enterprise or Cloudflare-protected sites cannot be scraped; try the mobile version of the URL instead

**Slack message not arriving:**

- Confirm the OAuth2 credential in node *12. Slack - Send Gap Report* is connected and authorised
- Check that the channel name is correct and the bot has been invited to that channel
- In Slack, go to the channel → click the channel name → Integrations → confirm the n8n app is listed

**Report is too short or generic:**

- The page text may have been mostly scripts with little readable content; check the cleaned text in node 5 or 6 by running a test
- Try a different URL format (e.g. without a trailing slash) or the AMP version of the page
- Increase the max token setting in node 10 from 1500 to 2000 for more detailed output

**Form submission not triggering the workflow:**

- Make sure the workflow is set to Active (toggle in the top right of the workflow editor)
- Copy the Form URL fresh from node *1. Form - Submit Page URLs* after activating; inactive workflows generate a test URL, not a live one

## Support

Need help setting this up, or want a custom version built for your team or agency?

- Email: info@incrementors.com
- Website: https://www.incrementors.com/contact-us/
by Robert Breen
This n8n workflow template automatically processes phone interview transcripts using AI to evaluate candidates against specific criteria and saves the results to Google Sheets. Perfect for HR departments, recruitment agencies, or any business conducting phone screenings.

## What This Workflow Does

This automated workflow:

1. Receives phone interview transcripts via webhook
2. Uses OpenAI GPT models to analyze candidate responses against predefined qualification criteria
3. Extracts key information (name, phone, location, qualification status)
4. Automatically saves structured results to a Google Sheet for easy review and follow-up

The workflow is specifically designed for driving-job interviews but can be easily adapted for any position with custom evaluation criteria.

## Tools & Services Used

- **n8n** - Workflow automation platform
- **OpenAI API** - AI-powered transcript analysis (GPT-4o-mini)
- **Google Sheets** - Data storage and management
- **Webhook** - Receiving transcript data

## Prerequisites

Before implementing this workflow, you'll need:

- **n8n instance** - Self-hosted or cloud version
- **OpenAI API account** - For AI transcript processing
- **Google account** - For Google Sheets integration
- **Phone interview system** - One that can send webhooks (like Vapi.ai)

## Step-by-Step Setup Instructions

### Step 1: Set Up OpenAI API Access

1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Generate a new API key
5. Copy and securely store your API key

### Step 2: Create Your Google Sheet

**Option 1: Use Our Pre-Made Template (Recommended)**

1. Copy our template: Driver Interview Results Template
2. Click "File" → "Make a copy" to create your own version
3. Rename it as desired
4. Copy your new sheet's URL; you'll need this for the workflow

**Option 2: Create From Scratch**

1. Go to Google Sheets and create a new spreadsheet
2. Name it "Driver Interview Results" (or your preferred name)
3. Set up the following column headers in row 1: A1: name, B1: phone, C1: cityState, D1: qualifies, E1: reasoning
4. Copy the Google Sheet URL; you'll need this for the workflow

### Step 3: Import and Configure the N8N Workflow

**Import the Workflow**

1. Copy the workflow JSON from the template
2. In your n8n instance, go to Workflows → Import from JSON
3. Paste the JSON and import

**Configure OpenAI Credentials**

1. Click on either "OpenAI Chat Model" node
2. Set up credentials using your OpenAI API key
3. Test the connection to ensure it works

**Configure Google Sheets Integration**

1. Click on the "Save to Google Sheets" node
2. Set up Google Sheets OAuth2 credentials
3. Select your spreadsheet from the dropdown
4. Choose the correct sheet (usually "Sheet1")

**Update the Webhook**

1. Click on the "Webhook" node
2. Note the webhook URL that n8n generates
3. This URL will receive your transcript data

### Step 4: Customize Evaluation Criteria

The workflow includes predefined criteria for a Massachusetts driving job. To customize for your needs:

1. Click on the "Evaluate Candidate" node
2. Modify the system message to include your specific requirements
3. Update the evaluation criteria checklist
4. Adjust the JSON output format if needed

**Current Evaluation Criteria:**

- Valid Massachusetts driver's license
- No felony convictions
- Clean driving record (no recent tickets/accidents)
- Willing to complete a background check
- Can pass a drug test (including marijuana)
- Available full-time Monday-Friday
- Lives in Massachusetts

### Step 5: Connect to Vapi.ai (Phone Interview System)

This workflow is specifically designed to work with Vapi.ai's phone interview system.
Here's how to connect it:

### Setting Up the Vapi Integration

**Copy Your n8n Webhook URL**
- In your n8n workflow, click on the "Webhook" node
- Copy the webhook URL (it should look like: `https://your-n8n-instance.com/webhook-test/351ffe7c-69f2-4657-b593-c848d59205c0`)

**Configure Your Vapi Assistant**
- Log into your Vapi.ai dashboard
- Create or edit your phone interview assistant
- In the assistant settings, find the "Server" section
- Set the Server URL to your n8n webhook URL
- Set the timeout to 20 seconds (as configured in the workflow)

**Configure Server Messages**
In your Vapi assistant settings, enable these server messages:
- end-of-call-report
- transcript[transcriptType="final"]

**Set Up the Interview Script**
- Use the provided interview script in your Vapi assistant (found in the workflow's system message)
- This ensures consistent data collection for the AI evaluation

### Expected Data Format from Vapi

The workflow expects Vapi to send data in this specific format:

```json
{
  "body": {
    "message": {
      "artifact": {
        "transcript": "AI: Hi. Are you interested in driving for Bank of Transport?\nUser: Yes.\nAI: Great. Before we go further..."
      }
    }
  }
}
```

### Vapi Configuration Checklist

- ✅ Webhook URL set in Vapi assistant server settings
- ✅ Server messages enabled: end-of-call-report, transcript[transcriptType="final"]
- ✅ Interview script configured in assistant
- ✅ Assistant set to send webhooks on call completion

### Alternative Phone Systems

If you're not using Vapi.ai, you can adapt this workflow for other phone systems by:
- Modifying the "Edit Fields2" node to extract transcripts from your system's data format
- Updating the webhook data structure expectations
- Ensuring your phone system sends the complete interview transcript

### Step 6: Test the Workflow

**Test with Sample Data**
- Use the "Execute Workflow" button with test data
- Verify that data appears correctly in your Google Sheet
- Check that the AI evaluation logic works as expected

**End-to-End Testing**
- Send a test webhook with a real transcript
- Monitor each step of the workflow
- Confirm the final result is saved to Google Sheets

### Workflow Node Breakdown

1. **Webhook** - Receives transcript data from your phone system
2. **Edit Fields2** - Extracts the transcript from the incoming data
3. **Evaluate Candidate** - AI analysis using GPT-4o-mini to assess qualification
4. **Convert to JSON** - Ensures proper JSON formatting with a structured output parser
5. **Save to Google Sheets** - Automatically logs results to your spreadsheet

### Customization Options

**Modify Evaluation Criteria**
- Edit the system prompt in the "Evaluate Candidate" node
- Add or remove qualification requirements
- Adjust the scoring logic

**Change Output Format**
- Modify the JSON schema in the "Structured Output Parser" node
- Update the Google Sheets column mapping accordingly

**Add Additional Processing**
- Insert nodes for email notifications
- Add Slack/Discord alerts for qualified candidates
- Integrate with your CRM or ATS system

### Troubleshooting

Common issues:
- **OpenAI API Errors**: Check API key validity and billing status
- **Google Sheets Not Updating**: Verify OAuth permissions and sheet access
- **Webhook Not Receiving Data**: Confirm the URL and POST format from your phone system
- **AI Evaluation Inconsistencies**: Refine the system prompt with more specific criteria

### Usage Tips

- **Monitor Token Usage**: OpenAI charges per token, so monitor your usage
- **Regular Review**: Periodically review AI evaluations for accuracy
- **Backup Data**: Export Google Sheets data regularly for backup
- **Privacy Compliance**: Ensure transcript handling complies with local privacy laws

### Need Help with Implementation?

For professional setup, customization, or troubleshooting of this workflow, contact:

Robert - Ynteractive Solutions
- **Email**: rbreen@ynteractive.com
- **Website**: www.ynteractive.com
- **LinkedIn**: linkedin.com/in/robert-interactive

Specializing in AI-powered workflow automation, business process optimization, and custom integration solutions.
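The extraction performed by the "Edit Fields2" node can be sketched in plain JavaScript. This is a minimal sketch assuming the Vapi payload shape shown above; the optional chaining guards against interim server messages that carry no artifact yet.

```javascript
// Pull the final transcript out of a Vapi-style webhook payload.
// Returns null when the payload has no transcript (e.g. interim events).
function extractTranscript(payload) {
  return payload?.body?.message?.artifact?.transcript ?? null;
}

const sample = {
  body: {
    message: {
      artifact: {
        transcript:
          "AI: Hi. Are you interested in driving for Bank of Transport?\nUser: Yes.",
      },
    },
  },
};

console.log(extractTranscript(sample)); // the transcript string
console.log(extractTranscript({ body: {} })); // null
```

If you adapt the workflow to another phone system, this is the one function-sized piece of logic you need to change: map your provider's payload shape to a single transcript string.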
by Oneclick AI Squad
This AI-powered workflow monitors trending topics across multiple social platforms, generates creative post ideas with captions and visual suggestions, and recommends optimal posting times based on engagement data.

### How it works

1. **Trigger** - Runs on a schedule or webhook to start trend monitoring
2. **Fetch Trends** - Pulls trending topics from Twitter, Reddit, and Google Trends
3. **Wait & Aggregate** - Allows trend data to settle for better analysis
4. **Filter & Parse** - JavaScript code filters relevant trends for your niche
5. **AI Content Generation** - Claude creates post ideas, captions, and hashtags
6. **Visual Suggestions** - Recommends image/video concepts
7. **Wait for Analysis** - Pauses before engagement-time calculation
8. **Optimal Timing** - JavaScript calculates the best posting times
9. **Log & Track** - Records all ideas in Google Sheets
10. **Response** - Returns ready-to-use content ideas

### Setup Steps

1. Import this workflow into your n8n instance
2. Configure credentials:
   - **Twitter API v2** - For trending hashtags and topics
   - **Reddit API** - For subreddit trending posts
   - **Google Trends** (no auth) - For search trends
   - **Anthropic API** - For Claude AI content generation
   - **Google Sheets** - To track generated ideas
3. Update your brand profile and niche in the config node
4. Set your target social platforms and audience
5. Activate the workflow

### Sample Trigger Payload

```json
{
  "platforms": ["twitter", "instagram", "linkedin"],
  "niche": "AI & Technology",
  "trendSources": ["twitter", "reddit", "google"],
  "contentTypes": ["educational", "entertaining", "news"],
  "targetAudience": "tech professionals, 25-45",
  "brandVoice": "professional yet approachable",
  "minTrendScore": 60,
  "maxIdeasPerTrend": 3,
  "includeVisuals": true
}
```

### Features

- **Multi-platform trend monitoring** (Twitter, Reddit, Google Trends)
- **AI-powered content generation** with brand voice matching
- **Visual concept suggestions** for each post idea
- **Optimal timing recommendations** based on engagement patterns
- **Hashtag strategy** with trending and niche tags
- **Content calendar integration** via Google Sheets
- **Duplicate prevention** - tracks used trends
- **Performance tracking** - logs which ideas perform best
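The "Filter & Parse" step can be approximated in plain JavaScript. A minimal sketch — the `topic` and `score` field names are illustrative assumptions, not the template's actual schema, and the inputs mirror `niche` and `minTrendScore` from the sample payload:

```javascript
// Keep only trends that clear a minimum score and mention a niche keyword.
// Field names (topic, score) are illustrative, not the template's schema.
function filterTrends(trends, { keywords, minScore }) {
  const lowered = keywords.map((k) => k.toLowerCase());
  return trends
    .filter((t) => t.score >= minScore)
    .filter((t) => lowered.some((k) => t.topic.toLowerCase().includes(k)));
}

const trends = [
  { topic: "New AI model released", score: 82 },
  { topic: "Celebrity gossip", score: 95 },
  { topic: "AI chips shortage", score: 55 },
];

const relevant = filterTrends(trends, {
  keywords: ["AI", "technology"], // from your niche config
  minScore: 60,                   // maps to minTrendScore
});
console.log(relevant.map((t) => t.topic)); // [ 'New AI model released' ]
```

Substring matching is deliberately crude here; for a real niche filter you would likely match whole words or use the trend source's own category tags.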
by Pratyush Kumar Jha
## Deep Multiline Icebreaker – AI-driven research + personalized cold outreach

Deep Multiline Icebreaker automates high-quality, research-led outreach. Feed it a list of leads (emails + websites) and a short product brief via the built-in form; the workflow scrapes each company's site, extracts meaningful page content, uses GPT to produce concise page abstracts, aggregates insights, and then generates tailored, multi-line cold email bodies (JSON). Final outreach rows are appended automatically to a Google Sheet so you can review, sequence, or plug into your outreach stack.

This template is built for SDRs, growth folks, and agencies who want dramatically better reply rates by replacing generic blasts with short, highly specific icebreakers that reference subtle site signals. It's opinionated (focuses on non-obvious details and a concise, credible tone) but easy to tweak: prompts, output format, and the Google Sheet mapping are all editable inside n8n.

### How it works

1. **Form trigger** - you submit product details, target designation, location, etc.
2. **Leads fetch** - the workflow calls an external leads scraper (Apify act) to retrieve potential contacts.
3. **Filter & normalize** - only rows with website + email proceed; links are normalized (relative/absolute handling).
4. **Scrape & convert** - the homepage and linked pages are fetched and converted to Markdown for clean input.
5. **Summarize (GPT)** - each page is summarized into a two-paragraph abstract.
6. **Aggregate & generate** - abstracts are aggregated and GPT generates a tailored multi-line icebreaker JSON (subject + body).
7. **Append to Google Sheets** - the resulting outreach content + lead metadata is appended to your sheet.

### Nodes of interest you can edit

- On form submission1
- Leads Scraper1
- Scrape Home1
- Summarize Website Page1
- Generate Multiline Icebreaker1
- Add Row1

### Quick Setup Guide

- Demo & Setup Video
- Sheet Template
- Course

### What you'll need (credentials)

- OpenAI API key (used by Summarize Website Page1 and Generate Multiline Icebreaker1).
- Google Sheets OAuth (write access for Add Row1).
- Apify (or your leads-source) API token for Leads Scraper1 (the template calls an Apify act).
- Optional: outbound HTTP access from your n8n host to target websites.

### Recommended settings & best practices

- **Limit batch sizes** (the template uses Limit1 set to 3 by default) - ramp maxItems up slowly to respect rate limits and token costs.
- **Prompt tweaks** - open the Generate Multiline Icebreaker1 prompt to tune tone, cost framing, or add product-specific selling points.
- **Deduplication** - Remove Duplicate URLs1 is included; keep it ON to avoid repeated scraping.
- **Privacy** - don't store PII longer than necessary; if you store outreach drafts, ensure your Google Sheet access is restricted.
- **Cost control** - set temperature lower (0-0.6) for more consistent outputs and monitor your OpenAI usage.

### Customization ideas

- Swap the GPT model name or change the prompt to produce shorter cold SMS or LinkedIn messages.
- Replace Apify with your own lead source (CSV upload, CRM query, or Airtable).
- Add an approval step (Slack/Email) before rows are appended to Google Sheets.
- Add a follow-up sequence generator that writes 2-3 follow-up messages per lead.

### Troubleshooting quick tips

- If pages return empty abstracts, check Request web page for URL1 and network access / user-agent restrictions.
- If outputs are malformed JSON, open the Generate Multiline Icebreaker1 node and validate the JSON output option.
- If Google Sheets fails, re-authorize the Google Sheets credential and ensure the sheet ID & sheet name are correct.

### Tags / Suggested listing fields

outreach, lead-gen, sales-automation, openai, web-scraping, google-sheets
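The relative/absolute link handling mentioned in the "Filter & normalize" step can be sketched with the standard `URL` constructor, which resolves relative paths against a base and passes absolute URLs through unchanged:

```javascript
// Resolve relative links against the page they were scraped from;
// absolute links are returned as-is by the URL constructor.
function normalizeLink(href, baseUrl) {
  return new URL(href, baseUrl).toString();
}

console.log(normalizeLink("/pricing", "https://example.com/about"));
// https://example.com/pricing
console.log(normalizeLink("https://other.com/x", "https://example.com/"));
// https://other.com/x
```

In an n8n Code node you would apply this to every link extracted from the scraped homepage before the "Scrape Home1" fetches run.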
by Davide
This workflow is designed to fully automate the creation and publishing of Instagram marketing content by combining AI-powered text generation, image creation with Bytedance Seedream 4.0, and social media scheduling into a single streamlined pipeline. Starting from a simple prompt and reference images, the workflow generates both the visual asset and the caption, then automatically publishes the final content to Instagram. It eliminates the need for manual design, copywriting, and posting, enabling a faster and more scalable content production process.

### Key Advantages

1. **End-to-End Automation** - The workflow fully automates the process from initial input (prompt + reference images) to final delivery (Instagram post), eliminating manual content creation and publishing steps.
2. **AI-Powered Content Creation** - It leverages advanced AI models to generate engaging, platform-optimized Instagram captions, create branded visual assets from prompts and reference images, and ensure content is creative, relevant, and ready for social media.
3. **Automated Visual Generation** - Integrates AI image editing (Kie AI Seedream) to automatically produce high-quality, branded visuals without the need for design tools or manual editing.
4. **Social Media Optimization** - Captions are generated with built-in best practices, including platform-specific tone and structure, hashtag optimization for reach and discoverability, and engagement-driven calls to action.
5. **Scalability** - The workflow structure allows easy scaling to generate and publish large volumes of content across campaigns with minimal effort.
6. **Modular & Extensible Design** - Each step (input setup, caption generation, image creation, upload, publishing) is modular, making it easy to swap AI models, add new social platforms, or extend functionality with additional services.
7. **Automated Publishing & Scheduling** - Integration with Postiz enables seamless upload and scheduling of posts directly to Instagram, streamlining the entire publishing pipeline.
8. **Reliability with Asynchronous Processing** - The wait/resume mechanism ensures that image generation tasks are completed before proceeding, improving workflow stability and preventing failures.
9. **Consistency & Brand Alignment** - Using structured prompts and predefined logic ensures all generated content maintains a consistent brand voice, visual identity, and messaging across posts.

### How it works

This workflow automates the creation of a branded visual + Instagram caption, then posts the result to Instagram via Postiz.

1. A manual trigger starts the workflow.
2. **Set params** defines a creative prompt for generating an image and a reference logo URL.
3. **Normalize** converts the image URL(s) into a JSON array.
4. The workflow splits into two parallel branches:
   - **Image generation branch**: calls the Kie AI Seedream 4.0 Edit API with the prompt + image URL(s), waits for async completion via a webhook, extracts the generated image URL, and downloads the image binary.
   - **Caption generation branch**: sends the prompt to an OpenAI Chat Model, uses a Social Media Manager LLM chain to produce an Instagram caption with emojis and hashtags, and parses the JSON output into a clean caption.
5. **Merge** combines the downloaded image and the caption.
6. **Upload IG Image** sends the image to Postiz's upload endpoint.
7. The **Instagram** node schedules/posts the image + caption to Instagram.
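The Normalize step (turning the image URL input into a JSON array for the Seedream Edit API) can be sketched as follows. The comma-separated string input is an assumption about how the Set params node stores multiple URLs, not something the template guarantees:

```javascript
// Accept a single URL, a comma-separated string, or an array,
// and always return a clean array of URL strings for the API call.
function toUrlArray(input) {
  if (Array.isArray(input)) return input.map((u) => u.trim()).filter(Boolean);
  return String(input)
    .split(",")
    .map((u) => u.trim())
    .filter(Boolean);
}

console.log(toUrlArray("https://a.com/logo.png"));
console.log(toUrlArray("https://a.com/1.png, https://a.com/2.png"));
```

Normalizing here means the downstream HTTP Request node can always send the same JSON shape regardless of how many reference images were provided.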
### Set up steps

1. Add credentials:
   - **OpenAi account** - OpenAI API key
   - **Kie AI** - token from Kie.ai API credentials
   - **Postiz account** - Postiz API credentials
2. Adjust the prompt (optional): edit the Set params node PROMPT value to change the image-generation text.
3. Confirm the webhook for the async wait: the Wait node uses a resume webhook, so ensure the webhook URL is reachable by Kie AI.
4. Validate the Postiz integration: check that the Instagram node has the correct Postiz integration ID.
5. Activate the workflow (if scheduling is needed, replace the Manual Trigger with a schedule trigger).
6. Execute manually to test the full flow from prompt → image → caption → Instagram post.

Subscribe to my new YouTube channel, where I'll share videos and Shorts with practical tutorials and free templates for n8n.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
by Amirul Hakimi
## Enrich CRM Leads with LinkedIn Company Data Using AI

### Who's it for

Sales teams, marketers, and business development professionals who need to automatically enrich their CRM records with detailed company information from LinkedIn profiles. Perfect for anyone doing B2B outreach who wants to personalize their messaging at scale.

### What it does

This workflow transforms bare-bones lead records into rich, personalized prospect profiles by:
- Automatically scraping LinkedIn company profiles
- Using AI (GPT-4) to extract key business intelligence
- Generating 15+ email-ready personalization variables
- Updating your CRM with structured, actionable data

The workflow pulls company overviews, products/services, funding information, and recent posts, and converts everything into natural-language variables that can be dropped directly into your outreach templates.

### How it works

1. **Trigger**: Workflow starts when a new lead is added to Airtable (or on a schedule)
2. **Fetch**: Retrieves the lead record containing the LinkedIn company URL
3. **Scrape**: Pulls the raw HTML from the company's LinkedIn profile
4. **Clean**: Strips HTML tags and formats content for AI processing
5. **Analyze**: GPT-4 extracts structured company intelligence (overview, products, market presence, recent posts)
6. **Transform**: Converts the analysis into 15+ email-ready variables with natural phrasing
7. **Update**: Writes enriched data back to your CRM

### Setup Requirements

- **Airtable account** (the free tier works fine)
- **OpenAI API key** (GPT-4o-mini recommended for cost-effectiveness)
- **LinkedIn company URLs** stored in your CRM
- **5 minutes** for initial configuration

### How to set up

1. **Configure the Airtable connection**
   - Replace YOUR_AIRTABLE_BASE_ID with your base ID
   - Replace YOUR_TABLE_ID with your leads table ID
   - Ensure your table has a "LinkedIn Organization URL" field
   - Add your Airtable API credentials
2. **Add OpenAI credentials**
   - Click on both OpenAI nodes
   - Add your OpenAI API key
   - GPT-4o-mini is recommended (cost-effective and fast)
3. **Set up the trigger**
   - Add a trigger node (Schedule, Webhook, or Airtable trigger)
   - Configure it to run when new leads are added, or on a daily schedule
4. **Test the workflow**
   - Add a test lead with a LinkedIn company URL
   - Execute the workflow
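The Clean step (stripping HTML tags before the page is handed to GPT-4) can be sketched with a regex-based approach. This is a crude but serviceable sketch for preparing LLM input; a real HTML parser would be more robust against malformed markup:

```javascript
// Strip script/style blocks and tags, then collapse whitespace,
// so the language model sees plain text instead of raw HTML.
function cleanHtml(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

const page =
  "<html><body><h1>Acme Corp</h1><script>track()</script><p>We build rockets.</p></body></html>";
console.log(cleanHtml(page)); // "Acme Corp We build rockets."
```

Cleaning before the Analyze step matters for cost as much as quality: raw LinkedIn HTML is mostly markup, so stripping it sharply reduces the tokens GPT-4 has to read.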