by Intuz
This n8n template from Intuz provides a complete solution to automate your entire invoicing process. It intelligently syncs confirmed sales orders from your Airtable base to QuickBooks, automatically creating new customers if they don't exist before generating a perfectly matched invoice. It then logs all invoice details back into Airtable, creating a flawless, end-to-end financial workflow.

Use Cases

1. Accounting & Finance Teams: Automatically generate QuickBooks invoices from new orders confirmed in Airtable. Keep all invoices and customer details synced across systems in real time.
2. Sales & Operations Teams: Track order status and billing progress directly from Airtable without switching platforms. Ensure every confirmed sale automatically triggers an invoice in QuickBooks.
3. Business Owners / Admins: Eliminate double-entry between Airtable and QuickBooks. Maintain accurate, audit-ready financial records with minimal effort.

How it works

1. Trigger from Airtable: The workflow starts instantly when a sales order is ready to be invoiced in your Airtable base (triggered via a webhook).
2. Check for Customer in QuickBooks: It searches your QuickBooks account to see if the customer from the sales order already exists.
3. Create New Customer (If Needed): If the customer is not found, it automatically creates a new customer record in QuickBooks using the details from your Airtable Customers table.
4. Create QuickBooks Invoice: Using the correct customer record (either existing or newly created), it gathers all order line items from Airtable and generates a detailed invoice in QuickBooks.
5. Log Invoice Back to Airtable: After the invoice is successfully created, the workflow updates your Airtable base by adding a new record to your Invoices & Payments table and updating the original Confirmed Orders record with the new QuickBooks Invoice ID, marking it as synced.

Key Requirements to Use This Template

1. n8n Instance: An active n8n account (Cloud or self-hosted).
2. Airtable Base: An Airtable base on a "Pro" plan or higher with tables for Confirmed Orders, Customers, Order Lines, Product & Service, and Invoices & Payments. Field names must match those in the setup guide.
3. QuickBooks Online Account: An active QuickBooks Online account with API access.

Step-by-Step Setup Instructions

Step 1: Import and Configure the n8n Workflow

- **Import Workflow:** In n8n, import the Client-Quickbook-Invoices-via-AirTable.json file.
- **Get Webhook URL:** Click on the first node, "Webhook". Copy the "Test URL". Keep this n8n tab open.
- **Configure Airtable Nodes:** There are six Airtable nodes. For each one, connect your Airtable credentials and select the correct Base and Table.
- **Configure QuickBooks Nodes:** There are four QuickBooks-related nodes. For each one, connect your QuickBooks Online credentials.
- **CRITICAL:** Click on the "Create Invoice URL" (HTTP Request) node. You must edit the URL and replace the placeholder number (9341455145770046) with your own QuickBooks Company ID. (Find this in your QuickBooks account settings under "Billing & Subscription".)
- **Save and Activate:** Click "Save", then toggle the workflow to "Active". After activating, copy the new "Production URL" from the Webhook node.

Customization Guide

You can adapt this template for various workflows by tweaking a few nodes:

- **Use a different Airtable Base:** Update the Base ID and Table ID in all Airtable nodes (Get Orders Records, Get Customer Details, Get Products, etc.).
- **Switch from Sandbox to Live QuickBooks:** Replace the Sandbox company ID and endpoint in the "Create Invoice URL" node with your production QuickBooks company ID.
- **Add more invoice details:** Edit the Code and Parse in HTTP nodes to include additional fields (like Tax, Shipping, or Notes); a payload sketch is shown after this section.
- **Support multiple currencies:** Add a "Currency" field mapping in both Airtable and QuickBooks nodes.

Connect with us

Website: https://www.intuz.com/services
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz

For Custom Workflow Automation, click here: Get Started
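For reference, the body that the "Create Invoice URL" HTTP Request node posts to QuickBooks Online looks roughly like the sketch below, built here in an n8n Code node from Airtable order lines. This is a hedged example: the field names (ItemRefId, Quantity, UnitPrice, CustomerId) are illustrative assumptions, not the template's exact Airtable fields.

```javascript
// Hedged sketch: build a QuickBooks Online invoice payload from Airtable order lines.
// Field names (ItemRefId, Quantity, UnitPrice, CustomerId) are illustrative assumptions.
const orderLines = $input.all().map(item => item.json);

const lines = orderLines.map(line => ({
  DetailType: 'SalesItemLineDetail',
  Amount: line.Quantity * line.UnitPrice,
  Description: line.Description,
  SalesItemLineDetail: {
    ItemRef: { value: String(line.ItemRefId) }, // QuickBooks Item ID
    Qty: line.Quantity,
    UnitPrice: line.UnitPrice,
  },
}));

// Payload for POST .../v3/company/<YOUR_COMPANY_ID>/invoice
return [{
  json: {
    CustomerRef: { value: String(orderLines[0].CustomerId) },
    Line: lines,
  },
}];
```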
by Ranjan Dailata
Who this is for

This workflow is designed for:

- Automation engineers building AI-powered data pipelines
- Product managers & analysts needing structured insights from web pages
- Researchers & content teams extracting summaries from documentation or articles
- HR, compliance, and knowledge teams converting unstructured web content into structured records
- n8n self-hosted users leveraging advanced scraping and LLM enrichment

It is ideal for anyone who wants to transform any public URL into structured data + clean summaries automatically.

What problem this workflow solves

Web content is often unstructured, verbose, and inconsistent, making it difficult to:

- Extract structured fields reliably
- Generate consistent summaries
- Reuse data across spreadsheets, dashboards, or databases
- Eliminate manual copy-paste and interpretation

This workflow solves the problem of turning arbitrary web pages into machine-readable JSON and human-readable summaries, without custom scrapers or manual parsing logic.

What this workflow does

The workflow integrates Decodo, Google Gemini, and Google Sheets to perform automated extraction of structured data. Here's how it works step-by-step:

1. Input Setup: The workflow begins when the user executes it manually or passes a valid URL. The input includes url.
2. Profile Extraction with Decodo: Accepts any valid URL as input and scrapes the page content using Decodo. Google Gemini then extracts structured data in JSON format and generates a concise, factual summary. The workflow cleans and parses the AI-generated JSON safely, merges the structured data and summary output, and stores the final result in Google Sheets for reporting or downstream automation.
3. JSON Parsing & Merging: The Code node cleans and parses the JSON output from the AI for reliable downstream use (see the sketch after the setup steps). The Merge node combines both structured data and the AI-generated summary.
4. Data Storage in Google Sheets: The Google Sheets node appends or updates the record, storing the structured JSON and summary in a connected spreadsheet.
5. End Output: Unified, machine-readable JSON plus an executive-level summary suitable for data analysis or downstream automation.

Setup Instructions

Prerequisites

- **n8n account** with workflow editor access
- **Decodo API credentials** - You need to register, log in, and obtain the Basic Authentication Token via the Decodo Dashboard
- **Google Gemini (PaLM) API access**
- **Google Sheets OAuth credentials**

Setup Steps

1. Import the workflow into your n8n instance.
2. Configure Credentials: Add your Decodo API credentials in the Decodo node. Connect your Google Gemini (PaLM) credentials for both AI nodes. Authenticate your Google Sheets account.
3. Edit Input Node: In the Set the Input Fields node, replace the default URL with your desired profile or dynamic data source.
4. Run the Workflow: Trigger manually or via webhook integration for automation. Verify that structured profile data and summary are written to the linked Google Sheet.
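As a reference for the JSON Parsing & Merging step, here is a hedged sketch of what the cleaning Code node might look like: it strips any markdown code fences Gemini wraps around its output and parses the JSON defensively. The output field name `text` is an assumption; the template's actual node output may differ.

```javascript
// Hedged sketch of the cleaning/parsing Code node.
// Assumes the Gemini node returns its answer in a field called `text`; adjust to your node's output.
const raw = $input.first().json.text ?? '';

// Remove markdown code fences such as ```json ... ``` that LLMs often add.
const cleaned = raw
  .replace(/```(?:json)?/gi, '')
  .replace(/```/g, '')
  .trim();

let structured;
try {
  structured = JSON.parse(cleaned);
} catch (error) {
  // Surface malformed output instead of silently passing bad data downstream.
  throw new Error(`Could not parse AI output as JSON: ${error.message}`);
}

return [{ json: { structured } }];
```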
How to customize this workflow to your needs

You can easily extend or adapt this workflow:

- Modify Structured Output: Change the Gemini extraction prompt to match your own JSON schema. Add required fields such as authors, dates, entities, or metadata.
- Improve Summarization: Adjust summary length or tone (technical, executive, simplified). Add multi-language summarization using Gemini.
- Change Output Destination: Replace Google Sheets with databases (Postgres, MySQL), Notion, Slack / Email, or file storage (JSON, CSV).
- Add Validation or Filtering: Insert IF nodes to reject incomplete data, detect errors or hallucinated output, or trigger alerts for malformed JSON.
- Scale the Workflow: Replace the manual trigger with a webhook, a scheduled trigger, or batch URL processing.

Summary

This workflow provides a powerful, generic solution for converting unstructured web pages into structured, AI-enriched datasets. By combining Decodo for scraping, Google Gemini for intelligence, and Google Sheets for persistence, it enables repeatable, scalable, and production-ready data extraction without custom scrapers or brittle parsing logic.
by Cheng Siong Chin
Introduction

Automates flight deal discovery and intelligent analysis for travel bloggers and deal hunters. Scrapes live pricing, enriches it with weather data, applies AI evaluation, and auto-publishes to WordPress, eliminating manual research and accelerating content delivery.

How It Works

The user submits a route via form; the workflow scrapes real-time flight prices and weather data, the AI analyzes deal quality considering weather conditions, formats the results, publishes to WordPress, and sends a Slack notification, fully automated from input to publication.

Workflow Template

Form Input → Extract Data → Scrape Flight Prices → Extract Pricing → Fetch Weather → Parse Weather → Prepare AI Input → AI Analysis → Parse Output → Format Results → Publish WordPress → Slack Alert → User Response

Setup Instructions

- Form Setup: Configure user input fields for flight routes and preferences
- APIs: Connect the Google Flights scraping endpoint, weather API credentials, and the OpenAI/Chat Model API key
- Publishing: Set WordPress credentials, target blog category, and Slack webhook URL
- AI Configuration: Define analysis prompts, output structure, and parser rules

Workflow Steps

- Data Collection: Form captures the route, scrapes Google Flights pricing, fetches destination weather via API
- AI Processing: Enriches flight data with weather context and analyzes deal quality using the OpenAI/Chat Model with structured output parsing (a sketch of the AI input preparation follows this section)
- Publishing: Formats analysis results, creates the WordPress post, sends a Slack notification, delivers the response to the user

Prerequisites

n8n instance, Google Flights access, weather API key, OpenAI/compatible AI service, WordPress site with API access, Slack workspace

Use Cases

Travel blog automation, flight deal newsletters, price comparison services, seasonal travel planning, destination weather analysis, automated social media content

Customization

Modify AI analysis criteria, adjust weather impact weighting, customize WordPress post templates, add email distribution, integrate additional data sources, expand to hotel/rental deals

Benefits

Eliminates manual price checking, combines multiple data sources automatically, delivers AI-enhanced insights, accelerates the publishing workflow, scales across unlimited routes, provides weather-aware recommendations
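A hedged sketch of what the "Prepare AI Input" Code node could look like, combining the scraped price with the parsed weather before handing the data to the AI node. The field names (origin, destination, price, currency, temperature, conditions) are illustrative assumptions, not the template's exact output keys.

```javascript
// Hedged sketch: merge pricing and weather data into a single prompt payload for the AI node.
// Assumes the pricing and weather branches were merged upstream; field names are illustrative.
const data = $input.first().json;

const prompt = [
  `Route: ${data.origin} -> ${data.destination}`,
  `Lowest fare found: ${data.price} ${data.currency}`,
  `Destination weather: ${data.temperature}°C, ${data.conditions}`,
  'Rate this deal (poor/fair/good/excellent), explain the rating,',
  'and note how the weather affects its attractiveness. Reply as JSON',
  'with keys: rating, summary, weather_impact.',
].join('\n');

return [{ json: { prompt } }];
```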
by PDF Vector
Overview

Transform your accounts payable department with this enterprise-grade invoice processing solution. This workflow automates the entire invoice lifecycle - from document ingestion through payment processing. It handles invoices from multiple sources (Google Drive, email attachments, API submissions), extracts data using AI, validates against purchase orders, routes for appropriate approvals based on amount thresholds, and integrates seamlessly with your ERP system. The solution includes vendor master data management, duplicate invoice detection, real-time spend analytics, and complete audit trails for compliance.

What You Can Do

This comprehensive workflow creates an intelligent invoice processing pipeline that monitors multiple input channels (Google Drive, email, webhooks) for new invoices and automatically extracts data from PDFs, images, and scanned documents using AI. It validates vendor information against your master database, matches invoices to purchase orders, and detects discrepancies. The workflow implements multi-level approval routing based on invoice amount and department, prevents duplicate payments through intelligent matching algorithms, and integrates with QuickBooks, SAP, or other ERP systems. Additionally, it generates real-time dashboards showing processing metrics and cash flow insights while sending automated reminders for pending approvals.

Who It's For

Perfect for medium to large businesses, accounting departments, and financial service providers processing more than 100 invoices monthly across multiple vendors. Ideal for organizations that need to enforce approval hierarchies and spending limits, require integration with existing ERP/accounting systems, want to reduce processing time from days to minutes, need audit trails and compliance reporting, and seek to eliminate manual data entry errors and duplicate payments.

The Problem It Solves

Manual invoice processing creates significant operational challenges including data entry errors (3-5% error rate), processing delays (8-10 days per invoice), duplicate payments (0.1-0.5% of invoices), approval bottlenecks causing late fees, lack of visibility into pending invoices and cash commitments, and compliance issues from missing audit trails. This workflow reduces processing time by 80%, eliminates data entry errors, prevents duplicate payments, and provides complete visibility into your payables process.

Setup Instructions

1. Google Drive Setup: Create dedicated folders for invoice intake and configure access permissions
2. PDF Vector Configuration: Set up API credentials with appropriate rate limits for your volume
3. Database Setup: Deploy the provided schema for vendor master and invoice tracking tables
4. Email Integration: Configure IMAP credentials for invoice email monitoring (optional)
5. ERP Connection: Set up API access to your accounting system (QuickBooks, SAP, etc.)
6. Approval Rules: Define approval thresholds and routing rules in the configuration node
7. Notification Setup: Configure Slack/email for approval notifications and alerts

Key Features

- **Multi-Channel Invoice Ingestion**: Automatically collect invoices from Google Drive, email attachments, and API uploads
- **Advanced OCR and AI Extraction**: Process any invoice format including handwritten notes and poor quality scans
- **Vendor Master Integration**: Validate and enrich vendor data, maintaining a clean vendor database
- **3-Way Matching**: Automatically match invoices to purchase orders and goods receipts
- **Dynamic Approval Routing**: Route based on amount, department, vendor, or custom rules
- **Duplicate Detection**: Prevent duplicate payments using fuzzy matching algorithms (see the sketch below)
- **Real-Time Analytics**: Track KPIs like processing time, approval delays, and early payment discounts
- **Exception Handling**: Intelligent routing of problematic invoices for manual review
- **Audit Trail**: Complete tracking of all actions, approvals, and system modifications
- **Payment Scheduling**: Optimize payment timing to capture discounts and manage cash flow

Customization Options

This workflow can be customized to add industry-specific extraction fields, implement GL coding rules based on vendor or amount, create department-specific approval workflows, add currency conversion for international invoices, integrate with additional systems (banks, expense management), configure custom dashboards and reporting, set up vendor portals for invoice status inquiries, and implement machine learning for automatic GL coding suggestions.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
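To illustrate the duplicate-detection idea, here is a hedged sketch of a Code node that flags likely duplicates by normalizing the vendor name and invoice number and comparing amounts within a small tolerance. It is a simplified stand-in for the template's actual matching logic, and the field names are assumptions.

```javascript
// Hedged sketch: flag a newly extracted invoice as a possible duplicate.
// `existingInvoices` would be loaded from your invoice tracking table upstream;
// field names (vendorName, invoiceNumber, total) are illustrative assumptions.
const newInvoice = $input.first().json;
const existingInvoices = newInvoice.existingInvoices ?? [];

const normalize = (s) => String(s ?? '').toLowerCase().replace(/[^a-z0-9]/g, '');

const isDuplicate = existingInvoices.some((inv) =>
  normalize(inv.vendorName) === normalize(newInvoice.vendorName) &&
  normalize(inv.invoiceNumber) === normalize(newInvoice.invoiceNumber) &&
  Math.abs(Number(inv.total) - Number(newInvoice.total)) < 0.01
);

return [{ json: { ...newInvoice, isDuplicate } }];
```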
by Li CHEN
AWS News Analysis and LinkedIn Automation Pipeline

Transform AWS industry news into engaging LinkedIn content with AI-powered analysis and automated approval workflows.

Who's it for

This template is perfect for:

- **Cloud architects and DevOps engineers** who want to stay current with AWS developments
- **Content creators** looking to automate their AWS news coverage
- **Marketing teams** needing consistent, professional AWS content
- **Technical leaders** who want to share industry insights on LinkedIn
- **AWS consultants** building thought leadership through automated content

How it works

This workflow creates a comprehensive AWS news analysis and content generation pipeline with two main flows:

Flow 1: News Collection and Analysis

1. Scheduled RSS Monitoring: Automatically fetches the latest AWS news from the official AWS RSS feed daily at 8 PM
2. AI-Powered Analysis: Uses AWS Bedrock (Claude 3 Sonnet) to analyze each news item, extracting a professional summary, key themes and keywords, an importance rating (Low/Medium/High), and a business impact assessment
3. Structured Data Storage: Saves analyzed news to Feishu Bitable with approval status tracking

Flow 2: LinkedIn Content Generation

1. Manual Approval Trigger: Feishu automation sends approved news items to the webhook
2. AI Content Creation: AWS Bedrock generates professional LinkedIn posts with attention-grabbing headlines, technical insights from a Solutions Architect perspective, business impact analysis, and call-to-action engagement
3. Automated Publishing: Posts directly to LinkedIn with relevant hashtags

How to set up

Prerequisites

- **AWS Bedrock access** with the Claude 3 Sonnet model enabled
- **Feishu account** with Bitable access
- **LinkedIn company account** with posting permissions
- **n8n instance** (self-hosted or cloud)

Detailed Configuration Steps

1. AWS Bedrock Setup

Step 1: Enable the Claude 3 Sonnet Model

1. Log into your AWS Console
2. Navigate to AWS Bedrock
3. Go to Model access in the left sidebar
4. Find Anthropic Claude 3 Sonnet and click Request model access
5. Fill out the access request form (usually approved within minutes)
6. Once approved, verify the model appears in your Model access list

Step 2: Create an IAM User and Credentials

1. Go to the IAM Console
2. Click Users → Create user
3. Name: n8n-bedrock-user
4. Attach policy: AmazonBedrockFullAccess (or create a custom policy with minimal permissions)
5. Go to the Security credentials tab → Create access key
6. Choose Application running outside AWS
7. Download the credentials CSV file

Step 3: Configure in n8n

1. In n8n, go to Credentials → Add credential
2. Select the AWS credential type
3. Enter your Access Key ID and Secret Access Key
4. Set Region to your preferred AWS region (e.g., us-east-1)
5. Test the connection

Useful Links: AWS Bedrock Documentation, Claude 3 Sonnet Model Access, AWS Bedrock Pricing

2. Feishu Bitable Configuration

Step 1: Create a Feishu Account and App

1. Sign up at Feishu International
2. Create a new Bitable (multi-dimensional table)
3. Go to Developer Console → Create App
4. Enable Bitable permissions in your app
5. Generate an App Token and App Secret

Step 2: Create the Bitable Structure

Create a new Bitable with these columns:

- title (Text)
- pubDate (Date)
- summary (Long Text)
- keywords (Multi-select)
- rating (Single Select: Low, Medium, High)
- link (URL)
- approval_status (Single Select: Pending, Approved, Rejected)

Get your App Token and Table ID:

- App Token: Found in app settings
- Table ID: Found in the Bitable URL (tbl...)
Step 3: Set Up the Automation

1. In your Bitable, go to Automation → Create automation
2. Trigger: When field value changes → Select the approval_status field
3. Condition: approval_status equals "Approved"
4. Action: Send HTTP request
   - Method: POST
   - URL: Your n8n webhook URL (from Flow 2)
   - Headers: Content-Type: application/json
   - Body: {{record}} (a parsing sketch for this payload appears at the end of this template)

Step 4: Configure Feishu Credentials in n8n

1. Install the Feishu Lite community node (self-hosted only)
2. Add a Feishu credential with your App Token and App Secret
3. Test the connection

Useful Links: Feishu Developer Documentation, Bitable API Reference, Feishu Automation Guide

3. LinkedIn Company Account Setup

Step 1: Create a LinkedIn App

1. Go to the LinkedIn Developer Portal
2. Click Create App
3. Fill in the app details:
   - App name: AWS News Automation
   - LinkedIn Page: Select your company page
   - App logo: Upload your logo
   - Legal agreement: Accept terms

Step 2: Configure OAuth2 Settings

1. In your app, go to the Auth tab
2. Add redirect URL: https://your-n8n-instance.com/rest/oauth2-credential/callback
3. Request these scopes:
   - w_member_social (Post on behalf of members)
   - r_liteprofile (Read basic profile)
   - r_emailaddress (Read email address)

Step 3: Get Company Page Access

1. Go to your LinkedIn Company Page
2. Navigate to Admin tools → Manage admins
3. Ensure you have the Content admin or Super admin role
4. Note your Company Page ID (found in the page URL)

Step 4: Configure LinkedIn Credentials in n8n

1. Add a LinkedIn OAuth2 credential
2. Enter your Client ID and Client Secret
3. Complete the OAuth2 flow by clicking Connect my account
4. Select your company page for posting

Useful Links: LinkedIn Developer Portal, LinkedIn API Documentation, LinkedIn OAuth2 Guide

4. Workflow Activation

Final Setup Steps:

1. Import the workflow JSON into n8n
2. Configure all credential connections: AWS Bedrock credentials, Feishu credentials, LinkedIn OAuth2 credentials
3. Update the webhook URL in the Feishu automation to match your n8n instance
4. Activate the scheduled trigger (daily at 8 PM)
5. Test with a manual webhook trigger using sample data
6. Verify the Feishu Bitable receives data
7. Test the approval workflow and LinkedIn posting

Requirements

Service Requirements

- **AWS Bedrock** with Claude 3 Sonnet model access
  - AWS account with the Bedrock service enabled
  - IAM user with Bedrock permissions
  - Model access approval for Claude 3 Sonnet
- **Feishu Bitable** for news storage and the approval workflow
  - Feishu account (International or Lark)
  - Developer app with Bitable permissions
  - Automation capabilities for webhook triggers
- **LinkedIn Company Account** for automated posting
  - LinkedIn company page with admin access
  - LinkedIn Developer app with posting permissions
  - OAuth2 authentication setup
- **n8n community nodes**: Feishu Lite node (self-hosted only)

Technical Requirements

- **n8n instance** (self-hosted recommended for community nodes)
- **Webhook endpoint** accessible from the Feishu automation
- **Internet connectivity** for API calls and RSS feeds
- **Storage space** for workflow execution logs

Cost Considerations

- **AWS Bedrock**: ~$0.01-0.05 per news analysis
- **Feishu**: Free tier available, paid plans for advanced features
- **LinkedIn**: Free API access with rate limits
- **n8n**: Self-hosted (free) or cloud subscription

How to customize the workflow

Content Customization

- **Modify AI prompts** in the AI Agent nodes to change tone, focus, or target audience
- **Adjust hashtags** in the LinkedIn posting node for different industries
- **Change scheduling** frequency by modifying the Schedule Trigger settings

Integration Options

- **Replace LinkedIn** with Twitter/X, Facebook, or other social platforms
- **Add Slack notifications** for approved content before posting
- **Integrate with CRM** systems to track content performance
- **Add content calendar** integration for better planning

Advanced Features

- **Multi-language support** by modifying AI prompts for different regions
- **Content categorization** by adding tags for different AWS services
- **Performance tracking** by integrating analytics platforms
- **Team collaboration** by adding approval workflows with multiple reviewers

Technical Modifications

- **Change RSS sources** to monitor other AWS blogs or competitor news
- **Adjust AI models** to use different Bedrock models or external APIs
- **Add data validation** nodes for better error handling
- **Implement retry logic** for failed API calls

Important Notes

Service Limitations

- This template uses community nodes (Feishu Lite) and requires self-hosted n8n
- **Geo-restrictions** may apply to AWS Bedrock models in certain regions
- **Rate limits** may affect high-frequency posting - adjust scheduling accordingly
- **Content moderation** is recommended before automated posting
- **Cost considerations**: Each AI analysis costs approximately $0.01-0.05 USD per news item

Troubleshooting Common Issues

AWS Bedrock Issues:

- **Model not found**: Ensure Claude 3 Sonnet access is approved in your region
- **Access denied**: Verify IAM permissions include Bedrock service access
- **Rate limiting**: Implement retry logic or reduce analysis frequency

Feishu Integration Issues:

- **Authentication failed**: Check that the App Token and App Secret are correct
- **Table not found**: Verify the Table ID matches your Bitable URL
- **Automation not triggering**: Ensure the webhook URL is accessible and returns a 200 status

LinkedIn Posting Issues:

- **OAuth2 errors**: Re-authenticate LinkedIn credentials
- **Posting failed**: Verify company page admin permissions
- **Rate limits**: LinkedIn has daily posting limits for company pages

Security Best Practices

- **Never hardcode credentials** in workflow nodes
- **Use environment variables** for sensitive configuration
- **Regularly rotate API keys** and access tokens
- **Monitor API usage** to prevent unexpected charges
- **Implement error handling** for failed API calls
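To illustrate how the Flow 2 webhook could turn the Feishu automation's {{record}} payload into input for the LinkedIn post generator, here is a hedged Code-node sketch. The exact shape of the record Feishu sends depends on your automation configuration, so treat the field access below as an assumption to adapt.

```javascript
// Hedged sketch: normalize the approved Feishu record for the LinkedIn content generator.
// The payload shape is an assumption based on the Bitable columns defined above
// (title, summary, keywords, rating, link); adjust the paths to match what Feishu actually sends.
const record = $input.first().json.body?.record ?? $input.first().json;

return [{
  json: {
    title: record.title,
    summary: record.summary,
    keywords: Array.isArray(record.keywords) ? record.keywords.join(', ') : record.keywords,
    rating: record.rating,
    sourceLink: record.link,
  },
}];
```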
by Rully Saputra
AI Job Matcher with Decodo, Gemini AI & Resume Analysis

Sign up for Decodo to get better pricing here.

Who's it for

This workflow is built for job seekers, recruiters, founders, automation builders, and data engineers who want to automate job discovery and intelligently match job listings against resumes using AI. It's ideal for anyone building job boards, candidate matching systems, hiring pipelines, or personal job alert automations using n8n.

What this workflow does

This workflow automatically scrapes job listings from SimplyHired using Decodo residential proxies, extracts structured job data with a Gemini AI agent, downloads resumes from Google Drive, extracts and summarizes resume content, and surfaces the most relevant job opportunities. The workflow stores structured results in a database and sends real-time notifications via Telegram, creating a scalable and low-maintenance AI-powered job matching pipeline.

How it works

1. A schedule trigger starts the workflow automatically
2. Decodo fetches job search result pages from SimplyHired
3. Job card HTML is extracted from the page
4. A Gemini AI agent converts the raw HTML into structured job data
5. Resume PDFs are downloaded from Google Drive
6. Resume text is extracted from the PDF files
7. A Gemini AI agent summarizes key resume highlights
8. Job and resume data are stored in a database
9. Matching job alerts are sent via Telegram

How to set up

1. Add your Decodo API credentials
2. Add your Google Gemini API key
3. Connect Google Drive for resume access
4. Configure your Telegram bot
5. Set up your database (Google Sheets by default)
6. Update the job search URL with your keywords and location

Requirements

- Self-hosted n8n instance
- Decodo account (community node)
- Google Gemini API access
- Google Drive access
- Telegram Bot token
- Google Sheets or another database

> Note: This template uses a community node (Decodo) and is intended for self-hosted n8n only.

How to customize the workflow

- Replace SimplyHired with another job board or aggregator
- Add job–resume matching or scoring logic (a sketch follows this list)
- Extend the resume summary with custom fields
- Swap Google Sheets for PostgreSQL, Supabase, or Airtable
- Route notifications to Slack, Email, or Webhooks
- Add pagination or multi-resume processing
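A hedged sketch of the optional job–resume scoring customization: a Code node that counts overlapping keywords between the resume summary and each job listing and keeps the best matches. The field names (resumeSkills, jobs, description) are illustrative assumptions, not the template's actual data shape.

```javascript
// Hedged sketch: naive keyword-overlap scoring between resume skills and job descriptions.
// Field names are assumptions; adapt them to the structured data your Gemini agent produces.
const { resumeSkills = [], jobs = [] } = $input.first().json;

const skillSet = new Set(resumeSkills.map((s) => s.toLowerCase()));

const scored = jobs.map((job) => {
  const words = new Set(String(job.description ?? '').toLowerCase().match(/[a-z+#.]+/g) ?? []);
  const matches = [...skillSet].filter((skill) => words.has(skill));
  return { ...job, matchScore: matches.length, matchedSkills: matches };
});

// Keep only jobs that match at least two resume skills, best first.
const relevant = scored
  .filter((j) => j.matchScore >= 2)
  .sort((a, b) => b.matchScore - a.matchScore);

return relevant.map((job) => ({ json: job }));
```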
by Colton Randolph
This n8n workflow automatically scrapes TechCrunch articles, filters for AI-related content using OpenAI, and delivers curated summaries to your Slack channels. Perfect for individuals or teams who need to stay current on artificial intelligence developments without manually browsing tech news sites.

Who's it for

- AI product teams tracking industry developments and competitive moves
- Tech investors monitoring AI startup coverage and funding announcements
- Marketing teams following AI trends for content and positioning strategies
- Executives needing daily AI industry briefings without manual research overhead
- Development teams staying current on AI tools, frameworks, and breakthrough technologies

How it works

The workflow runs on a daily schedule, crawling a specified number of TechCrunch articles from the current year. Firecrawl extracts clean markdown content while bypassing anti-bot measures and handling JavaScript rendering automatically.

Each article gets analyzed by an AI research assistant that determines if the content relates to artificial intelligence, machine learning, AI companies, or AI technology. Articles marked as "NOT_AI_RELATED" get filtered out automatically (see the sketch after this section).

For AI-relevant articles, OpenAI generates focused 3-bullet-point summaries that capture key insights. These summaries get delivered to your specified Slack channel with the original TechCrunch article title and source link for deeper reading.

How to set up

1. Configure Firecrawl: Add your Firecrawl API key to the HTTP Request node
2. Set OpenAI credentials: Add your OpenAI API key to the AI Agent node
3. Connect Slack: Configure your Slack webhook URL and target channel
4. Adjust scheduling: Set your preferred trigger frequency (daily recommended)
5. Test the workflow: Run manually to verify article extraction and Slack delivery

Requirements

- **Firecrawl account** with API access for TechCrunch web scraping
- **OpenAI API key** for AI content analysis and summarization
- **Slack workspace** with webhook permissions for message delivery
- **n8n instance** (cloud or self-hosted) for workflow execution

How to customize the workflow

- Source expansion: Modify the HTTP node URL to target additional tech publications beyond TechCrunch, or adjust the article limit and date filtering for different coverage needs.
- AI focus refinement: Update the OpenAI prompt to focus on specific AI verticals like generative AI, robotics, or ML infrastructure. Add company names or technology terms to the relevance filtering logic.
- Summary formats: Change from 3-bullet summaries to executive briefs, technical analyses, or competitive intelligence reports by modifying the OpenAI summarization prompt.
- Multi-channel delivery: Extend beyond Slack to email notifications, Microsoft Teams, or database storage for historical trend analysis and executive dashboards.
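As a reference, the relevance filter can be as small as the hedged Code-node sketch below, which drops items whose AI classification came back as NOT_AI_RELATED. The field name `classification` is an assumption; the template's actual AI Agent output key may differ.

```javascript
// Hedged sketch: drop articles the AI classifier marked as not AI-related.
// Assumes each item carries the classifier's answer in `classification`; adjust to your output field.
const items = $input.all();

const aiRelated = items.filter((item) => {
  const verdict = String(item.json.classification ?? '').trim().toUpperCase();
  return verdict !== 'NOT_AI_RELATED';
});

return aiRelated;
```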
by Robert Breen
This n8n workflow template automatically processes phone interview transcripts using AI to evaluate candidates against specific criteria and saves the results to Google Sheets. Perfect for HR departments, recruitment agencies, or any business conducting phone screenings.

What This Workflow Does

This automated workflow:

- Receives phone interview transcripts via webhook
- Uses OpenAI GPT models to analyze candidate responses against predefined qualification criteria
- Extracts key information (name, phone, location, qualification status)
- Automatically saves structured results to a Google Sheet for easy review and follow-up

The workflow is specifically designed for driving job interviews but can be easily adapted for any position with custom evaluation criteria.

Tools & Services Used

- **n8n** - Workflow automation platform
- **OpenAI API** - AI-powered transcript analysis (GPT-4o-mini)
- **Google Sheets** - Data storage and management
- **Webhook** - Receiving transcript data

Prerequisites

Before implementing this workflow, you'll need:

1. N8N Instance - Self-hosted or cloud version
2. OpenAI API Account - For AI transcript processing
3. Google Account - For Google Sheets integration
4. Phone Interview System - That can send webhooks (like Vapi.ai)

Step-by-Step Setup Instructions

Step 1: Set Up OpenAI API Access

1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Generate a new API key
5. Copy and securely store your API key

Step 2: Create Your Google Sheet

Option 1: Use Our Pre-Made Template (Recommended)

1. Copy our template: Driver Interview Results Template
2. Click "File" → "Make a copy" to create your own version
3. Rename it as desired
4. Copy your new sheet's URL - you'll need this for the workflow

Option 2: Create From Scratch

1. Go to Google Sheets
2. Create a new spreadsheet
3. Name it "Driver Interview Results" (or your preferred name)
4. Set up the following column headers in row 1: A1: name, B1: phone, C1: cityState, D1: qualifies, E1: reasoning
5. Copy the Google Sheet URL - you'll need this for the workflow

Step 3: Import and Configure the N8N Workflow

Import the Workflow

1. Copy the workflow JSON from the template
2. In your N8N instance, go to Workflows → Import from JSON
3. Paste the JSON and import

Configure OpenAI Credentials

1. Click on either "OpenAI Chat Model" node
2. Set up credentials using your OpenAI API key
3. Test the connection to ensure it works

Configure Google Sheets Integration

1. Click on the "Save to Google Sheets" node
2. Set up Google Sheets OAuth2 credentials
3. Select your spreadsheet from the dropdown
4. Choose the correct sheet (usually "Sheet1")

Update the Webhook

1. Click on the "Webhook" node
2. Note the webhook URL that n8n generates
3. This URL will receive your transcript data

Step 4: Customize Evaluation Criteria

The workflow includes predefined criteria for a Massachusetts driving job. To customize for your needs:

1. Click on the "Evaluate Candidate" node
2. Modify the system message to include your specific requirements
3. Update the evaluation criteria checklist
4. Adjust the JSON output format if needed

Current Evaluation Criteria:

- Valid Massachusetts driver's license
- No felony convictions
- Clean driving record (no recent tickets/accidents)
- Willing to complete background check
- Can pass drug test (including marijuana)
- Available full-time Monday-Friday
- Lives in Massachusetts

Step 5: Connect to Vapi.ai (Phone Interview System)

This workflow is specifically designed to work with Vapi.ai's phone interview system.
Here's how to connect it:

Setting Up the Vapi Integration

Copy Your N8N Webhook URL

1. In your n8n workflow, click on the "Webhook" node
2. Copy the webhook URL (it should look like: https://your-n8n-instance.com/webhook-test/351ffe7c-69f2-4657-b593-c848d59205c0)

Configure Your Vapi Assistant

1. Log into your Vapi.ai dashboard
2. Create or edit your phone interview assistant
3. In the assistant settings, find the "Server" section
4. Set the Server URL to your n8n webhook URL
5. Set the timeout to 20 seconds (as configured in the workflow)

Configure Server Messages

In your Vapi assistant settings, enable these server messages:

- end-of-call-report
- transcript[transcriptType="final"]

Set Up the Interview Script

- Use the provided interview script in your Vapi assistant (found in the workflow's system message)
- This ensures consistent data collection for the AI evaluation

Expected Data Format from Vapi

The workflow expects Vapi to send data in this specific format:

{
  "body": {
    "message": {
      "artifact": {
        "transcript": "AI: Hi. Are you interested in driving for Bank of Transport?\nUser: Yes.\nAI: Great. Before we go further..."
      }
    }
  }
}

Vapi Configuration Checklist

- ✅ Webhook URL set in Vapi assistant server settings
- ✅ Server messages enabled: end-of-call-report, transcript[transcriptType="final"]
- ✅ Interview script configured in assistant
- ✅ Assistant set to send webhooks on call completion

Alternative Phone Systems

If you're not using Vapi.ai, you can adapt this workflow for other phone systems by:

1. Modifying the "Edit Fields2" node to extract transcripts from your system's data format
2. Updating the webhook data structure expectations
3. Ensuring your phone system sends the complete interview transcript

Step 6: Test the Workflow

Test with Sample Data

1. Use the "Execute Workflow" button with test data
2. Verify that data appears correctly in your Google Sheet
3. Check that the AI evaluation logic works as expected

End-to-End Testing

1. Send a test webhook with a real transcript
2. Monitor each step of the workflow
3. Confirm the final result is saved to Google Sheets

Workflow Node Breakdown

1. Webhook - Receives transcript data from your phone system
2. Edit Fields2 - Extracts the transcript from the incoming data
3. Evaluate Candidate - AI analysis using GPT-4o-mini to assess qualification
4. Convert to JSON - Ensures proper JSON formatting with the structured output parser (a sketch of the expected output shape follows below)
5. Save to Google Sheets - Automatically logs results to your spreadsheet

Customization Options

Modify Evaluation Criteria

- Edit the system prompt in the "Evaluate Candidate" node
- Add or remove qualification requirements
- Adjust the scoring logic

Change Output Format

- Modify the JSON schema in the "Structured Output Parser" node
- Update the Google Sheets column mapping accordingly

Add Additional Processing

- Insert nodes for email notifications
- Add Slack/Discord alerts for qualified candidates
- Integrate with your CRM or ATS system

Troubleshooting

Common Issues:

- **OpenAI API Errors**: Check API key validity and billing status
- **Google Sheets Not Updating**: Verify OAuth permissions and sheet access
- **Webhook Not Receiving Data**: Confirm the URL and POST format from your phone system
- **AI Evaluation Inconsistencies**: Refine the system prompt with more specific criteria

Usage Tips

- **Monitor Token Usage**: OpenAI charges per token, so monitor your usage
- **Regular Review**: Periodically review AI evaluations for accuracy
- **Backup Data**: Export Google Sheets data regularly for backup
- **Privacy Compliance**: Ensure transcript handling complies with local privacy laws
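For reference, the structured evaluation that ends up in Google Sheets follows the five columns defined earlier (name, phone, cityState, qualifies, reasoning). Below is a hedged Code-node sketch that validates and normalizes that object before it is written to the sheet; it is an optional addition, not a node in the shipped template.

```javascript
// Hedged sketch: normalize the parsed evaluation before the Google Sheets node.
// The five fields mirror the sheet columns (name, phone, cityState, qualifies, reasoning).
const evaluation = $input.first().json;

const normalized = {
  name: String(evaluation.name ?? '').trim(),
  phone: String(evaluation.phone ?? '').replace(/[^\d+]/g, ''),
  cityState: String(evaluation.cityState ?? '').trim(),
  qualifies: evaluation.qualifies === true || String(evaluation.qualifies).toLowerCase() === 'yes',
  reasoning: String(evaluation.reasoning ?? '').trim(),
};

if (!normalized.name || !normalized.phone) {
  // Flag incomplete evaluations so they can be reviewed manually.
  normalized.reasoning = `[INCOMPLETE DATA] ${normalized.reasoning}`;
}

return [{ json: normalized }];
```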
Need Help with Implementation?

For professional setup, customization, or troubleshooting of this workflow, contact:

Robert - Ynteractive Solutions

- **Email**: rbreen@ynteractive.com
- **Website**: www.ynteractive.com
- **LinkedIn**: linkedin.com/in/robert-interactive

Specializing in AI-powered workflow automation, business process optimization, and custom integration solutions.
by vinci-king-01
Sales Pipeline Automation Dashboard with AI Lead Intelligence

🎯 Target Audience

- Sales managers and team leads
- Business development representatives
- Marketing teams managing lead generation
- CRM administrators and sales operations
- Account executives and sales representatives
- Sales enablement professionals
- Revenue operations (RevOps) teams

🚀 Problem Statement

Manual lead qualification and sales pipeline management is inefficient and often leads to missed opportunities or poor lead prioritization. This template solves the challenge of automatically scoring, qualifying, and routing leads using AI-powered intelligence to maximize conversion rates and sales team productivity.

🔧 How it Works

This workflow automatically processes new leads using AI-powered intelligence, scores and qualifies them based on multiple factors, and automates the entire sales pipeline from lead capture to deal creation.

Key Components

1. Dual Trigger System - Scheduled monitoring and webhook triggers for real-time lead processing
2. AI-Powered Lead Intelligence - Advanced scoring algorithm based on 7 key factors
3. Multi-Source Data Enrichment - LinkedIn and Crunchbase integration for comprehensive lead profiles
4. Automated Sales Actions - Intelligent routing, task creation, and follow-up sequences
5. Multi-Platform Integration - HubSpot CRM, Slack notifications, and Google Sheets dashboard

📊 Google Sheets Column Specifications

The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| timestamp | DateTime | When the lead was processed | "2024-01-15T10:30:00Z" |
| lead_id | String | Unique lead identifier | "LEAD-2024-001234" |
| first_name | String | Lead's first name | "John" |
| last_name | String | Lead's last name | "Smith" |
| email | String | Lead's email address | "john@company.com" |
| company_name | String | Company name | "Acme Corp" |
| job_title | String | Lead's job title | "Marketing Director" |
| lead_score | Number | AI-calculated score (0-100) | 85 |
| grade | String | Lead grade (A+, A, B+, B, C+) | "A+" |
| category | String | Lead category | "Enterprise" |
| priority | String | Priority level | "Critical" |
| lead_source | String | How the lead was acquired | "Website Form" |
| assigned_rep | String | Assigned sales representative | "Senior AE" |
| company_size | String | Company employee count | "201-500 employees" |
| industry | String | Company industry | "Technology" |
| funding_stage | String | Company funding stage | "Series B" |
| estimated_value | String | Estimated deal value | "$50K-100K" |

🛠️ Setup Instructions

Estimated setup time: 25-30 minutes

Prerequisites

- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- HubSpot CRM account with API access
- Google Sheets account with API access
- Slack workspace for notifications (optional)
- Email service for welcome emails (optional)

Step-by-Step Configuration

1. Install Community Nodes

Install the required community nodes:

npm install n8n-nodes-scrapegraphai
npm install n8n-nodes-slack

2. Configure ScrapeGraphAI Credentials

- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

3. Set up HubSpot CRM Integration

- Add HubSpot API credentials
- Grant necessary permissions for contacts, deals, and tasks
- Configure custom properties for lead scoring and qualification
- Test the connection to ensure it's working
4. Set up Google Sheets Connection

- Add Google Sheets OAuth2 credentials
- Grant necessary permissions for spreadsheet access
- Create a new spreadsheet for sales pipeline data
- Configure the sheet name (default: "Sales Pipeline")

5. Configure Lead Scoring Parameters

- Update the lead scoring weights in the Code node (see the sketch after this section)
- Customize ideal customer profile criteria
- Set automation trigger thresholds
- Adjust sales rep assignment logic

6. Set up Notification Channels

- Configure Slack webhook or API credentials
- Set up email service credentials for welcome emails
- Define notification preferences for different lead grades
- Test notification delivery

7. Configure Triggers

- Set up the webhook endpoint for real-time lead capture
- Configure the scheduled trigger for periodic monitoring
- Choose appropriate time zones for your business hours
- Test both trigger mechanisms

8. Test and Validate

- Run the workflow manually with sample lead data
- Check HubSpot for proper contact and deal creation
- Verify Google Sheets data formatting
- Test all notification channels

🔄 Workflow Customization Options

Modify Lead Scoring Algorithm

- Adjust scoring weights for different factors
- Add new scoring criteria (geographic location, technology stack, etc.)
- Customize ideal customer profile parameters
- Implement industry-specific scoring models

Extend Data Enrichment

- Add more data sources (ZoomInfo, Apollo, etc.)
- Include social media presence analysis
- Add technographic data collection
- Implement intent signal detection

Customize Sales Automation

- Modify follow-up sequences for different lead categories
- Add more sophisticated sales rep assignment logic
- Implement territory-based routing
- Add automated meeting scheduling

Output Customization

- Add data visualization and reporting features
- Implement sales pipeline analytics
- Create executive dashboards with key metrics
- Add conversion rate tracking and analysis

📈 Use Cases

- **Lead Qualification**: Automatically score and qualify incoming leads
- **Sales Pipeline Management**: Streamline the entire sales process
- **Lead Routing**: Intelligently assign leads to appropriate sales reps
- **Follow-up Automation**: Ensure consistent and timely follow-up
- **Sales Intelligence**: Provide comprehensive lead insights
- **Performance Tracking**: Monitor sales team and pipeline performance

🚨 Important Notes

- Respect LinkedIn and Crunchbase terms of service and rate limits
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update your lead scoring parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Ensure GDPR compliance for lead data processing

🔧 Troubleshooting

Common Issues:

- ScrapeGraphAI connection errors: Verify API key and account status
- HubSpot API errors: Check API key and permissions
- Google Sheets permission errors: Check OAuth2 scope and permissions
- Lead scoring errors: Review the Code node's JavaScript logic
- Rate limiting: Adjust request frequency and implement delays

Support Resources:

- ScrapeGraphAI documentation and API reference
- HubSpot API documentation and developer resources
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
- Sales automation best practices and guidelines
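A hedged sketch of how the lead scoring Code node could weight factors and derive the score, grade, and priority columns described above. The weights, thresholds, and input field names are illustrative assumptions to adjust against your own ideal customer profile, not the template's exact logic.

```javascript
// Hedged sketch: weighted lead scoring producing lead_score, grade, and priority.
// Weights, thresholds, and field names are illustrative assumptions.
const lead = $input.first().json;

const weights = { jobTitle: 25, companySize: 20, industry: 15, fundingStage: 15, leadSource: 10, emailDomain: 10, engagement: 5 };

let score = 0;
if (/director|vp|head|chief/i.test(lead.job_title ?? '')) score += weights.jobTitle;
if (/201-500|501-1000|1000\+/.test(lead.company_size ?? '')) score += weights.companySize;
if ((lead.industry ?? '') === 'Technology') score += weights.industry;
if (/series [b-z]/i.test(lead.funding_stage ?? '')) score += weights.fundingStage;
if ((lead.lead_source ?? '') === 'Website Form') score += weights.leadSource;
if (!/gmail|yahoo|outlook/i.test(lead.email ?? '')) score += weights.emailDomain; // business domain
if (lead.engaged === true) score += weights.engagement;

const grade = score >= 85 ? 'A+' : score >= 70 ? 'A' : score >= 55 ? 'B+' : score >= 40 ? 'B' : 'C+';
const priority = score >= 85 ? 'Critical' : score >= 55 ? 'High' : 'Normal';

return [{ json: { ...lead, lead_score: score, grade, priority } }];
```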
by Trung Tran
Automating AWS S3 Operations with n8n: Buckets, Folders, and Files

Watch the demo video below:

This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

Who's it for

This workflow is ideal for:

- **Designers & marketers** who need quick, on-demand AI-generated visuals.
- **Developers & automation builders** exploring AI-driven workflows integrated with cloud storage.
- **Educators or trainers** creating tutorials or exercises on AI image generation.
- **Businesses** looking to automate image content pipelines with AWS S3 storage.

How it works / What it does

1. Trigger: The workflow starts manually when you click "Execute Workflow".
2. Edit Fields: You can provide input fields such as image description, resolution, or naming convention.
3. Create AWS S3 Bucket: Automatically creates a new S3 bucket if it doesn't exist.
4. Create a Folder: Inside the bucket, a folder is created to organize generated images.
5. Prompt Generation Agent: An AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. Generate an Image: The refined prompt is used to generate an image using AI.
7. Upload File to S3: The generated image is uploaded to the AWS S3 bucket for secure storage.

This workflow showcases how to combine AI + Cloud Storage seamlessly in an automated pipeline.

How to set up

1. Import the workflow into n8n.
2. Configure the following credentials: AWS S3 (Access Key, Secret Key, Region) and OpenAI API Key (for Chat + Image models).
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

Requirements

- **n8n** (latest version with AI Tool Node support).
- **AWS account** with S3 permissions to create buckets and upload files.
- **OpenAI API key** (for prompt refinement and image generation).
- Basic familiarity with AWS S3 structure (buckets, folders, objects).

How to customize the workflow

- **Custom Buckets**: Replace the auto-create step with an existing S3 bucket.
- **Image Variations**: Generate multiple image variations per prompt by looping the image generation step.
- **File Naming**: Adjust file naming conventions (e.g., timestamp, user input); a naming sketch follows this list.
- **Metadata**: Add metadata such as tags, categories, or owner info when uploading to S3.
- **Alternative Storage**: Swap AWS S3 with Google Cloud Storage, Azure Blob, or Dropbox.
- **Trigger Options**: Replace the manual trigger with a Webhook, Form Submission, or Scheduler for automation.

✅ This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
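A hedged sketch of the file-naming customization: a small Code node that builds the S3 object key from a folder name, a timestamp, and a slug of the prompt. The folder and field names are illustrative assumptions, not the template's actual Edit Fields values.

```javascript
// Hedged sketch: derive an S3 object key for the generated image.
// `folder` and `imageDescription` are illustrative field names; adapt to your Edit Fields node.
const { folder = 'generated-images', imageDescription = 'image' } = $input.first().json;

const slug = imageDescription
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .replace(/^-+|-+$/g, '')
  .slice(0, 40);

const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const key = `${folder}/${timestamp}-${slug}.png`;

return [{ json: { key } }];
```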
by Obsidi8n
How it Works

This n8n template makes it possible to send emails directly from your Obsidian notes. It leverages the power of the Obsidian Post Webhook plugin, allowing seamless integration between your notes and the email workflow.

What it does:

- Receives note content and metadata from Obsidian via a Webhook.
- Parses YAML frontmatter to define email recipients, subject, and more (see the sketch below).
- Automatically processes attachments, encoding them into an email-friendly format.
- Sends emails via Gmail and confirms the status back to Obsidian.
- Includes a testing feature to verify everything works before going live.

Set-up Steps

1. Webhook Configuration: Set your n8n POST Webhook URL in the Obsidian Post Webhook plugin settings.
2. Email Integration: Submit the Gmail credentials in the n8n email nodes.
3. Test the Workflow: Run a test from Obsidian to ensure the template functions correctly.
4. Activate and Enjoy: Start sending customized emails with attachments from your notes in no time!
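For illustration, frontmatter parsing in a Code node could look like the hedged sketch below, which pulls `to` and `subject` keys out of a simple YAML frontmatter block at the top of the note. It handles only flat key: value pairs and assumes the note body arrives in a `content` field; the template's own parsing may differ.

```javascript
// Hedged sketch: extract simple YAML frontmatter (flat key: value pairs) from the note content.
// Assumes the webhook delivers the note text in `content`; nested YAML is not handled here.
const note = $input.first().json.content ?? '';

const match = note.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
const frontmatter = {};
let body = note;

if (match) {
  body = match[2];
  for (const line of match[1].split('\n')) {
    const [key, ...rest] = line.split(':');
    if (key && rest.length) frontmatter[key.trim()] = rest.join(':').trim();
  }
}

return [{
  json: {
    to: frontmatter.to,
    subject: frontmatter.subject ?? 'Note from Obsidian',
    body,
  },
}];
```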
by CustomJS
This n8n template shows how to extract selected pages from a generated PDF with the PDF Toolkit by www.customjs.space.

@custom-js/n8n-nodes-pdf-toolkit

Notice

Community nodes can only be installed on self-hosted instances of n8n.

What this workflow does

- **Downloads** each PDF using an HTTP Request.
- **Extracts** pages from the PDF file as needed.

Requirements

- **Self-hosted** n8n instance
- **CustomJS API key** for extracting pages from PDF files
- **PDF files** from which the selected pages will be extracted

Workflow Steps

1. Manual Trigger: Runs with user interaction.
2. Download PDF File: Pass the URLs of the PDF files to process.
3. Extract Pages from PDF: Extract selected pages from the downloaded PDF.

Usage

Get an API key from CustomJS

1. Sign up to the CustomJS platform.
2. Navigate to your profile page.
3. Press the "Show" button to get your API key.

Set Credentials for the CustomJS API on n8n

- Copy and paste the API key generated from CustomJS here.

Design workflow

- A Manual Trigger for starting the workflow.
- HTTP Request nodes for downloading the PDF files.
- Extract Pages from PDF.

You can replace the logic for triggering and returning results. For example, you can trigger this workflow by calling a webhook and get the result back as the webhook response. Simply replace the Manual Trigger and Write to Disk nodes.

Perfect for

- Keeping a record of specific pages from PDF files.
- Splitting a PDF file into multiple parts.