by PDF Vector
**Overview**

Transform your accounts payable department with this enterprise-grade invoice processing solution. This workflow automates the entire invoice lifecycle, from document ingestion through payment processing. It handles invoices from multiple sources (Google Drive, email attachments, API submissions), extracts data using AI, validates against purchase orders, routes invoices for the appropriate approvals based on amount thresholds, and integrates seamlessly with your ERP system. The solution includes vendor master data management, duplicate invoice detection, real-time spend analytics, and complete audit trails for compliance.

**What You Can Do**

This workflow creates an intelligent invoice processing pipeline that monitors multiple input channels (Google Drive, email, webhooks) for new invoices and automatically extracts data from PDFs, images, and scanned documents using AI. It validates vendor information against your master database, matches invoices to purchase orders, and detects discrepancies. The workflow implements multi-level approval routing based on invoice amount and department, prevents duplicate payments through intelligent matching, and integrates with QuickBooks, SAP, or other ERP systems. It also generates real-time dashboards showing processing metrics and cash-flow insights while sending automated reminders for pending approvals.

**Who It's For**

Perfect for medium to large businesses, accounting departments, and financial service providers processing more than 100 invoices monthly across multiple vendors. Ideal for organizations that need to enforce approval hierarchies and spending limits, require integration with existing ERP/accounting systems, want to reduce processing time from days to minutes, need audit trails and compliance reporting, and want to eliminate manual data entry errors and duplicate payments.

**The Problem It Solves**

Manual invoice processing creates significant operational challenges: data entry errors (3-5% error rate), processing delays (8-10 days per invoice), duplicate payments (0.1-0.5% of invoices), approval bottlenecks causing late fees, no visibility into pending invoices and cash commitments, and compliance gaps from missing audit trails. This workflow reduces processing time by 80%, eliminates data entry errors, prevents duplicate payments, and provides complete visibility into your payables process.

**Setup Instructions**

- **Google Drive Setup**: Create dedicated folders for invoice intake and configure access permissions
- **PDF Vector Configuration**: Set up API credentials with appropriate rate limits for your volume
- **Database Setup**: Deploy the provided schema for the vendor master and invoice tracking tables
- **Email Integration**: Configure IMAP credentials for invoice email monitoring (optional)
- **ERP Connection**: Set up API access to your accounting system (QuickBooks, SAP, etc.)
- **Approval Rules**: Define approval thresholds and routing rules in the configuration node
- **Notification Setup**: Configure Slack/email for approval notifications and alerts

**Key Features**

- **Multi-Channel Invoice Ingestion**: Automatically collect invoices from Google Drive, email attachments, and API uploads
- **Advanced OCR and AI Extraction**: Process any invoice format, including handwritten notes and poor-quality scans
- **Vendor Master Integration**: Validate and enrich vendor data, maintaining a clean vendor database
- **3-Way Matching**: Automatically match invoices to purchase orders and goods receipts
- **Dynamic Approval Routing**: Route based on amount, department, vendor, or custom rules
- **Duplicate Detection**: Prevent duplicate payments using fuzzy matching algorithms (see the sketch after this section)
- **Real-Time Analytics**: Track KPIs such as processing time, approval delays, and early payment discounts
- **Exception Handling**: Intelligently route problematic invoices for manual review
- **Audit Trail**: Complete tracking of all actions, approvals, and system modifications
- **Payment Scheduling**: Optimize payment timing to capture discounts and manage cash flow

**Customization Options**

This workflow can be customized to add industry-specific extraction fields, implement GL coding rules based on vendor or amount, create department-specific approval workflows, add currency conversion for international invoices, integrate with additional systems (banks, expense management), configure custom dashboards and reporting, set up vendor portals for invoice status inquiries, and implement machine learning for automatic GL coding suggestions.

**Note**: This workflow uses the PDF Vector community node. Install it from the n8n community nodes collection before using this template.
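The fuzzy duplicate check can live in an n8n Code node. Below is a minimal sketch, not the template's actual implementation: it assumes incoming items carry `vendorName`, `invoiceNumber`, `amount`, and `invoiceDate` fields, and that a prior node (hypothetically named "Get Processed Invoices") has fetched already-processed invoices.

```javascript
// Hypothetical n8n Code node: flag likely duplicate invoices.
// Field and node names are illustrative, not from the template.
const seen = $("Get Processed Invoices").all().map((i) => i.json);

// Normalize strings so "INV-001 " and "inv001" compare equal.
const norm = (s) => String(s || "").toLowerCase().replace(/[^a-z0-9]/g, "");

for (const item of $input.all()) {
  const inv = item.json;
  item.json.isDuplicate = seen.some(
    (p) =>
      norm(p.vendorName) === norm(inv.vendorName) &&
      (norm(p.invoiceNumber) === norm(inv.invoiceNumber) ||
        // Fuzzy fallback: same vendor, same amount, dates within 7 days.
        (Math.abs(p.amount - inv.amount) < 0.01 &&
          Math.abs(new Date(p.invoiceDate) - new Date(inv.invoiceDate)) <
            7 * 24 * 60 * 60 * 1000))
  );
}

return $input.all();
```

Flagged items can then be routed to the exception-handling branch instead of payment.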
by David Olusola
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**WordPress to Blotato Social Publisher**

**Overview**: This automation monitors your WordPress site for new posts and automatically creates platform-specific social media content using AI, then posts to Twitter, LinkedIn, and Facebook via Blotato.

**What it does:**

- Monitors the WordPress site for new posts every 30 minutes
- Filters to posts published in the last hour to avoid duplicates (see the sketch after this section)
- Processes each new post individually
- AI generates optimized content for each social platform (Twitter, LinkedIn, Facebook)
- Extracts platform-specific content from the AI response
- Publishes to all three social media platforms via the Blotato API

**Setup Required:**

WordPress Connection
- Configure WordPress credentials in the "Check New Posts" node
- Enter your WordPress site URL, username, and password/app password

Blotato Social Media API Setup
- Get your Blotato API key from your Blotato account
- Configure API credentials in the Blotato connection node
- Map each platform (Twitter, LinkedIn, Facebook) to the correct Blotato channel

AI Configuration
- Set up Google Gemini API credentials
- Connect the Gemini model to the "AI Social Content Creator" node

**Customization Options:**

- Posting Frequency: Modify the schedule trigger (default: every 30 minutes)
- Content Tone: Adjust the AI system message for different writing styles
- Post Filtering: Change the time window in the WordPress node (default: last hour)
- Platform Selection: Remove any social media platforms you don't want to use

**Testing:**

- Run the workflow manually to test connections
- Verify posts appear correctly on all platforms
- Monitor for API rate limit issues

**Features:**

- Platform-optimized content (hashtags, character limits, professional tone)
- Duplicate prevention system
- Batch processing for multiple posts
- Featured image support
- Customizable posting frequency

**Customization:**

- Change monitoring frequency
- Adjust AI prompts for different tones
- Add/remove social platforms
- Modify hashtag strategies

**Need Help?** Reach out for n8n coaching or one-on-one consultation.
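The "last hour" duplicate guard is a simple timestamp comparison. Here is a minimal sketch of how it might look in an n8n Code node, assuming posts arrive from the WordPress REST API with a `date_gmt` field:

```javascript
// Hypothetical n8n Code node: keep only posts published in the last
// hour so the 30-minute polling schedule never re-shares old content.
const windowMs = 60 * 60 * 1000; // one hour, matching the template default
const cutoff = Date.now() - windowMs;

return $input.all().filter((item) => {
  // WordPress returns date_gmt without a timezone suffix; treat as UTC.
  const published = new Date(item.json.date_gmt + "Z").getTime();
  return published >= cutoff;
});
```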
by vinci-king-01
**How it works**

This workflow automatically analyzes website visitors in real time, enriches their data with company intelligence, and provides lead scoring and sales alerts.

**Key Steps**

1. Webhook Trigger - Receives visitor data from your website tracking system.
2. AI-Powered Company Intelligence - Uses ScrapeGraphAI to extract comprehensive company information from visitor domains.
3. Visitor Enrichment - Combines visitor behavior data with company intelligence to create detailed visitor profiles.
4. Lead Scoring - Automatically scores leads based on company size, industry, engagement, and intent signals (see the sketch after this section).
5. CRM Integration - Updates your CRM with enriched visitor data and lead scores.
6. Sales Alerts - Sends real-time notifications to your sales team for high-priority leads.

**Set up steps**

Setup time: 10-15 minutes

1. Configure ScrapeGraphAI credentials - Add your ScrapeGraphAI API key for company intelligence gathering.
2. Set up the HubSpot connection - Connect your HubSpot CRM to automatically update contact records.
3. Configure Slack integration - Set up your Slack workspace and specify the sales alert channel.
4. Customize lead scoring criteria - Adjust the scoring algorithm to match your target customer profile.
5. Set up website tracking - Configure your website to send visitor data to the webhook endpoint.
6. Test the workflow - Verify all integrations work correctly with a test visitor.

**Key Features**

- **Real-time visitor analysis** with company intelligence enrichment
- **Automated lead scoring** based on multiple factors (company size, industry, engagement)
- **Intent signal detection** (pricing interest, demo requests, contact intent)
- **Priority-based sales alerts** with recommended actions
- **CRM integration** for seamless lead management
- **Deal size estimation** based on company characteristics
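The scoring algorithm is the piece you will most likely tune. Below is a minimal sketch of what it could look like in an n8n Code node; every field name (`companySize`, `industry`, `pageViews`, `intentSignals`), weight, and threshold is an assumption to adapt to your own enrichment output and ideal customer profile:

```javascript
// Hypothetical n8n Code node: score enriched visitors.
const TARGET_INDUSTRIES = ["software", "fintech", "ecommerce"]; // illustrative

for (const item of $input.all()) {
  const v = item.json;
  let score = 0;

  // Company size: larger firms score higher.
  if (v.companySize >= 200) score += 30;
  else if (v.companySize >= 50) score += 15;

  // Industry fit against the target profile.
  if (TARGET_INDUSTRIES.includes(String(v.industry || "").toLowerCase())) score += 20;

  // Engagement: page views, capped so a single crawler can't max the score.
  score += Math.min(v.pageViews || 0, 10) * 2;

  // Intent signals named in the template: pricing, demo, contact.
  const signals = v.intentSignals || [];
  if (signals.includes("pricing")) score += 25;
  if (signals.includes("demo")) score += 25;
  if (signals.includes("contact")) score += 15;

  item.json.leadScore = score;
  item.json.priority = score >= 70 ? "high" : score >= 40 ? "medium" : "low";
}

return $input.all();
```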
by Luis Hernandez
**Overview**

This comprehensive n8n workflow automates the generation and distribution of detailed monthly technical support reports from GLPI (an IT Service Management platform). The workflow intelligently calculates SLA compliance, analyzes technician performance, and delivers professionally formatted HTML reports via email.

**✨ Key Features**

Intelligent SLA Calculation
- Business Hours Tracking: Calculates resolution time considering only working hours, excluding weekends and lunch breaks (see the sketch after this section)
- Configurable Schedule: Customizable work hours (default: 8 AM-12 PM, 1 PM-6 PM)
- Dynamic SLA Monitoring: Real-time compliance tracking with configurable thresholds (default: 24 hours)
- Visual Indicators: Color-coded alerts for critical SLA breaches and high-volume warnings

Comprehensive Reporting
- General Summary: Total cases, plus open, in-progress, resolved, and closed tickets
- Performance Metrics: Total and average resolution hours in both decimal and formatted (hours/minutes) display
- Technician Breakdown: Individual performance analysis per technician, including case distribution and SLA compliance
- Smart Alerts: Automatic warnings for high case volumes (>100 in progress) and critical SLA levels (<50%)

Professional Email Delivery
- Responsive HTML Design: Mobile-optimized email templates with elegant styling
- Dynamic Content: Conditional formatting based on performance metrics
- Automatic Scheduling: Monthly execution on the 6th to ensure accurate SLA measurement

**💼 Business Benefits**

Time Savings
- Eliminates Manual Work: Saves the 2-4 hours per month previously spent compiling reports by hand
- Automated Data Collection: No more exporting CSVs or copying data between systems
- One-Click Setup: Configure once and receive reports automatically every month

Improved Decision Making
- Real-Time Insights: Identify bottlenecks and performance issues immediately
- Technician Accountability: Clear visibility into individual and team performance
- SLA Compliance Tracking: Proactively manage service level agreements before they become critical

Enhanced Communication
- Stakeholder Ready: Professional reports suitable for management presentations
- Consistent Format: Standardized metrics ensure month-over-month comparability
- Instant Distribution: Automatic email delivery to relevant stakeholders

**🔧 Technical Specifications**

Requirements
- n8n instance (self-hosted or cloud)
- GLPI server with API access enabled
- Gmail account (or any SMTP-compatible email service)
- GLPI API credentials (App-Token and user credentials)

Configuration Points
- Variables Node: Server URL, API tokens, entity name, work hours, SLA limits
- Schedule Trigger: Monthly execution timing (default: 6th of each month)
- Email Recipient: Target email address for report delivery
- Date Range Logic: Automatic previous-month calculation

Data Processing
- Retrieves up to 999 tickets per execution (configurable)
- Filters by entity and date range
- Excludes weekends and non-business hours from calculations
- Groups data by technician for detailed analysis

**📋 Setup Instructions**

Prerequisites
- GLPI Configuration: Enable the API and configure the Tickets panel with the required fields (ID, Title, Status, Opening Date, Closing Date, Resolution Date, Priority, Requester, Assigned To)
- API Credentials: Create Basic Auth credentials in n8n for GLPI API access
- Email Authentication: Set up Gmail OAuth2 or SMTP credentials in n8n

Implementation Steps
1. Import the workflow JSON into your n8n instance
2. Configure the Variables node with your GLPI server details and business hours
3. Set up GLPI API credentials in the HTTP Request nodes
4. Configure email credentials in the Gmail node
5. Update the recipient email address
6. Test the workflow manually before enabling the schedule
7. Activate the workflow for automatic monthly execution

**🎯 Use Cases**

- IT Support Teams: Track helpdesk performance and SLA compliance
- Service Managers: Monitor team productivity and identify training needs
- Executive Reporting: Provide high-level summaries to stakeholders
- Resource Planning: Identify workload distribution and capacity issues
- Compliance Auditing: Maintain historical records of SLA performance

**📈 ROI Impact**

- Time Savings: Eliminates 24-48 hours of manual reporting annually
- Error Reduction: Eliminates human calculation errors in SLA tracking
- Faster Response: Early alerts enable proactive issue resolution
- Better Visibility: Data-driven insights improve team management
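The business-hours calculation is the heart of the SLA logic. Here is a minimal sketch of the approach, not the template's exact code: it walks each calendar day between opening and resolution and sums the overlap with the configured work blocks (08:00-12:00 and 13:00-18:00 by default). The GLPI field names `date` and `solvedate` are assumptions based on the standard GLPI ticket schema.

```javascript
// Hypothetical n8n Code node: resolution time in business hours only.
const BLOCKS = [[8, 12], [13, 18]]; // [startHour, endHour) work blocks

function businessHours(openedAt, resolvedAt) {
  const start = new Date(openedAt);
  const end = new Date(resolvedAt);
  let total = 0;

  // Walk whole days from the opening date through the resolution date.
  const day = new Date(start);
  day.setHours(0, 0, 0, 0);
  for (; day <= end; day.setDate(day.getDate() + 1)) {
    const dow = day.getDay();
    if (dow === 0 || dow === 6) continue; // skip weekends

    for (const [h1, h2] of BLOCKS) {
      const blockStart = new Date(day).setHours(h1, 0, 0, 0); // ms timestamp
      const blockEnd = new Date(day).setHours(h2, 0, 0, 0);
      const from = Math.max(blockStart, start.getTime());
      const to = Math.min(blockEnd, end.getTime());
      if (to > from) total += (to - from) / 36e5; // ms -> hours
    }
  }
  return total;
}

for (const item of $input.all()) {
  // GLPI ticket fields: `date` = opened, `solvedate` = resolved (assumed).
  item.json.resolutionHours = businessHours(item.json.date, item.json.solvedate);
}

return $input.all();
```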
by Trung Tran
**Automating AWS S3 Operations with n8n: Buckets, Folders, and Files**

Watch the demo video below. This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

**Who's it for**

This workflow is ideal for:

- **Designers & marketers** who need quick, on-demand AI-generated visuals.
- **Developers & automation builders** exploring **AI-driven workflows** integrated with cloud storage.
- **Educators or trainers** creating tutorials or exercises on AI image generation.
- **Businesses** looking to automate **image content pipelines** with AWS S3 storage.

**How it works / What it does**

1. Trigger: The workflow starts manually when you click "Execute Workflow".
2. Edit Fields: Provide input fields such as image description, resolution, or naming convention.
3. Create AWS S3 Bucket: Automatically creates a new S3 bucket if it doesn't exist.
4. Create a Folder: Inside the bucket, a folder is created to organize generated images.
5. Prompt Generation Agent: An AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. Generate an Image: The refined prompt is used to generate an image with AI.
7. Upload File to S3: The generated image is uploaded to the AWS S3 bucket for secure storage.

This workflow showcases how to combine AI and cloud storage seamlessly in an automated pipeline.

**How to set up**

1. Import the workflow into n8n.
2. Configure the following credentials: AWS S3 (Access Key, Secret Key, Region) and OpenAI API key (for Chat + Image models).
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

**Requirements**

- **n8n** (latest version with AI Tool Node support).
- **AWS account** with S3 permissions to create buckets and upload files.
- **OpenAI API key** (for prompt refinement and image generation).
- Basic familiarity with AWS S3 structure (buckets, folders, objects).

**How to customize the workflow**

- **Custom Buckets**: Replace the auto-create step with an existing S3 bucket.
- **Image Variations**: Generate multiple image variations per prompt by looping the image generation step.
- **File Naming**: Adjust file naming conventions, e.g., timestamp or user input (see the sketch after this section).
- **Metadata**: Add metadata such as tags, categories, or owner info when uploading to S3.
- **Alternative Storage**: Swap AWS S3 for **Google Cloud Storage, Azure Blob, or Dropbox**.
- **Trigger Options**: Replace the manual trigger with a **Webhook, Form Submission, or Scheduler** for automation.

✅ This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
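If you customize file naming, a small Code node before the upload step can build a collision-free S3 object key. A minimal sketch under assumed field names (`prompt` in, `fileName` out) and an illustrative folder name:

```javascript
// Hypothetical n8n Code node: derive a unique S3 object key per image.
const folder = "generated-images"; // illustrative folder name

for (const item of $input.all()) {
  // Slug from the prompt, e.g. "Futuristic city skyline" -> "futuristic-city-skyline".
  const slug = String(item.json.prompt || "image")
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "")
    .slice(0, 40);
  // ISO timestamp keeps keys sortable and unique enough for this use.
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  item.json.fileName = `${folder}/${slug}-${stamp}.png`;
}

return $input.all();
```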
by Mirai
**Icebreaker Generator powered by ChatGPT**

This n8n template crawls a company website, distills the content with AI, and produces a short, personalized icebreaker you can drop straight into your cold emails or CRM. Perfect for SDRs, founders, and agencies who want "real research" at scale.

**Good to know**

- Works from a Google Sheet of leads (domain + LinkedIn, etc.).
- Handles common scrape failures gracefully and marks the lead's Status as Error.
- Uses ChatGPT to summarize pages and craft one concise, non-generic opener.
- Output is written back to the same Google Sheet (IceBreaker, Status).
- You'll need Google credentials (for Sheets) and OpenAI credentials (for GPT).

**How it works**

Step 1 — Discover internal pages
- Reads a lead's website from Google Sheets.
- Scrapes the home page and extracts all links.
- A Code node cleans the list (removes emails/anchors/social/external domains, normalizes paths, de-duplicates) and returns unique internal URLs; see the sketch after this section.
- If the home page is unreachable or no links are found, the lead is marked Error and the workflow moves on.

Step 2 — Convert pages to text
- Visits each collected URL and converts the response into HTML/Markdown text for analysis.
- You can cap depth/amount with the Limit node.

Step 3 — Summarize & generate the icebreaker
- A GPT node produces a two-paragraph abstract of each page (JSON output).
- An Aggregate node merges all abstracts for the company.
- Another GPT node turns the merged summary into a personalized, multi-line icebreaker (spartan tone, non-obvious details).
- The result is written back to Google Sheets (IceBreaker = ..., Status = Done), and the workflow loops to the next lead.

**How to use**

Prepare your sheet
- Include at least organization_website_url, linkedin_url, and any other lead fields you track.
- Keep empty IceBreaker and Status columns for the workflow to fill.

Connect credentials
- Google Sheets: use the Google account that owns the sheet and link it in the nodes.
- OpenAI: add your API key to the GPT nodes ("Summarize Website Page", "Generate Multiline Icebreaker").

Run the workflow
- Start with the Manual Trigger (or replace it with a schedule/webhook).
- Adjust Limit if you want fewer or more pages per company.
- Watch Status (Done/Error) and IceBreaker populate in your sheet.

**Requirements**

- n8n instance
- Google Sheets account with access to the leads sheet
- OpenAI API key (for summarization + icebreaker generation)

**Customizing this workflow**

- Tone & format: tweak the prompts (both GPT nodes) to match your brand voice and structure.
- Depth: change the Limit node to scan more or fewer pages; add simple rules to prioritize certain paths (e.g., /about, /blog/*).
- Fields: write additional outputs (e.g., Company Summary, Key Products, Recent News) back to new sheet columns.
- Lead selection: filter rows by Status = "" (or custom flags) to process only untouched leads.
- Error handling: expand the Error branch to retry with www./HTTP→HTTPS, or log diagnostics in a separate tab.

**Tips**

- Keep icebreakers short, specific, and free of clichés; small, non-obvious details from the site convert best.
- Start with a small batch to validate quality, then scale up.
- Consider adding a rate limit if target sites throttle requests.

In short: Sheet → crawl internal pages → AI abstracts → single tailored icebreaker → write back to the sheet, then repeat for the next lead. This automation pairs well with our automation for automated cold emailing.
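The link-cleaning step in Step 1 is a good candidate for a Code node. Here is a minimal sketch under assumed input fields (`links` as an array of raw hrefs and `domain` as the lead's website domain); the template's actual node may differ:

```javascript
// Hypothetical n8n Code node: reduce scraped links to unique internal URLs.
const SOCIAL =
  /linkedin\.com|facebook\.com|twitter\.com|x\.com|instagram\.com|youtube\.com/i;

const domain = String($json.domain || "").replace(/^www\./, "");
const clean = new Set();

for (const raw of $json.links || []) {
  if (!raw || raw.startsWith("mailto:") || raw.startsWith("#")) continue;
  if (SOCIAL.test(raw)) continue; // drop social links outright

  let url;
  try {
    url = new URL(raw, `https://${domain}/`); // also resolves relative paths
  } catch {
    continue; // skip anything that isn't a parseable URL
  }
  if (url.hostname.replace(/^www\./, "") !== domain) continue; // external

  url.hash = ""; // drop in-page anchors
  clean.add(url.origin + url.pathname.replace(/\/$/, "")); // normalize + dedupe
}

return [...clean].map((u) => ({ json: { url: u } }));
```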
by Andrey
**Overview**

This n8n workflow automates brand monitoring across social media platforms (Reddit, LinkedIn, X, and Instagram) using the AnySite API. It searches posts mentioning your defined keywords, stores results in n8n Data Tables, analyzes engagement and sentiment, and generates a detailed AI-powered social media report automatically sent to your email.

**Key Features**

- **Multi-Platform Monitoring**: Reddit, LinkedIn, X (Twitter), and Instagram
- **Automated Post Collection**: Searches for new posts containing tracked keywords
- **Data Persistence**: Saves all posts and comments in structured Data Tables
- **AI-Powered Reporting**: Uses GPT (OpenAI API) to summarize and analyze trends, engagement, and risks
- **Automated Email Delivery**: Sends comprehensive daily/weekly reports via Gmail
- **Comment Extraction**: Collects and formats post comments for deeper sentiment analysis
- **Scheduling Support**: Can be executed manually or automatically (e.g., every night)

**How It Works**

Triggers — the workflow runs:
- Automatically (via Schedule Trigger), e.g., once daily
- Manually (via Manual Trigger), for testing or on-demand analysis

Data Collection Process
1. Keyword Loading: Reads all keywords from the Data Table "Brand Monitoring Words"
2. Social Media Search: For each keyword, the workflow calls the AnySite API endpoints api/reddit/search/posts, api/linkedin/search/posts, api/twitter/search/posts (X), and api/instagram/search/posts
3. Deduplication: Before saving, checks whether a post already exists in the "Brand Monitoring Posts" table
4. Data Storage: Inserts new posts into the Data Table with fields like type, title, url, vote_count, comment_count, etc.
5. Comments Enrichment: For Reddit and LinkedIn, retrieves and formats comments into JSON strings, then updates the record (see the sketch after this section)
6. AI Analysis & Report Generation: The AI Agent (OpenAI GPT model) aggregates posts, analyzes sentiment, engagement, and risks, and generates a structured HTML email report
7. Email Sending: Sends the final report via Gmail using your connected account

**Setup Instructions**

Requirements
- Self-hosted or cloud n8n instance
- **AnySite API key** – https://AnySite.io
- **OpenAI API key** (GPT-4o or later)
- Connected Gmail account (for report delivery)

Installation Steps
1. Import the workflow: import the provided file, Social Media Monitoring.json
2. Configure credentials:
   - AnySite API: add an access-token header with your API key
   - OpenAI: add your OpenAI API key in the "OpenAI Chat Model" node
   - Gmail: connect your Gmail account (OAuth2) in the "Send a message in Gmail" node
3. Create the required Data Tables:

1️⃣ Brand Monitoring Words

| Field | Type | Description |
|-------|------|-------------|
| word | string | Keyword or brand name to monitor |

> Each row represents a single keyword to be tracked.

2️⃣ Brand Monitoring Posts

| Field | Type | Description |
|-------|------|-------------|
| type | string | Platform type (e.g., reddit, linkedin, x, instagram) |
| title | string | Post title or headline |
| url | string | Direct link to post |
| created_at | string | Post creation date/time |
| subreddit_id | string | (Reddit only) subreddit ID |
| subreddit_alias | string | (Reddit only) subreddit alias |
| subreddit_url | string | (Reddit only) subreddit URL |
| subreddit_description | string | (Reddit only) subreddit description |
| comment_count | number | Number of comments |
| vote_count | number | Votes, likes, or reactions count |
| subreddit_member_count | number | (Reddit only) member count |
| post_id | string | Unique post identifier |
| text | string | Post body text |
| comments | string | Serialized comments (JSON string) |
| word | string | Matched keyword that triggered capture |

**AI Reporting Logic**

- Collects all posts gathered during the run
- Aggregates by keyword and platform
- Evaluates sentiment, engagement, and risk signals
- Summarizes findings with an executive summary and key metrics
- Sends the Social Media Intelligence Report to your configured email

**Customization Options**

- **Schedule**: Adjust the trigger frequency (daily, hourly, etc.)
- **Keywords**: Add or remove keywords in the **Brand Monitoring Words** table
- **Report Depth**: Modify system prompts in the "AI Agent" node to customize tone and analysis focus
- **Email Recipient**: Change the target email address in the "Send a message in Gmail" node

**Troubleshooting**

| Issue | Solution |
|-------|----------|
| No posts found | Check the AnySite API key and keyword relevance |
| Duplicate posts | Verify the Data Table deduplication setup |
| Report not sent | Confirm the Gmail OAuth2 connection |
| AI Agent error | Ensure the OpenAI API key and model selection are correct |

**Best Practices**

- Use specific brand or product names as keywords for better precision
- Run the workflow daily to maintain fresh insights
- Periodically review and clean the Data Tables
- Adjust AI prompt parameters to refine the analytical tone
- Review AI-generated reports to ensure data quality

**Author Notes**

Created for automated cross-platform brand reputation monitoring, enabling real-time insight into how your brand is discussed online.
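Since the `comments` column is a plain text field, the enrichment step has to serialize comments before updating the record. A minimal sketch of that Code-node step, with assumed input field names (`author`, `text`, `created_at`, `post_id`):

```javascript
// Hypothetical n8n Code node: serialize fetched comments into a JSON
// string so they fit the text-typed `comments` field of the
// "Brand Monitoring Posts" Data Table.
const items = $input.all();

const comments = items.map((item) => ({
  author: item.json.author,
  text: item.json.text,
  created_at: item.json.created_at,
}));

return [
  {
    json: {
      // post_id ties the serialized comments back to the stored post.
      post_id: items[0]?.json.post_id,
      comments: JSON.stringify(comments),
    },
  },
];
```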
by Shahzaib Anwar
**📌 Overview**

This workflow automatically processes incoming Shopify/Gmail leads and pushes them into HubSpot as both Contacts and Deals. It helps sales and marketing teams capture leads instantly, enrich CRM data, and avoid missed opportunities.

**⚡ How it works**

1. Trigger: Watches for new emails in Gmail.
2. Extract Data: Parses the email body (Name, Email, City, Phone, Message, Product URL/Title).
3. Condition: Checks whether the sender is Shopify before processing.
4. HubSpot: Creates/updates a Contact with the customer details, then creates a Deal associated with that Contact.

**🎯 Benefits**

- 📥 Automates lead capture → CRM
- 🚫 Eliminates manual copy-paste from Gmail
- 🔄 Real-time sync between Gmail and HubSpot
- 📈 Improves sales follow-up speed and accuracy

**🛠 Setup Steps**

1. Import this workflow into your n8n instance.
2. Connect your Gmail and HubSpot credentials.
3. Replace the HubSpot Deal Stage ID with your own pipeline stage.
4. (Optional) Adjust the Code node regex if your email format differs; see the sketch after this section.
5. Activate the workflow and test with a sample lead email.

**📝 Example Email Format**

Name: John Doe
Email: john@example.com
City: London
Phone: +44 7000 000000
Body: Interested in product
Product Url: https://example.com/product
Product Title: Sample Product

**Sticky Notes**

- Gmail Trigger: 📧 Watches for new emails in Gmail. Polls every minute and passes email data into the flow.
- Get a Message: 📩 Fetches the full Gmail message content (body + metadata) for parsing.
- Extract From Email: 🔍 Extracts the sender's email address from Gmail to identify the source.
- If Sender is Shopify: ✅ Condition node that ensures only Shopify-originated emails/leads are processed.
- Code Node (Regex Parser): 🧾 Parses the email body with regex to extract Name, Email, City, Phone, Message, Product URL, and Title.
- Edit Fields (Set Node): 📝 Cleans and structures the extracted fields into proper JSON before sending to HubSpot.
- HubSpot → Create/Update Contact: 👤 Creates or updates a HubSpot Contact with the extracted lead details.
- HubSpot → Create Deal: 💼 Creates a HubSpot Deal linked to the Contact, including campaign/product information.
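A regex parser matching the example email format above could look like the following. This is a sketch, not the template's exact Code node: it assumes the Gmail message body arrives as plain text in `$json.text`, so tweak the field path and patterns if your store's template differs.

```javascript
// Hypothetical n8n Code node: parse "Label: value" lines from the lead email.
const body = $json.text || "";

// Grab the text following "Label:" up to the end of that line.
const grab = (label) => {
  const m = body.match(new RegExp(`${label}:\\s*(.+)`, "i"));
  return m ? m[1].trim() : "";
};

return [
  {
    json: {
      name: grab("Name"),
      email: grab("Email"),
      city: grab("City"),
      phone: grab("Phone"),
      message: grab("Body"),
      productUrl: grab("Product Url"),
      productTitle: grab("Product Title"),
    },
  },
];
```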
by Dr. Firas
**💥 Generate UGC Promo Videos with Blotato and Sora 2 for eCommerce**

**🧩 Who is this for?**

This workflow is perfect for eCommerce brands, content creators, and marketing teams who want to automatically generate short, eye-catching videos from their product images — without editing software or manual work.

**🚀 What problem does this workflow solve?**

Creating engaging promotional videos manually is time-consuming and expensive. This automation eliminates that friction by combining Blotato, Sora 2, and AI scripting to turn static product images into dynamic UGC-style videos ready for TikTok, Instagram Reels, and YouTube Shorts.

**⚙️ What this workflow does**

1. Receives a product image directly from Telegram or another input source.
2. Analyzes the image with OpenAI Vision to understand the product's features and audience.
3. Generates a natural, short UGC-style script using GPT-based AI.
4. Sends the image and script to Sora 2 via the Fal API to generate a vertical promotional video.
5. Monitors the video status every 15 seconds until completion.
6. Downloads or automatically publishes the final video to your social platforms.

**🧠 Setup**

1. Create a Fal.ai API key and set it in your n8n credentials (Authorization: Key YOUR_FAL_KEY).
2. Connect your Telegram, OpenAI, and HTTP Request nodes as shown in the workflow.
3. Make sure the Build Public Image URL node outputs a valid, public image link.
4. In the HTTP Request node for Sora 2, set:
   - Method: POST
   - URL: https://fal.run/fal-ai/sora-2/image-to-video
   - Headers: Authorization: Key YOUR_FAL_KEY and Content-Type: application/json
   - Body: raw JSON with parameters like prompt, image_url, duration, and aspect_ratio (see the example after this section)
5. Run the workflow and monitor the execution logs for your video URL.
6. Blotato → API key for social media publishing.

**🎨 How to customize this workflow to your needs**

- 🧾 Change the video tone: Edit the OpenAI prompt to produce educational, emotional, or luxury-style scripts.
- 🎬 Adjust duration or format: Use Sora 2's supported durations (4, 8, or 12 seconds) and aspect ratios (e.g., 9:16 for social media).
- 📲 Auto-publish your videos: Connect the TikTok, Instagram, or YouTube upload nodes for full automation.
- ✨ Add branding: Include overlays, logos, or end screens via CapCut or an external API integration.

**🎥 Watch This Tutorial**

**👋 Need help or want to customize this?**

Contact me for consulting and support: 📩 LinkedIn | 📺 YouTube: @DRFIRASS | 🚀 Workshops: Mes Ateliers n8n | 📄 Documentation: Notion Guide
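A raw-JSON body for the Sora 2 request might look like the example below. The parameter names come from the setup step above; the values are illustrative (8 seconds and 9:16 are among the supported options mentioned), so check the Fal documentation for the exact value types it expects.

```json
{
  "prompt": "Casual, upbeat UGC-style clip of a person showing off the product at home",
  "image_url": "https://example.com/public/product-image.jpg",
  "duration": 8,
  "aspect_ratio": "9:16"
}
```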
by n8n Automation Expert | Template Creator | 2+ Years Experience
**🚀 Transform Your Job Hunt with an AI-Powered Telegram Bot**

Turn job searching into a conversational experience! This intelligent Telegram bot automatically scrapes job postings from LinkedIn, Indeed, and Monster, filters for sales & marketing positions, and delivers personalized results directly to your chat.

**✨ Key Features**

- **Interactive Telegram Commands**: Simple /jobs [keyword] [location] searches
- **Multi-Platform Scraping**: Simultaneous data collection from 3 major job boards
- **AI-Powered Filtering**: Smart relevance detection and experience-level classification
- **Real-Time Notifications**: Instant job alerts delivered to Telegram
- **Automated Data Storage**: Saves results to Google Sheets and Airtable
- **Duplicate Removal**: Advanced deduplication across platforms
- **Mobile-First Experience**: Full job search functionality through Telegram

**🎯 Perfect For**

- **Sales Professionals**: Account managers, sales representatives, business development
- **Marketing Experts**: Digital marketers, marketing managers, growth specialists
- **Recruiters**: Streamlined candidate sourcing and job market analysis
- **Job Seekers**: Hands-free job discovery with instant notifications

**🛠️ Setup Requirements**

Required Credentials:
- **Telegram Bot Token**: Create a bot via @BotFather
- **Bright Data API**: Professional web scraping service (LinkedIn/Indeed datasets)
- **Google Sheets OAuth2**: For spreadsheet integration
- **Airtable Token**: Database storage and management

Prerequisites:
- n8n instance with HTTPS enabled (required for Telegram webhooks)
- Valid domain name with SSL certificate
- Basic understanding of Telegram bot commands

**🔧 How It Works**

User Experience:
1. Send /start to activate the bot and see the available commands
2. Use /jobs sales manager New York to search for specific positions
3. Receive formatted job results instantly in Telegram
4. Click "Apply Now" links to go directly to job postings
5. All jobs are automatically saved to your connected spreadsheets

Behind the Scenes:
1. Command Processing: The bot parses user input for keywords and location (see the sketch after this section)
2. Parallel Scraping: Simultaneous API calls to LinkedIn, Indeed, and Monster
3. AI Processing: Intelligent filtering, experience-level detection, remote-work identification
4. Data Enhancement: Salary extraction, duplicate removal, relevance scoring
5. Multi-Format Storage: Automatic saving to Google Sheets, Airtable, and JSON export
6. Real-Time Response: Formatted results delivered back to the Telegram chat

**🎨 Telegram Bot Commands**

- /start - Welcome message and command overview
- /jobs [keyword] [location] - Search for jobs (e.g., /jobs marketing manager remote)
- /help - Show detailed help information
- /status - Check bot status and recent activity

**📊 Sample Output**

The bot delivers formatted job results like this:

🎯 Job Search Results 🎯
Found 7 relevant opportunities
Platforms: linkedin, indeed, monster
Remote jobs: 3
───────────────────
💼 Senior Sales Manager
🏢 TechCorp Industries
📍 New York, NY
💰 $80,000 - $120,000
🌐 Remote Available
📊 senior level
🔗 Apply Now

**🔒 Security & Best Practices**

- **Rate Limiting**: Built-in Telegram API compliance (30 requests/second)
- **Error Handling**: Graceful failure recovery with user-friendly messages
- **Input Validation**: Sanitized user input to prevent injection attacks
- **Credential Management**: Secure API key storage using the n8n credentials system
- **HTTPS Enforcement**: Required for production Telegram webhook integration

**📈 Benefits & ROI**

- **95% Time Reduction**: Automated job discovery vs. manual searching
- **Multi-Source Coverage**: Access 3 major job platforms simultaneously
- **Mobile Accessibility**: Search jobs anywhere using the Telegram mobile app
- **Real-Time Alerts**: Never miss new opportunities with instant notifications
- **Data Organization**: Automatic spreadsheet management for job tracking
- **Market Intelligence**: Comprehensive job market analysis and trends

**🚀 Advanced Customization**

- **Custom Keywords**: Modify the filtering logic for specific industries
- **Location Targeting**: Adjust geographic search parameters
- **Experience Levels**: Fine-tune senior/mid/entry-level detection
- **Additional Platforms**: Easily add more job boards via HTTP requests
- **Notification Scheduling**: Set up periodic automated job alerts
- **Team Integration**: Deploy for multiple users or team channels

**💡 Use Cases**

- **Individual Job Seekers**: Personal job-hunting assistant
- **Recruitment Agencies**: Streamlined candidate sourcing
- **Sales Teams**: Territory-specific opportunity monitoring
- **Marketing Departments**: Industry trend analysis and competitor tracking
- **Career Coaches**: Client job market research and opportunity identification

Ready to revolutionize your job search? Deploy this workflow and start receiving personalized job opportunities directly in Telegram!
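Parsing `/jobs [keyword] [location]` is the first step behind the scenes. Below is a minimal sketch of one way to do it in an n8n Code node; the "last two words are the location" heuristic is an assumption, not the template's actual rule, so adjust it to your audience.

```javascript
// Hypothetical n8n Code node: split a /jobs command into keyword + location,
// e.g. "/jobs sales manager New York" -> { keyword: "sales manager", location: "New York" }.
const text = String($json.message?.text || "").trim();

if (!text.startsWith("/jobs")) {
  return [{ json: { error: "Not a /jobs command", reply: "Try: /jobs sales manager New York" } }];
}

const words = text.replace(/^\/jobs\s*/, "").split(/\s+/).filter(Boolean);

// Heuristic: with 4+ words, treat the last two as the location;
// otherwise treat only the last word as the location.
const splitAt = words.length >= 4 ? words.length - 2 : Math.max(words.length - 1, 1);

return [
  {
    json: {
      keyword: words.slice(0, splitAt).join(" "),
      location: words.slice(splitAt).join(" ") || "remote", // fallback default
    },
  },
];
```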
by Meak
**LinkedIn Job-Based Cold Email System**

Most outreach tools rely on generic lead lists and recycled contact data. This workflow builds a live, personalized lead engine that scrapes new LinkedIn job posts, finds company decision-maker emails, and generates custom cold emails using GPT — all fully automated through n8n.

**Benefits**

- Automated daily scraping of "Marketing Manager" jobs in Belgium
- Real-time leads from companies currently hiring for marketing roles
- Filters out HR and staffing agencies to keep only real businesses
- Enriches each company with verified CEO, Sales, and Marketing emails
- Generates unique, human-like cold emails and subject lines with GPT-4o
- Saves clean data to Google Sheets and drafts personalized Gmail messages

**How It Works**

1. Schedule Trigger runs every morning at 08:00.
2. Apify LinkedIn Scraper collects new "Marketing Manager" jobs in Belgium.
3. Remove Duplicates ensures each company appears only once.
4. Filter Staffing excludes recruiters, HR agencies, and interim firms.
5. Save Useful Infos extracts core company data — name, domain, size, description.
6. Filter Domain & Size keeps valid websites and companies under 100 employees.
7. The Anymailfinder API looks up CEO, Sales, and Marketing decision-maker emails.
8. Merge + If Node validates email results and removes invalid entries.
9. Split Out + Deduplicate ensures unique, verified contacts.
10. Extract Lead Name (Code node) separates first and last names (see the sketch after this section).
11. The Google Sheets node appends all enriched lead data to your master sheet.
12. GPT-4o (LangChain) writes a 100-120 word personalized cold email.
13. GPT-4o (LangChain) creates a short, casual subject line.
14. The Gmail Draft node builds a ready-to-send email using both outputs.
15. A Wait node loops until all leads are processed.

**Who Is This For**

- B2B agencies targeting Belgian SMEs
- Outbound marketers using job postings as purchase-intent signals
- Freelancers or founders running lean, automated outreach systems
- Growth teams building scalable cold email engines

**Setup**

- **Apify**: use the curious_coder~linkedin-jobs-scraper actor + API token
- **Anymailfinder**: header auth with decision-maker categories (ceo, sales, marketing)
- **Google Sheets**: connect a sheet named "LinkedIn Job Scraper" and map columns
- **OpenAI (GPT-4o)**: insert your API key into both LangChain nodes
- **Gmail**: OAuth2 connection; resource set to draft
- **n8n**: store all credentials securely; set HTTP nodes to continue on error

**ROI & Results**

- Save 1-3 hours per day on manual research and outreach prep
- Contact actively hiring companies when they need marketing help most
- Scale to multiple industries or regions by changing the search URLs
- Outperform paid lead databases with fresh, verified data

**Strategy Insights**

- Add funding or tech-stack data for better lead scoring
- A/B test GPT subject lines and log open rates in Sheets
- Schedule GPT follow-ups 3 and 7 days later for full automation
- Push all enriched data to your CRM for advanced segmentation
- Use hiring signals to trigger ad audiences or retargeting campaigns

**Check Out My Channel**

For more advanced automation workflows that generate real client results, check out my YouTube channel, where I share the exact systems I use to automate outreach, scale agency pipelines, and close deals faster.
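The Extract Lead Name step is a small Code node. A minimal sketch, assuming each item carries a `fullName` field (the template's actual field name may differ):

```javascript
// Hypothetical n8n Code node: split a contact's full name into
// first and last name for personalization.
for (const item of $input.all()) {
  const parts = String(item.json.fullName || "").trim().split(/\s+/);
  item.json.firstName = parts[0] || "";
  item.json.lastName = parts.slice(1).join(" "); // everything after the first word
}

return $input.all();
```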
by Li CHEN
**AWS News Analysis and LinkedIn Automation Pipeline**

Transform AWS industry news into engaging LinkedIn content with AI-powered analysis and automated approval workflows.

**Who's it for**

This template is perfect for:

- **Cloud architects and DevOps engineers** who want to stay current with AWS developments
- **Content creators** looking to automate their AWS news coverage
- **Marketing teams** needing consistent, professional AWS content
- **Technical leaders** who want to share industry insights on LinkedIn
- **AWS consultants** building thought leadership through automated content

**How it works**

This workflow creates a comprehensive AWS news analysis and content generation pipeline with two main flows:

Flow 1: News Collection and Analysis
1. Scheduled RSS Monitoring: Automatically fetches the latest AWS news from the official AWS RSS feed daily at 8 PM
2. AI-Powered Analysis: Uses AWS Bedrock (Claude 3 Sonnet) to analyze each news item, extracting a professional summary, key themes and keywords, an importance rating (Low/Medium/High), and a business impact assessment
3. Structured Data Storage: Saves analyzed news to a Feishu Bitable with approval status tracking

Flow 2: LinkedIn Content Generation
1. Manual Approval Trigger: A Feishu automation sends approved news items to the webhook (see the payload sketch after this section)
2. AI Content Creation: AWS Bedrock generates professional LinkedIn posts with attention-grabbing headlines, technical insights from a Solutions Architect perspective, business impact analysis, and a call-to-action for engagement
3. Automated Publishing: Posts directly to LinkedIn with relevant hashtags

**How to set up**

Prerequisites
- **AWS Bedrock access** with the Claude 3 Sonnet model enabled
- **Feishu account** with Bitable access
- **LinkedIn company account** with posting permissions
- **n8n instance** (self-hosted or cloud)

Detailed Configuration Steps

1. AWS Bedrock Setup

Step 1: Enable the Claude 3 Sonnet Model
1. Log into your AWS Console
2. Navigate to AWS Bedrock
3. Go to Model access in the left sidebar
4. Find Anthropic Claude 3 Sonnet and click Request model access
5. Fill out the access request form (usually approved within minutes)
6. Once approved, verify the model appears in your Model access list

Step 2: Create an IAM User and Credentials
1. Go to the IAM Console
2. Click Users → Create user
3. Name: n8n-bedrock-user
4. Attach policy: AmazonBedrockFullAccess (or create a custom policy with minimal permissions)
5. Go to the Security credentials tab → Create access key
6. Choose Application running outside AWS
7. Download the credentials CSV file

Step 3: Configure in n8n
1. In n8n, go to Credentials → Add credential
2. Select the AWS credential type
3. Enter your Access Key ID and Secret Access Key
4. Set Region to your preferred AWS region (e.g., us-east-1)
5. Test the connection

Useful Links: AWS Bedrock Documentation, Claude 3 Sonnet Model Access, AWS Bedrock Pricing

2. Feishu Bitable Configuration

Step 1: Create a Feishu Account and App
1. Sign up at Feishu International
2. Create a new Bitable (multi-dimensional table)
3. Go to Developer Console → Create App
4. Enable Bitable permissions in your app
5. Generate an App Token and App Secret

Step 2: Create the Bitable Structure

Create a new Bitable with these columns:
- title (Text)
- pubDate (Date)
- summary (Long Text)
- keywords (Multi-select)
- rating (Single Select: Low, Medium, High)
- link (URL)
- approval_status (Single Select: Pending, Approved, Rejected)

Get your App Token and Table ID:
- App Token: found in the app settings
- Table ID: found in the Bitable URL (tbl...)
Step 3: Set Up the Automation
1. In your Bitable, go to Automation → Create automation
2. Trigger: When field value changes → select the approval_status field
3. Condition: approval_status equals "Approved"
4. Action: Send HTTP request
   - Method: POST
   - URL: your n8n webhook URL (from Flow 2)
   - Headers: Content-Type: application/json
   - Body: {{record}}

Step 4: Configure Feishu Credentials in n8n
1. Install the Feishu Lite community node (self-hosted only)
2. Add a Feishu credential with your App Token and App Secret
3. Test the connection

Useful Links: Feishu Developer Documentation, Bitable API Reference, Feishu Automation Guide

3. LinkedIn Company Account Setup

Step 1: Create a LinkedIn App
1. Go to the LinkedIn Developer Portal
2. Click Create App
3. Fill in the app details:
   - App name: AWS News Automation
   - LinkedIn Page: select your company page
   - App logo: upload your logo
   - Legal agreement: accept the terms

Step 2: Configure OAuth2 Settings
1. In your app, go to the Auth tab
2. Add the redirect URL: https://your-n8n-instance.com/rest/oauth2-credential/callback
3. Request these scopes: w_member_social (post on behalf of members), r_liteprofile (read basic profile), r_emailaddress (read email address)

Step 3: Get Company Page Access
1. Go to your LinkedIn company page
2. Navigate to Admin tools → Manage admins
3. Ensure you have the Content admin or Super admin role
4. Note your Company Page ID (found in the page URL)

Step 4: Configure LinkedIn Credentials in n8n
1. Add a LinkedIn OAuth2 credential
2. Enter your Client ID and Client Secret
3. Complete the OAuth2 flow by clicking Connect my account
4. Select your company page for posting

Useful Links: LinkedIn Developer Portal, LinkedIn API Documentation, LinkedIn OAuth2 Guide

4. Workflow Activation

Final Setup Steps:
1. Import the workflow JSON into n8n
2. Configure all credential connections: AWS Bedrock, Feishu, and LinkedIn OAuth2
3. Update the webhook URL in the Feishu automation to match your n8n instance
4. Activate the scheduled trigger (daily at 8 PM)
5. Test with a manual webhook trigger using sample data
6. Verify the Feishu Bitable receives data
7. Test the approval workflow and LinkedIn posting

**Requirements**

Service Requirements
- **AWS Bedrock** with Claude 3 Sonnet model access: an AWS account with the Bedrock service enabled, an IAM user with Bedrock permissions, and model access approval for Claude 3 Sonnet
- **Feishu Bitable** for news storage and the approval workflow: a Feishu account (International or Lark), a developer app with Bitable permissions, and automation capabilities for webhook triggers
- **LinkedIn company account** for automated posting: a LinkedIn company page with admin access, a LinkedIn Developer app with posting permissions, and OAuth2 authentication
- **n8n community nodes**: Feishu Lite node (self-hosted only)

Technical Requirements
- **n8n instance** (self-hosted recommended for community nodes)
- **Webhook endpoint** accessible from the Feishu automation
- **Internet connectivity** for API calls and RSS feeds
- **Storage space** for workflow execution logs

Cost Considerations
- **AWS Bedrock**: ~$0.01-0.05 per news analysis
- **Feishu**: free tier available; paid plans for advanced features
- **LinkedIn**: free API access with rate limits
- **n8n**: self-hosted (free) or cloud subscription

**How to customize the workflow**

Content Customization
- **Modify AI prompts** in the AI Agent nodes to change the tone, focus, or target audience
- **Adjust hashtags** in the LinkedIn posting node for different industries
- **Change scheduling** frequency by modifying the Schedule Trigger settings

Integration Options
- **Replace LinkedIn** with Twitter/X, Facebook, or other social platforms
- **Add Slack notifications** for approved content before posting
- **Integrate with CRM** systems to track content performance
- **Add content calendar** integration for better planning

Advanced Features
- **Multi-language support** by modifying AI prompts for different regions
- **Content categorization** by adding tags for different AWS services
- **Performance tracking** by integrating analytics platforms
- **Team collaboration** by adding approval workflows with multiple reviewers

Technical Modifications
- **Change RSS sources** to monitor other AWS blogs or competitor news
- **Adjust AI models** to use different Bedrock models or external APIs
- **Add data validation** nodes for better error handling
- **Implement retry logic** for failed API calls

**Important Notes**

Service Limitations
- This template uses community nodes (Feishu Lite) and requires self-hosted n8n
- **Geo-restrictions** may apply to AWS Bedrock models in certain regions
- **Rate limits** may affect high-frequency posting; adjust scheduling accordingly
- **Content moderation** is recommended before automated posting
- **Cost considerations**: each AI analysis costs approximately $0.01-0.05 USD per news item

Troubleshooting Common Issues

AWS Bedrock Issues:
- **Model not found**: ensure Claude 3 Sonnet access is approved in your region
- **Access denied**: verify IAM permissions include Bedrock service access
- **Rate limiting**: implement retry logic or reduce analysis frequency

Feishu Integration Issues:
- **Authentication failed**: check that the App Token and App Secret are correct
- **Table not found**: verify the Table ID matches your Bitable URL
- **Automation not triggering**: ensure the webhook URL is accessible and returns a 200 status

LinkedIn Posting Issues:
- **OAuth2 errors**: re-authenticate the LinkedIn credentials
- **Posting failed**: verify company page admin permissions
- **Rate limits**: LinkedIn has daily posting limits for company pages

Security Best Practices
- **Never hardcode credentials** in workflow nodes
- **Use environment variables** for sensitive configuration
- **Regularly rotate API keys** and access tokens
- **Monitor API usage** to prevent unexpected charges
- **Implement error handling** for failed API calls
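On the n8n side, Flow 2 receives whatever the Feishu automation serializes for `{{record}}`. The exact payload shape depends on your Feishu configuration, so treat the paths below as placeholders and inspect a real execution to confirm them; a small Code node after the Webhook node can then normalize the record before content generation:

```javascript
// Hypothetical n8n Code node: normalize the approved record POSTed by
// the Feishu automation. The body structure here is assumed.
const record = $json.body?.record ?? $json.body ?? {};

return [
  {
    json: {
      title: record.title,
      summary: record.summary,
      link: record.link,
      keywords: record.keywords,
      rating: record.rating,
    },
  },
];
```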