by Avkash Kakdiya
**How it works**

This workflow enriches and personalizes your lead profiles by integrating HubSpot contact data, scraping social media information, and using AI to generate tailored outreach emails. It streamlines the process from contact capture to sending a personalized email, all automatically. The system fetches new or updated HubSpot contacts, verifies and enriches their Twitter/LinkedIn data via Phantombuster, merges the profile and engagement insights, and finally generates a customized email ready for outreach.

**Step-by-step**

1. Trigger & Input
- HubSpot Contact Webhook: Fires when a contact is created or updated in HubSpot.
- Fetch Contact: Pulls the full contact details (email, name, company, and social profiles).
- Update Google Sheet: Logs Twitter/LinkedIn usernames and marks their tracking status.

2. Validation
- Validate Twitter/LinkedIn Exists: Checks if the contact has a valid social profile before proceeding to scraping.

3. Social Media Scraping (via Phantombuster)
- Launch Profile Scraper & Launch Tweet Scraper: Triggers Phantombuster agents to fetch profile details and recent tweets.
- Wait Nodes: Ensures scraping completes (30-60 seconds).
- Fetch Profile/Tweet Results: Retrieves output files from Phantombuster.
- Extract URL: Parses the job output to extract the downloadable .json or .csv data file link.

4. Data Download & Parsing
- Download Profile/Tweet Data: Downloads scraped JSON files.
- Parse JSON: Converts the raw file into structured data for processing.

5. Data Structuring & Merging
- Format Profile Fields: Maps stats like bio, followers, verified status, likes, etc.
- Format Tweet Fields: Captures tweet data and associates it with the lead's email.
- Merge Data Streams: Combines tweet and profile datasets.
- Combine All Data: Produces a single, clean object containing all relevant lead details (a rough sketch follows below).

6. AI Email Generation & Delivery
- Generate Personalized Email: Feeds the merged data into OpenAI GPT (via LangChain) to craft a custom HTML email using your brand details.
- Parse Email Content: Cleans AI output into structured subject and body fields.
- Sends Email: Automatically delivers the personalized email to the lead via Gmail.

**Benefits**

- Automated Lead Enrichment: Combines CRM and real-time social media data with zero manual research.
- Personalized Outreach at Scale: AI crafts unique, relevant emails for each contact.
- Improved Engagement Rates: Targeted messages based on actual social activity and profile details.
- Seamless Integration: Works directly with HubSpot, Google Sheets, Gmail, and Phantombuster.
- Time & Effort Savings: Replaces hours of manual lookup and email drafting with an end-to-end automated flow.
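The "Combine All Data" step in this lead-enrichment workflow is typically a small n8n Code node. A minimal sketch of that idea, assuming the upstream streams each carry an `email` field plus either profile or tweet fields (the field names are assumptions, not the template's exact schema):

```javascript
// Illustrative "Combine All Data" step: join profile items and tweet items on the lead's email.
const items = $input.all().map(i => i.json);

const byEmail = {};
for (const item of items) {
  const key = (item.email || '').toLowerCase();
  if (!key) continue;                                       // skip items without an email
  byEmail[key] = { ...(byEmail[key] || {}), ...item };      // shallow-merge profile + tweet data
}

return Object.values(byEmail).map(lead => ({ json: lead }));
```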
by Pawan
This template sets up a scheduled automation that scrapes the latest news from The Hindu website, uses a Google Gemini AI Agent to filter and analyze the content for relevance to competitive-exam syllabi such as the UPSC Civil Services Examination (CSE), and compiles a structured daily digest directly into a Google Sheet. It saves hours of manual reading and note-taking by providing concise summaries, subject categorization, and explicit UPSC importance notes.

**Who's it for**

This workflow is essential for:
- UPSC/CSE Aspirants who require a curated, focused, and systematic daily current affairs digest.
- Coaching Institutes aiming to instantly generate structured, high-quality study material for their students.
- Educators and Content Creators focused on Governance, Economy, International Relations, and Science & Technology.

**How it works / What it does**

This workflow runs automatically every morning (scheduled for 7 AM by default) to generate a ready-to-study current affairs document.
1. Scraping: The Schedule Trigger fires an HTTP Request to fetch the latest news links from The Hindu's front page.
2. Data Curation: The HTML and Code in JavaScript nodes work together to extract and pair every article URL with its title (a rough sketch follows this section).
3. Content Retrieval: For each identified link, a second HTTP Request node fetches the entire article body.
4. AI Analysis and Filtering: The AI Agent uses a detailed prompt and the Google Gemini Chat Model to perform two critical tasks:
   - Filter: It filters out all irrelevant articles (e.g., sports results, local crime) to keep only the 5-6 most important UPSC-relevant pieces (Polity, Economy, IR, etc.).
   - Analyze: For the selected articles, it generates a Brief Summary, identifies the Main Subject, and clearly articulates Why it is Important for the UPSC Exam.
5. Storage: The AI Agent calls the integrated Google Sheets Tool to automatically append the structured, analyzed data into your designated Google Sheet, creating your daily ready-made notes.

**Requirements**

To deploy this workflow, you need:
- n8n Account (Cloud or self-hosted).
- Google Gemini API Key: For connecting the Google Gemini Chat Model and powering the AI Agent.
- Google Sheets Credentials: For reading/writing the final compiled digest.
- Target Google Sheet: A spreadsheet with the following columns: Date, URL, Subject, Brief Summary, and What is Important.

**How to set up**

- **Credentials Setup:** Connect your Google Gemini and Google Sheets accounts via the n8n Credentials Manager.
- **Google Sheet Linking:** In the **Append row in sheet** and **Append row in sheet in Google Sheets1** nodes, replace the placeholder IDs and GIDs with the actual ID and sheet name of your dedicated UPSC notes spreadsheet.
- **Scheduling:** Adjust the time in the **Schedule Trigger: Daily at 7 AM** node if you want the daily analysis to run at a different hour.
- **AI Customization (Optional):** You can refine the System Message in the **AI Agent: Filter & Analyze UPSC News** node to focus the analysis on specific exam phases (e.g., Prelims only) or adjust the priority of subjects.
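The Data Curation pairing step can be approximated with a short Code node. A minimal sketch under the assumption that the upstream HTML node produced two parallel arrays, `links` and `titles` (those field names and the `/news/` filter are illustrative, not the template's actual code):

```javascript
// Hypothetical pairing step: zip extracted hrefs and titles into one item per article.
const { links = [], titles = [] } = $input.first().json;

const articles = links
  .map((url, i) => ({
    url: url.startsWith('http') ? url : `https://www.thehindu.com${url}`, // normalize relative links
    title: (titles[i] || '').trim(),
  }))
  .filter(a => a.title && a.url.includes('/news/'));   // keep only plausible article links (illustrative filter)

return articles.map(a => ({ json: a }));
```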
by phil
This workflow is designed for B2B professionals to automatically identify and summarize business opportunities from a company's website. By leveraging Bright Data's Web Unblocker and advanced AI models from OpenRouter, it scrapes relevant company pages ("About Us", "Team", "Contact"), analyzes the content for potential pain points and needs, and synthesizes a concise, actionable report. The final output is formatted for direct use in documents, making it an ideal tool for sales, marketing, and business development teams to prepare for prospecting calls or personalize outreach.

**Who's it for**

This template is ideal for:
- **B2B Sales Teams:** Quickly find and qualify leads by identifying specific business needs before a cold call.
- **Marketing Agencies:** Develop personalized content and value propositions based on a prospect's public website information.
- **Business Development Professionals:** Efficiently research potential partners or clients and discover collaboration opportunities.
- **Entrepreneurs:** Gain a competitive edge by understanding a competitor's strategy or a potential client's operations.

**How it works**

1. The workflow is triggered by a chat message, typically a URL from an n8n chat application.
2. It uses Bright Data to scrape the website's sitemap and extract all anchor links from the homepage.
3. An AI agent analyzes the extracted URLs to filter for pages relevant to company information (e.g., "about-us", "team", "contact").
4. The workflow then scrapes the content of these specific pages.
5. A second AI agent summarizes the content of each page, looking for business opportunities related to AI-powered automation.
6. The summaries are merged and a final AI agent synthesizes them into a single, cohesive report, formatted for easy reading in a Google Doc.

**How to set up**

1. Bright Data Credentials: Sign up for a Bright Data account and create a Web Unblocker zone. In n8n, create new Bright Data API credentials and copy your API key.
2. OpenRouter Credentials: Create an account on OpenRouter and get your API key. In n8n, create new OpenRouter API credentials and paste your key.
3. Chat Trigger Node: Configure the "When chat message received" node. Copy the production webhook URL to integrate with your preferred chat platform.

**Requirements**

- An active n8n instance.
- A Bright Data account with a Web Unblocker zone.
- An OpenRouter account with API access.

**How to customize this workflow**

- **AI Prompting:** Edit the "systemMessage" parameters in the "AI Agent", "AI Agent1", and "AI Agent2" nodes to change the focus of the opportunity analysis. For example, modify the prompts to search for specific technologies, industry jargon, or different types of business challenges.
- **Model Selection:** The workflow uses openai/o4-mini and openai/gpt-5. You can change these to other models available on OpenRouter by editing the model parameter in the OpenRouter Chat Model nodes.
- **Scraping Logic:** The extract url node uses a regular expression to find anchor (`<a>`) tags; a rough sketch of this approach follows below. It can be modified or replaced with an HTML Extraction node to target different elements or content on a website.
- **Output Format:** The final output is designed for Google Docs. You can modify the last "AI Agent2" node's prompt to generate the output in a different format, such as a simple JSON object or a markdown list.

Phil | Inforeole. Contact us to automate your processes.
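The anchor-extraction idea could look roughly like the Code-node sketch below; the exact regular expression in the template's extract url node may differ, and the incoming `data` field name is an assumption.

```javascript
// Illustrative anchor extraction: pull href values out of raw homepage HTML.
const html = $input.first().json.data || '';

// Match href="..." inside <a> tags; crude but workable for link discovery.
const hrefRegex = /<a\b[^>]*\bhref=["']([^"'#]+)["']/gi;

const urls = new Set();
let match;
while ((match = hrefRegex.exec(html)) !== null) {
  urls.add(match[1]);
}

return [...urls].map(url => ({ json: { url } }));
```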
by Bhuvanesh R
Your cold email is now researched: this pipeline finds specific bottlenecks on prospect websites and instantly crafts an irresistible pitch.

**Problem Statement**

Traditional high-volume cold email outreach is stuck on generic personalization (e.g., "Love your website!"). Sales teams, especially those selling high-value AI receptionists, struggle to efficiently find the one Unique Operational Hook (like manual scheduling dependency or high call volume) needed to make the pitch relevant. This forces reliance on expensive, slow manual research, leading to low reply rates and inefficient spending on bulk outreach tools.

**Solution**

This workflow deploys a resilient Dual-AI Personalization Pipeline that runs on a batch basis. It uses the Filter (Qualified Leads) node as a cost-saving quality gate to prevent processing bad leads. It executes a targeted deep dive on successful leads, using GPT-4 for analytical insight extraction and Claude Sonnet for coherent, human-like copy generation. The entire process outputs campaign-ready data directly to Google Sheets and sends a critical QA draft via Gmail.

**How It Works (Multi-Step Execution)**

1. Ingestion and Cost Control (The Quality Gate)
- Trigger and Ingestion: The workflow starts via a **Manual Trigger**, pulling leads directly from **Get All Leads** (Google Sheets).
- Cost Filtering: The **Filter (Qualified Leads)** node removes leads that lack a working email or website URL.
- Execution Isolation: The **Loop Over Leads** node initiates individual processing. The **Capture Lead Data (Set)** node immediately captures and locks down the original lead context for stability throughout the loop.
- Hybrid Scraping: The **Scrape Site (HTTP Request)** and **Extract Text & Links (HTML)** nodes execute the hybrid scraping strategy, simultaneously capturing website text and external links.
- Data Shaping & Status: The **Filter Social & Status (Code)** node is the control center. It filters links, bundles the context, and, critically, assigns a status of 'Success' or 'Scrape Fail'.
- Cost Control Branch: The **If (IF node)** checks this status. Items with 'Scrape Fail' bypass all AI steps (saving 100% of AI token costs) and jump directly to **Log Final Result**. Successful items proceed to the AI core.

2. Dual-AI Coherence & Dispatch (The Executive Output)
- Analytical Synthesis: The **Summarize Website (OpenAI)** node uses GPT-4 to synthesize the full context and extract the Unique Operational Hook (e.g., manual booking overhead).
- Coherent Copy Generation: The **Generate Subject & Body (Anthropic)** node uses the Claude Sonnet model to generate the subject and the multi-line body, guaranteeing coherence by creating both simultaneously in a single JSON output.
- Final Parsing: The **Parse AI Output (Code)** node reliably strips markdown wrappers and extracts the clean subject and body strings (see the sketch after the configuration section).
- Final Delivery: The data is logged via **Log Final Result** (Google Sheets), and the completed email is sent to the user via **Create a draft** (Gmail) for final quality assurance before sending.

**Setup Steps**

Before running the workflow, ensure these credentials and data structures are correctly configured:

Credentials
- **Anthropic:** Configure credentials for the language model (Claude Sonnet).
- **OpenAI:** Configure credentials for the analytical model (GPT-4/GPT-4o).
- **Google Services:** Set up OAuth2 credentials for Google Sheets (input/output) and Gmail (draft QA and completion alert).
Configuration
- **Google Sheet Setup:** Your input sheet must include the columns email, website_url, and an empty Icebreaker column for initial filtering.
- **HTTP URL:** Verify that the **Scrape Site** node's URL parameter is set to pull the website URL from the stabilized data structure: `={{ $json.website_url }}`.
- **AI Prompts:** Ensure the Anthropic prompt contains your current irresistible sales offer and the required nested JSON output structure.

**Benefits**

- Coherence Guarantee: A single Anthropic node generates both the subject and body, guaranteeing the message is perfectly aligned and hits the same unique insight.
- Maximum Cost Control: The IF node prevents spending tokens on bad or broken websites, making the campaign highly budget-efficient.
- Deep Personalization: Combines website text and social media links, creating an icebreaker that implies thorough, manual research.
- High Reliability: Uses robust Code nodes for data structuring and parsing, ensuring the workflow runs consistently under real-world conditions without crashing.
- Zero-Risk QA: The final Gmail (Create a draft) step ensures human review of the generated copy before any cold emails are sent out.
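The Parse AI Output step described above is a common pattern when an LLM returns JSON wrapped in markdown fences. A minimal sketch, not the template's exact code; the `subject`/`body` keys are assumed from the description:

```javascript
// Hypothetical parser: strip markdown code fences from the model reply and pull out subject/body.
const raw = $input.first().json.text || $input.first().json.output || '';

const cleaned = String(raw)
  .replace(/^\s*`{3}(?:json)?\s*/i, '')   // leading fence
  .replace(/\s*`{3}\s*$/, '')             // trailing fence
  .trim();

let subject = '';
let body = '';
try {
  const parsed = JSON.parse(cleaned);
  subject = parsed.subject || '';
  body = parsed.body || '';
} catch (e) {
  body = cleaned;   // fall back to the raw text so the run does not crash
}

return [{ json: { subject, body } }];
```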
by Mariela Slavenova
This template crawls a website from its sitemap, deduplicates URLs in Supabase, scrapes pages with Crawl4AI, cleans and validates the text, then stores content and metadata in a Supabase vector store using OpenAI embeddings. It's a reliable, repeatable pipeline for building searchable knowledge bases, SEO research corpora, and RAG datasets.

**Good to know**
- Built-in de-duplication via a scrape_queue table (status: pending/completed/error).
- Resilient flow: waits, retries, and marks failed tasks.
- Costs depend on Crawl4AI usage and OpenAI embeddings.
- Replace any placeholders (API keys, tokens, URLs) before running.
- Respect website robots/ToS and applicable data laws when scraping.

**How it works**
1. Sitemap fetch & parse: Load sitemap.xml, extract all URLs.
2. De-dupe: Normalize URLs, check the Supabase scrape_queue, and insert only new ones (a rough sketch follows below).
3. Scrape: Send URLs to Crawl4AI; poll task status until completed.
4. Clean & score: Remove boilerplate/markup, detect content type, compute quality metrics, extract metadata (title, domain, language, length).
5. Chunk & embed: Split text, create OpenAI embeddings.
6. Store: Upsert into the Supabase vector store (documents) with metadata; update job status.

**Requirements**
- Supabase (Postgres + Vector extension enabled)
- Crawl4AI API key (or header auth)
- OpenAI API key (for embeddings)
- n8n credentials set for HTTP, Postgres/Supabase

**How to use**
1. Configure credentials (Supabase/Postgres, Crawl4AI, OpenAI).
2. (Optional) Run the provided SQL to create scrape_queue and documents.
3. Set your sitemap URL in the HTTP Request node.
4. Execute the workflow (manual trigger) and monitor Supabase statuses.
5. Query your documents table or vector store from your app/RAG stack.

**Potential Use Cases**

This automation is ideal for:
- Market research teams collecting competitive data
- Content creators monitoring web trends
- SEO specialists tracking website content updates
- Analysts gathering structured data for insights
- Anyone needing reliable, structured web content for analysis

Need help customizing? Contact me for consulting and support: LinkedIn
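The de-dupe step can be implemented with a small Code node that normalizes URLs before they are checked against scrape_queue. A minimal sketch under that assumption; the normalization rules and the `url`/`loc` field names are illustrative, not the template's exact logic:

```javascript
// Illustrative URL normalization before de-duplication against scrape_queue.
const items = $input.all().map(i => i.json);

function normalizeUrl(raw) {
  const u = new URL(raw);
  u.hash = '';                                     // drop fragments
  u.search = '';                                   // drop query strings (illustrative choice)
  const path = u.pathname.replace(/\/+$/, '');     // strip trailing slashes
  return `${u.protocol}//${u.hostname.toLowerCase()}${path || '/'}`;
}

const seen = new Set();
const out = [];
for (const item of items) {
  try {
    const url = normalizeUrl(item.url || item.loc);   // sitemap entries often expose `loc`
    if (!seen.has(url)) {
      seen.add(url);
      out.push({ json: { url, status: 'pending' } });
    }
  } catch (e) {
    // Skip malformed URLs rather than failing the whole batch.
  }
}
return out;
```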
by Growth AI
**SEO Content Generation Workflow - n8n Template Instructions**

**Who's it for**

This workflow is designed for SEO professionals, content marketers, digital agencies, and businesses who need to generate optimized meta tags, H1 headings, and content briefs at scale. Perfect for teams managing multiple clients or large keyword lists who want to automate competitor analysis and SEO content creation while maintaining quality and personalization.

**How it works**

The workflow automates the entire SEO content creation process by analyzing your target keywords against top competitors, then generating optimized meta elements and comprehensive content briefs. It uses AI-powered analysis combined with real competitor data to create SEO-friendly content that's tailored to your specific business context. The system processes keywords in batches, performs Google searches, scrapes competitor content, analyzes heading structures, and generates personalized SEO content using your company's database information for maximum relevance.

**Requirements**

Required services and credentials:
- **Google Sheets API**: For reading configuration and updating results
- **Anthropic API**: For AI content generation (Claude Sonnet 4)
- **OpenAI API**: For embeddings and vector search
- **Apify API**: For Google search results
- **Firecrawl API**: For competitor website scraping
- **Supabase**: For vector database (optional but recommended)

Template spreadsheet: Copy this template spreadsheet and configure it with your information: Template Link

**How to set up**

Step 1: Copy and Configure Template
1. Make a copy of the template spreadsheet.
2. Fill in the Client Information sheet:
   - Client name: Your company or client's name
   - Client information: Brief business description
   - URL: Website address
   - Supabase database: Database name (prevents AI hallucination)
   - Tone of voice: Content style preferences
   - Restrictive instructions: Topics or approaches to avoid
3. Complete the SEO sheet with your target pages:
   - Page: Page you're optimizing (e.g., "Homepage", "Product Page")
   - Keyword: Main search term to target
   - Awareness level: User familiarity with your business
   - Page type: Category (homepage, blog, product page, etc.)
Step 2: Import Workflow
1. Import the n8n workflow JSON file.
2. Configure all required API credentials in n8n:
   - Google Sheets OAuth2
   - Anthropic API key
   - OpenAI API key
   - Apify API key
   - Firecrawl API key
   - Supabase credentials (if using the vector database)

Step 3: Test Configuration
1. Activate the workflow.
2. Send your Google Sheets URL to the chat trigger.
3. Verify that all sheets are readable and credentials work.
4. Test with a single keyword row first.

**Workflow Process Overview**

Phase 0: Setup and Configuration
- Copy the template spreadsheet
- Configure client information and SEO parameters
- Set up API credentials in n8n

Phase 1: Data Input and Processing
- Chat trigger receives the Google Sheets URL
- System reads client configuration and SEO data
- Filters valid keywords and empty H1 fields
- Initiates batch processing

Phase 2: Competitor Research and Analysis
- Searches Google for the top 10 results per keyword
- Scrapes the first 5 competitor websites
- Extracts heading structures (H1-H6); a rough sketch of this extraction appears at the end of these instructions
- Analyzes competitor meta tags and content organization

Phase 3: Meta Tags and H1 Generation
- AI analyzes keyword context and competitor data
- Accesses the client database for personalization
- Generates an optimized meta title (65 chars max)
- Creates a compelling meta description (165 chars max)
- Produces a user-focused H1 (70 chars max)

Phase 4: Content Brief Creation
- Analyzes search intent percentages
- Develops a content strategy based on competitor analysis
- Creates a detailed MECE page structure
- Suggests rich media elements
- Provides writing recommendations and detail level scoring

Phase 5: Data Integration and Updates
- Combines all generated content into a unified structure
- Updates Google Sheets with new SEO elements
- Preserves existing data while adding new content
- Continues batch processing for remaining keywords

**How to customize the workflow**

Adjusting AI Models
- Replace Anthropic Claude with other LLM providers
- Modify system prompts for different content styles
- Adjust character limits for meta elements

Modifying Competitor Analysis
- Change the number of competitors analyzed (currently 5)
- Adjust scraping parameters in the Firecrawl nodes
- Modify heading extraction logic in the JavaScript nodes

Customizing Output Format
- Update the Google Sheets column mapping in the Code node
- Modify the structured output parser schema
- Change the batch processing size in the Split in Batches node

Adding Quality Controls
- Insert validation nodes between phases
- Add error handling and retry logic
- Implement content quality scoring

Extending Functionality
- Add keyword research capabilities
- Include image optimization suggestions
- Integrate social media content generation
- Connect to CMS platforms for direct publishing

**Best Practices**
- Test with small batches before processing large keyword lists
- Monitor API usage and costs across all services
- Regularly update system prompts based on output quality
- Maintain clean data in your Google Sheets template
- Use descriptive node names for easier workflow maintenance

**Troubleshooting**
- **API Errors**: Check credential configuration and usage limits
- **Scraping Failures**: Firecrawl nodes have error handling enabled
- **Empty Results**: Verify keyword formatting and competitor availability
- **Sheet Updates**: Ensure proper column mapping in the final Code node
- **Processing Stops**: Check batch processing limits and timeout settings
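Heading extraction from scraped competitor pages is handled by JavaScript Code nodes. As an illustration of that idea (not the template's actual node code, and the incoming `html` field name is an assumption), a Code node could pull H1-H6 headings out of raw HTML like this:

```javascript
// Illustrative heading extraction from scraped competitor HTML.
const html = $input.first().json.html || '';

const headings = [];
const headingRegex = /<h([1-6])[^>]*>([\s\S]*?)<\/h\1>/gi;

let match;
while ((match = headingRegex.exec(html)) !== null) {
  const text = match[2].replace(/<[^>]+>/g, '').replace(/\s+/g, ' ').trim(); // strip inner tags
  if (text) {
    headings.push({ level: Number(match[1]), text });
  }
}

return [{ json: { headings } }];
```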
by Growth AI
**SEO Content Generation Workflow (Basic Version) - n8n Template Instructions**

**Who's it for**

This workflow is designed for SEO professionals, content marketers, digital agencies, and businesses who need to generate optimized meta tags, H1 headings, and content briefs at scale. Perfect for teams managing multiple clients or large keyword lists who want to automate competitor analysis and SEO content creation without the complexity of vector databases.

**How it works**

The workflow automates the entire SEO content creation process by analyzing your target keywords against top competitors, then generating optimized meta elements and comprehensive content briefs. It uses AI-powered analysis combined with real competitor data to create SEO-friendly content that's tailored to your specific business context. The system processes keywords in batches, performs Google searches, scrapes competitor content, analyzes heading structures, and generates personalized SEO content using your company information for maximum relevance.

**Requirements**

Required services and credentials:
- **Google Sheets API**: For reading configuration and updating results
- **Anthropic API**: For AI content generation (Claude Sonnet 4)
- **Apify API**: For Google search results
- **Firecrawl API**: For competitor website scraping

Template spreadsheet: Copy this template spreadsheet and configure it with your information: Template Link

**How to set up**

Step 1: Copy and Configure Template
1. Make a copy of the template spreadsheet.
2. Fill in the Client Information sheet:
   - Client name: Your company or client's name
   - Client information: Brief business description
   - URL: Website address
   - Tone of voice: Content style preferences
   - Restrictive instructions: Topics or approaches to avoid
3. Complete the SEO sheet with your target pages:
   - Page: Page you're optimizing (e.g., "Homepage", "Product Page")
   - Keyword: Main search term to target
   - Awareness level: User familiarity with your business
   - Page type: Category (homepage, blog, product page, etc.)
Step 2: Import Workflow
1. Import the n8n workflow JSON file.
2. Configure all required API credentials in n8n:
   - Google Sheets OAuth2
   - Anthropic API key
   - Apify API key
   - Firecrawl API key

Step 3: Test Configuration
1. Activate the workflow.
2. Send your Google Sheets URL to the chat trigger.
3. Verify that all sheets are readable and credentials work.
4. Test with a single keyword row first.

**Workflow Process Overview**

Phase 0: Setup and Configuration
- Copy the template spreadsheet
- Configure client information and SEO parameters
- Set up API credentials in n8n

Phase 1: Data Input and Processing
- Chat trigger receives the Google Sheets URL
- System reads client configuration and SEO data
- Filters valid keywords and empty H1 fields
- Initiates batch processing

Phase 2: Competitor Research and Analysis
- Searches Google for the top 10 results per keyword using Apify
- Scrapes the first 5 competitor websites using Firecrawl
- Extracts heading structures (H1-H6) from competitor pages
- Analyzes competitor meta tags and content organization
- Processes markdown content to identify heading hierarchies (a rough sketch appears after the customization section below)

Phase 3: Meta Tags and H1 Generation
- AI analyzes keyword context and competitor data using Claude
- Incorporates client information for personalization
- Generates an optimized meta title (65 characters maximum)
- Creates a compelling meta description (165 characters maximum)
- Produces a user-focused H1 (70 characters maximum)
- Uses structured output parsing for consistent formatting

Phase 4: Content Brief Creation
- Analyzes search intent percentages (informational, transactional, navigational)
- Develops a content strategy based on competitor analysis
- Creates a detailed MECE page structure with H2 and H3 sections
- Suggests rich media elements (images, videos, infographics, tables)
- Provides writing recommendations and detail level scoring (1-10 scale)
- Ensures SEO optimization while maintaining user relevance

Phase 5: Data Integration and Updates
- Combines all generated content into a unified structure
- Updates Google Sheets with new SEO elements
- Preserves existing data while adding new content
- Continues batch processing for remaining keywords

**Key Differences from Advanced Version**

This basic version focuses on core SEO functionality without additional complexity:
- **No Vector Database**: Removes Supabase integration for simpler setup
- **Streamlined Architecture**: Fewer dependencies and configuration steps
- **Essential Features Only**: Core competitor analysis and content generation
- **Faster Setup**: Reduced time to deployment
- **Lower Costs**: Fewer API services required

**How to customize the workflow**

Adjusting AI Models
- Replace Anthropic Claude with other LLM providers in the agent nodes
- Modify system prompts for different content styles or languages
- Adjust character limits for meta elements in the structured output parser

Modifying Competitor Analysis
- Change the number of competitors analyzed (currently 5) by adding/removing Scrape nodes
- Adjust scraping parameters in the Firecrawl nodes for different content types
- Modify heading extraction logic in the JavaScript Code nodes

Customizing Output Format
- Update the Google Sheets column mapping in the final Code node
- Modify the structured output parser schema for different data structures
- Change the batch processing size in the Split in Batches node

Adding Quality Controls
- Insert validation nodes between workflow phases
- Add error handling and retry logic to critical nodes
- Implement content quality scoring mechanisms

Extending Functionality
- Add keyword research capabilities with additional APIs
- Include image optimization suggestions
- Integrate social media content generation
- Connect to CMS platforms for direct publishing
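Since Firecrawl returns page content as markdown in this version, the heading-hierarchy step can be done with a short Code node. A minimal sketch, assuming the scraped markdown arrives on a `markdown` field (that field name is an assumption):

```javascript
// Illustrative parse of markdown headings (#, ##, ...) into a level/text hierarchy.
const markdown = $input.first().json.markdown || '';

const headings = markdown
  .split('\n')
  .map(line => line.match(/^(#{1,6})\s+(.*)$/))   // ATX-style headings only
  .filter(Boolean)
  .map(m => ({ level: m[1].length, text: m[2].trim() }));

return [{ json: { headings } }];
```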
**Best Practices**

Setup and Testing
- Always test with small batches before processing large keyword lists
- Monitor API usage and costs across all services
- Regularly update system prompts based on output quality
- Maintain clean data in your Google Sheets template

Content Quality
- Review generated content before publishing
- Customize system prompts to match your brand voice
- Use descriptive node names for easier workflow maintenance
- Keep competitor analysis current by running it regularly

Performance Optimization
- Process keywords in small batches to avoid timeouts
- Set appropriate retry policies for external API calls
- Monitor workflow execution times and optimize bottlenecks

**Troubleshooting**

Common issues and solutions:

API Errors
- Check credential configuration in n8n settings
- Verify API usage limits and billing status
- Ensure proper authentication for each service

Scraping Failures
- Firecrawl nodes have error handling enabled to continue on failures
- Some websites may block scraping; this is normal behavior
- Check whether competitor URLs are accessible and valid

Empty Results
- Verify keyword formatting in Google Sheets
- Ensure competitor websites contain the expected content structure
- Check whether meta tags are properly formatted in the system prompts

Sheet Update Errors
- Ensure proper column mapping in the final Code node
- Verify Google Sheets permissions and sharing settings
- Check that target sheet names match exactly

Processing Stops
- Review batch processing limits and timeout settings
- Check for errors in individual nodes using execution logs
- Verify all required fields are populated in the input data

**Template Structure**

Required sheets:
- Client Information: Business details and configuration
- SEO: Target keywords and page information
- Results Sheet: Where generated content will be written

Expected columns:
- **Keywords**: Target search terms
- **Description**: Brief page description
- **Type de page**: Page category
- **Awareness level**: User familiarity level
- **title, meta-desc, h1, brief**: Generated output columns

This streamlined version provides all essential SEO content generation capabilities while being easier to set up and maintain than the advanced version with vector database integration.
by Rahul Joshi
**Description**

Process new resumes from Google Drive, extract structured candidate data with AI, save it to Google Sheets, and auto-create a ClickUp hiring task. Gain a centralized, searchable candidate database and instant task kickoff, with no manual data entry.

**What This Template Does**
- Watches a Google Drive folder for new resume PDFs and triggers the workflow.
- Downloads the file and converts the PDF to clean, readable text.
- Analyzes resume text with an AI Resume Analyzer to extract structured candidate info (name, email, phone, experience, skills, education).
- Cleans and validates the AI JSON output for reliability (a rough sketch follows below).
- Appends or updates a candidate row in Google Sheets and creates a ClickUp hiring task.

**Key Benefits**
- Save hours with end-to-end, hands-off resume processing.
- Never miss a candidate: every upload triggers automatically.
- Keep a single source of truth in Sheets, always up to date.
- Kickstart hiring instantly with auto-created ClickUp tasks.
- Works with varied resume formats using AI extraction.

**Features**
- Google Drive "Watch for New Resumes" trigger (every minute).
- PDF-to-text extraction optimized for text-based PDFs.
- AI-powered resume parsing into standardized JSON fields.
- JSON cleanup and validation for safe storage.
- Google Sheets append-or-update for a central candidate database.
- ClickUp task creation with candidate-specific titles and assignment.

**Requirements**
- n8n instance (cloud or self-hosted); recommended n8n version 1.106.3 or higher.
- Google Drive access to a dedicated resumes folder (PDF resumes recommended).
- Google Sheets credential with edit access to the candidate database sheet.
- ClickUp workspace/project access to create tasks for hiring.
- AI service credentials for the Resume Analyzer step (add in n8n Credentials).

**Target Audience**
- HR and Talent Acquisition teams needing faster screening.
- Recruiters and staffing agencies handling high volumes.
- Startups and ops teams standardizing candidate intake.
- No-code/low-code builders automating hiring workflows.

**Step-by-Step Setup Instructions**
1. Connect Google Drive, Google Sheets, ClickUp, and your AI service in n8n Credentials.
2. Set the Google Drive "watched" folder (e.g., Resume_store).
3. Import the workflow, assign credentials to all nodes, and map your Sheets columns.
4. Adjust the ClickUp task details (title pattern, assignee, list).
5. Run once with a sample PDF to test, then enable scheduling (every 1 minute).
6. Optionally rename the email/task nodes for clarity (e.g., "Create Hiring Task in ClickUp").
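The JSON cleanup and validation step is typically a small Code node between the AI analyzer and the Sheets/ClickUp nodes. A minimal sketch, assuming the AI returns a JSON string on an `output` field; the field names and defaults here are illustrative, not the template's exact schema:

```javascript
// Hypothetical cleanup/validation of the AI Resume Analyzer output before storage.
const raw = $input.first().json.output || '';

// Strip markdown fences the model sometimes adds around JSON.
const cleaned = String(raw).replace(/^\s*`{3}(?:json)?/i, '').replace(/`{3}\s*$/, '').trim();

let candidate = {};
try {
  candidate = JSON.parse(cleaned);
} catch (e) {
  candidate = {};
}

// Basic validation with safe defaults so downstream nodes never receive undefined values.
const record = {
  name: (candidate.name || 'Unknown').trim(),
  email: /\S+@\S+\.\S+/.test(candidate.email || '') ? candidate.email.trim() : '',
  phone: (candidate.phone || '').replace(/[^\d+()\-\s]/g, ''),
  experience: candidate.experience || '',
  skills: Array.isArray(candidate.skills) ? candidate.skills.join(', ') : (candidate.skills || ''),
  education: candidate.education || '',
};

return [{ json: record }];
```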
by usamaahmed
**HR Resume Screening Workflow: Smart Hiring on Autopilot**

**Overview**

This workflow builds an AI-powered resume screening system inside n8n. It begins with Gmail and Form triggers that capture incoming resumes, then uploads each file to Google Drive for storage. The resume is downloaded and converted into plain text, where two branches run in parallel: one extracts structured contact details, and the other uses an AI agent to summarize education, job history, and skills while assigning a suitability score. A cleanup step normalizes the data before merging both outputs, and the final candidate record is saved into Google Sheets and Airtable, giving recruiters a centralized dashboard to identify top talent quickly and consistently.

**Prerequisites**

To run this workflow successfully, you'll need:
- **Gmail OAuth**: to read incoming resumes.
- **Google Drive OAuth**: to upload and download resume files.
- **Google Sheets OAuth**: to save structured candidate records.
- **Airtable Personal Access Token**: for dashboards and record-keeping.
- **OpenAI / OpenRouter API Key**: to run the AI summarizer and evaluator.

**Setup Instructions**
1. Import the Workflow: Clone or import the workflow into your n8n instance.
2. Add Credentials: Go to n8n > Credentials and connect Gmail, Google Drive, Google Sheets, Airtable, and OpenRouter/OpenAI.
3. Configure Key Nodes:
   - Gmail Trigger: Update filters.q with the job title you are hiring for (e.g., "Senior Software Engineer").
   - Google Drive Upload: Set the folderId where resumes will be stored.
   - Google Sheets Node: Link to your HR spreadsheet (e.g., "Candidates 2025").
   - Airtable Node: Select the correct base & table schema for candidate records.
4. Test the Workflow: Send a test resume (via email or form) and check Google Sheets & Airtable for structured candidate data.
5. Go Live: Enable the workflow. It will now run continuously and process new resumes as they arrive.

**End-to-End Workflow Walkthrough**

Section 1 - Entry & Intake
Nodes:
- Gmail Trigger: Polls the inbox every minute, captures job application emails, and downloads resume attachments (CV0, CV1, ...).
- Form Trigger: Alternate entry for resumes submitted via a careers page or job portal.
Quick understanding: Think of this section as the front desk of recruitment; resumes arrive either by email or online form, and the system immediately grabs them for processing.

Section 2 - File Management
Nodes:
- Upload File (Google Drive): Saves the incoming resume into a structured Google Drive folder, naming it after the applicant.
- Download File (Google Drive): Retrieves the stored resume file for further processing.
- Extract from File: Converts the resume (PDF/DOC) into plain text so the AI and extractors can work with it.
Quick understanding: This is your digital filing room. Every resume is safely stored, then converted into a readable format for the hiring system.

Section 3 - AI Processing (Parallel Analysis)
Nodes:
- Information Extractor: Pulls structured contact information (candidate name, email, and phone number) using regex validation and schema rules.
- AI Agent (LangChain + OpenRouter): Reads the full CV and outputs educational qualifications, job history, skills set, a candidate evaluation score (1-10), and a justification for the score.
Quick understanding: Imagine having two assistants working in parallel: one quickly extracts basic contact info, while the other deeply reviews the CV and gives an evaluation.
Section 4 - Data Cleanup & Merging
Nodes:
- Edit Fields: Standardizes the AI Agent's output into a consistent field (output).
- Code (JS Parsing & Cleanup): Converts the AI's free-text summary into clean JSON fields (education, jobHistory, skills, score, justification); a rough sketch follows below.
- Merge: Combines the structured contact info with the AI's evaluation into a single candidate record.
Quick understanding: This is like the data cleaning and reporting team, making sure all details are neat, structured, and merged into one complete candidate profile.

Section 5 - Persistence & Dashboards
Nodes:
- Google Sheets (Append Row): Saves candidate details into a Google Sheet for quick team access.
- Airtable (Create Record): Stores the same structured data in Airtable, enabling dashboards, analytics, and ATS-like tracking.
Quick understanding: Think of this as your HR dashboard and database. Every candidate record is logged in both Google Sheets and Airtable, ready for filtering, reporting, or further action.

**Workflow Overview Table**

| Section | Key Roles / Nodes | Model / Service | Purpose | Benefit |
| --- | --- | --- | --- | --- |
| Entry & Intake | Gmail Trigger, Form Trigger | Gmail API / Webhook | Capture resumes from email or forms | Resumes collected instantly from multiple sources |
| File Management | Google Drive Upload, Google Drive Download, Extract from File | Google Drive + n8n Extract | Store resumes & convert to plain text | Centralized storage + text extraction for processing |
| AI Processing | Information Extractor, AI Agent (LangChain + OpenRouter) | Regex + OpenRouter AI (gpt-oss-20b, free) | Extract contact info + AI CV analysis | Candidate details + score + justification generated automatically |
| Data Cleanup & Merge | Edit Fields, Code (JS Parsing & Cleanup), Merge | n8n native + regex parsing | Standardize and merge outputs | Clean, structured JSON record with all candidate info |
| Persistence Layer | Google Sheets Append Row, Airtable Create Record | Google Sheets + Airtable APIs | Store structured candidate data | HR dashboards & ATS-ready records for easy review and analytics |
| Execution Flow | All connected | Gmail + Drive + Sheets + Airtable + AI | End-to-end automation | Automated resume -> structured record -> recruiter dashboards |

**Workflow Output Overview**

Each candidate's data is standardized into the following fields: Candidate Name, Candidate Email, Contact Number, Educational Qualifications, Job History, Skills Set, AI Score (1-10), and Justification. Example (Google Sheet row).

**Benefits of This Workflow at a Glance**
- **Lightning-Fast Screening**: Processes hundreds of resumes in minutes instead of hours.
- **AI-Powered Evaluation**: Automatically summarizes candidate education, work history, and skills, and gives a suitability score (1-10) with justification.
- **Centralized Storage**: Every resume is securely saved in Google Drive for easy access and record-keeping.
- **Data-Ready Outputs**: Structured candidate profiles go straight into Google Sheets and Airtable, ready for dashboards and analytics.
- **Consistency & Fairness**: Standardized AI scoring ensures every candidate is evaluated on the same criteria, reducing human bias.
- **Flexible Intake**: Works with both Gmail (email applications) and Form submissions (job portals or career pages).
- **Recruiter Productivity Boost**: Frees HR teams from manual extraction and data entry, allowing them to focus on interviewing and hiring the best talent.
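The Code (JS Parsing & Cleanup) node described above can be approximated with a few labelled-section regexes. A minimal sketch, assuming the AI Agent writes its summary under headings like "Education", "Job History", "Skills", "Score", and "Justification" (those labels are assumptions about the prompt, not guaranteed):

```javascript
// Hypothetical parse of the AI Agent's free-text summary into structured fields.
const text = $input.first().json.output || '';

// Grab the text that follows a labelled heading, up to the next heading or end of text.
function section(label) {
  const re = new RegExp(`${label}\\s*[:\\-]\\s*([\\s\\S]*?)(?=\\n[A-Z][A-Za-z ]+\\s*[:\\-]|$)`, 'i');
  const m = text.match(re);
  return m ? m[1].trim() : '';
}

const scoreMatch = text.match(/score\s*[:\-]?\s*(\d{1,2})/i);

return [{
  json: {
    education: section('Education'),
    jobHistory: section('Job History'),
    skills: section('Skills'),
    score: scoreMatch ? Number(scoreMatch[1]) : null,
    justification: section('Justification'),
  },
}];
```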
**Practical HR Use Case**

"Screen resumes for a Senior Software Engineer role and shortlist top candidates."
1. Gmail Trigger: Captures incoming job applications with CVs attached.
2. Google Drive: Stores resumes for record-keeping.
3. Extract from File: Converts CVs into plain text.
4. Information Extractor: Pulls candidate name, email, and phone number.
5. AI Agent: Summarizes education, job history, and skills, and assigns a suitability score (1-10).
6. Code & Merge: Cleans and combines outputs into a structured candidate profile.
7. Google Sheets: Logs candidate data for quick HR review.
8. Airtable: Builds dashboards to filter and identify top-scoring candidates.

Result: HR instantly sees structured candidate records, filters by score, and focuses interviews on the best talent.
by Artem Boiko
Estimate embodied carbon (CO2e) for grouped BIM/CAD elements. The workflow accepts an existing XLSX (grouped element data) or, if missing, can trigger a local RvtExporter.exe to generate one. It detects category fields, filters out non-building elements, infers aggregation rules with AI, computes CO2 using densities and emission factors, and exports a multi-sheet Excel plus a clean HTML report.

**What it does**
- **Reads or builds XLSX** (from your model via RvtExporter.exe when needed).
- **Finds category/volumetric fields**; separates building vs. annotation elements.
- Uses AI to infer aggregation rules (sum/mean/first) per header.
- **Groups** rows by your group_by field and aggregates totals.
- Prepares enhanced prompts and calls your LLM to classify materials and estimate CO2 (A1-A3 minimum); a rough sketch of the per-group CO2 arithmetic follows below.
- Computes project totals and generates a multi-sheet XLSX plus an HTML report with charts and hotspots.

**Prerequisites**
- **LLM credentials** for one provider (e.g., OpenAI, Anthropic, Gemini, Grok/OpenRouter). Enable one chat node and connect credentials.
- **Windows host** only if you want to auto-extract from .rvt/.ifc via RvtExporter.exe. If you already have an XLSX, Windows is not required.
- Optional: mapping/classifier files (XLSX/CSV/PDF) to improve material classification.

**How to use**
1. Import this JSON into n8n.
2. Open the Setup/Parameters node(s) and set:
   - project_file: path to your .rvt/.ifc or to an existing grouped *_rvt.xlsx
   - path_to_converter: C:\\DDC_Converter_Revit\\datadrivenlibs\\RvtExporter.exe (optional)
   - group_by: e.g., Type Name / Category / IfcType
   - sheet_name: default Summary (if reading from XLSX)
3. Enable one LLM node and attach credentials; keep the others disabled.
4. Execute (Manual Trigger). The workflow detects/builds the XLSX, analyzes, classifies, estimates CO2, then writes the Excel and opens the HTML report.

**Outputs**
- **Excel** (CO2_Analysis_Report_YYYY-MM-DD.xlsx, ~8 sheets): Executive Summary, All Elements, Material Summary, Category Analysis, Impact Analysis, Top 20 Hotspots, Data Quality, Recommendations.
- **HTML**: executive report with key KPIs and charts.
- Per-group fields include: Material (EU/DE/US), Quantity & Unit, Density, Mass, CO2 Factor, Total CO2 (kg/tonnes), CO2 %, Confidence, Assumptions.

**Notes & tips**
- Input quantities (volumes/areas) are already aggregated per group; do not multiply by element count.
- Use -no-collada upstream if you only need XLSX in extraction.
- Prefer ASCII-safe paths and ensure write permissions to the output folder.

**Categories**: Data Extraction, Files & Storage, ETL, CAD/BIM, Carbon/ESG

**Tags**: cad-bim, co2, carbon, embodied-carbon, lca, revit, ifc, xlsx, html-report, llm

**Author**: DataDrivenConstruction.io, info@datadrivenconstruction.io

**Consulting and Training**

We work with leading construction, engineering, consulting agencies and technology firms around the world to help them implement open data principles, automate CAD/BIM processing and build robust ETL pipelines. If you would like to test this solution with your own data, or are interested in adapting the workflow to real project tasks, feel free to contact us.

Docs & Issues: Full Readme on GitHub
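The core per-group CO2 arithmetic is simple: mass equals volume times density, and CO2e equals mass times emission factor. A minimal sketch of that calculation as an n8n Code node; the material table, densities, and factors shown are placeholders, not the workflow's material database:

```javascript
// Illustrative per-group embodied-carbon calculation (A1-A3 style):
// mass [kg] = volume [m3] * density [kg/m3]; CO2e [kg] = mass * emission factor [kgCO2e/kg].
const groups = $input.all().map(i => i.json);

// Placeholder lookup table; the real workflow derives material data from the LLM classification step.
const MATERIALS = {
  concrete: { density: 2400, co2Factor: 0.13 },
  steel:    { density: 7850, co2Factor: 1.85 },
  timber:   { density: 500,  co2Factor: 0.44 },
};

return groups.map(g => {
  const mat = MATERIALS[g.material] || { density: 0, co2Factor: 0 };
  const mass = (g.volume || 0) * mat.density;   // kg
  const totalCo2Kg = mass * mat.co2Factor;      // kg CO2e
  return {
    json: {
      ...g,
      mass_kg: Math.round(mass),
      total_co2_kg: Math.round(totalCo2Kg),
      total_co2_tonnes: +(totalCo2Kg / 1000).toFixed(2),
    },
  };
});
```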
by Yaron Been
This workflow provides automated access to the Settyan Flash V2.0.0 Beta.0 AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides a seamless integration for generation tasks within your n8n automation workflows.

**Overview**

This workflow automatically handles the complete generation process using the Settyan Flash V2.0.0 Beta.0 model. It manages API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation.

Model Description: Advanced AI model for automated processing and generation tasks.

**Key Capabilities**
- Specialized AI model with unique capabilities
- Advanced processing and generation features
- Custom AI-powered automation tools

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Settyan/flash-v2.0.0-beta.0 AI model
- **Settyan Flash V2.0.0 Beta.0**: The core AI model for generation
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

**How to Install**
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Replicate API: Add your Replicate API token to the 'Set API Token' node
3. Customize Parameters: Adjust the model parameters in the 'Set Other Parameters' node
4. Test the Workflow: Run the workflow with your desired inputs
5. Integrate: Connect this workflow to your existing automation pipelines

**Use Cases**
- **Specialized Processing**: Handle specific AI tasks and workflows
- **Custom Automation**: Implement unique business logic and processing
- **Data Processing**: Transform and analyze various types of data
- **AI Integration**: Add AI capabilities to existing systems and workflows

**Connect with Me**
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Replicate API: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #automation #digitalart #contentcreation #productivity #innovation
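The request, poll, and retry cycle the overview describes follows Replicate's standard predictions API. A hedged sketch of that pattern as an n8n Code node; the token, the model version hash, and the `prompt` input field are placeholders (check the model page for the real ones), and `this.helpers.httpRequest` is assumed to be the Code node's built-in HTTP helper in recent n8n versions:

```javascript
// Sketch of the Replicate create-then-poll pattern; values in CAPS are placeholders.
const token = 'REPLICATE_TOKEN';
const headers = { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' };

// 1. Create the prediction.
const created = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.replicate.com/v1/predictions',
  headers,
  body: { version: 'MODEL_VERSION_HASH', input: { prompt: $input.first().json.prompt } },
  json: true,
});

// 2. Poll until the prediction finishes (simple bounded retry loop).
let prediction = created;
for (let attempt = 0; attempt < 30 && !['succeeded', 'failed', 'canceled'].includes(prediction.status); attempt++) {
  await new Promise(r => setTimeout(r, 2000));   // wait 2s between polls
  prediction = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://api.replicate.com/v1/predictions/${created.id}`,
    headers,
    json: true,
  });
}

if (prediction.status !== 'succeeded') {
  throw new Error(`Replicate prediction ended with status: ${prediction.status}`);
}

return [{ json: { output: prediction.output } }];
```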
by Yaron Been
**Description**

This workflow automatically searches Airbnb for the best deals in your target locations and saves them for later reference. It helps travelers find affordable accommodations by continuously monitoring listings and identifying properties that match your budget and preferences.

**Overview**

This workflow automatically searches Airbnb for the best deals in your target locations and saves them for later reference. It uses Bright Data to scrape Airbnb listings and can filter results based on your preferences for price, amenities, and ratings.

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow.
- **Bright Data**: For scraping Airbnb listings without being blocked.
- **Spreadsheets/Databases**: For storing and comparing property deals.

**How to Install**
1. Import the Workflow: Download the .json file and import it into your n8n instance.
2. Configure Bright Data: Add your Bright Data credentials to the Bright Data node.
3. Set Up Data Storage: Configure where you want to store the Airbnb deals.
4. Customize: Specify locations, date ranges, and your budget constraints.

**Use Cases**
- **Travelers**: Find the best accommodation deals for your trips.
- **Digital Nomads**: Track affordable long-term stays in different locations.
- **Property Managers**: Monitor competitor pricing in your area.

**Connect with Me**
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #airbnb #travel #brightdata #dealhunting #vacationrentals #traveldeals #accommodationdeals #airbnbdeals #n8nworkflow #workflow #nocode #travelhacks #budgettravel #propertydeals #travelplanning #airbnbscraper #vacationplanning #bestairbnbs #travelautomation #affordableaccommodation #staydeals #traveltech #digitalnomad #accommodationfinder