by Cadu | Ei, Doc!
This n8n template demonstrates how to automate blog post creation with AI and WordPress.

This workflow is designed for creators who want to maintain an active blog without spending hours writing, while still taking advantage of SEO benefits. It connects OpenAI and WordPress to help you schedule AI-generated posts or create content from simple one- or two-word prompts.

**Good to know**
- At the time of writing, each AI-generated post will use your OpenAI API credits according to your model and usage tier.
- This workflow requires an active WordPress site with API access and your OpenAI API key.
- Setup is quick: in less than 5 minutes, you can have everything running smoothly.

**How it works**
- The workflow connects to your WordPress API and your OpenAI account.
- You can choose between two modes:
  - Scheduled mode: AI automatically creates and publishes posts based on your defined schedule.
  - Prompt mode: enter a short phrase (one or two words) and let AI generate a complete SEO-optimized post.
- The generated content is formatted and published directly to your WordPress blog (see the sketch below).
- You can easily customize prompts, post styles, or scheduling frequency to match your brand and goals.

**How to use**
- Start with the Manual Trigger node (as an example), or replace it with other triggers such as webhooks, cron jobs, or form submissions.
- Adjust your OpenAI prompts to fine-tune the tone, structure, or SEO focus of your posts.
- You can also extend this workflow to automatically share posts on social media or send notifications when new articles go live.

**Requirements**
- Active **OpenAI API key**
- **WordPress site** with API access

**Customising this workflow**
AI-powered content creation can be adapted for many purposes. Try using it for:
- Automated content calendars
- Generating product descriptions
- Creating newsletter drafts
- Building SEO-focused blogs effortlessly
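For reference, the publishing step boils down to one call against the WordPress REST API. Here is a minimal sketch of what the workflow's WordPress node does under the hood, assuming Basic auth with an Application Password; the site URL and credentials are placeholders:

```javascript
// Minimal sketch: publish an AI-generated post via the WordPress REST API.
// WP_URL, WP_USER, and WP_APP_PASSWORD are placeholders; create an
// Application Password under Users -> Profile in the WP admin.
const WP_URL = "https://example.com";
const WP_USER = "admin";
const WP_APP_PASSWORD = "xxxx xxxx xxxx xxxx";

async function publishPost(title, htmlContent) {
  const auth = Buffer.from(`${WP_USER}:${WP_APP_PASSWORD}`).toString("base64");
  const res = await fetch(`${WP_URL}/wp-json/wp/v2/posts`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Basic ${auth}`, // WordPress accepts Basic auth with an Application Password
    },
    body: JSON.stringify({ title, content: htmlContent, status: "publish" }),
  });
  if (!res.ok) throw new Error(`WordPress API error: ${res.status}`);
  return res.json(); // contains the new post's id and link
}
```

In the template itself this is handled by n8n's WordPress node, so the sketch is mainly useful if you want to customize the request, for example to set categories or a future publish date.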
by Msaid Mohamed el hadi
**Overview**
This workflow automates the discovery, extraction, enrichment, and storage of business information from Google Maps search queries using AI tools, scrapers, and Google Sheets.

It is ideal for:
- Lead generation agencies
- Local business researchers
- Digital marketing firms
- Automation & outreach specialists

**Tools & APIs Used**
- **Google Maps Search (via HTTP)**
- **Custom JavaScript Parsing**
- **URL Filtering & De-duplication**
- **Google Sheets (Read/Write)**
- **APIFY Actor** for business scraping
- **LangChain AI Agent** (OpenRouter - Gemini 2.5)
- **n8n Built-in Logic** (loops, conditions, aggregators)

**Workflow Summary**
1. **Trigger**: the automation starts on a schedule (every hour).
2. **Read Queries from Google Sheet**: loads unprocessed keywords from a Google Sheet tab named `keywords`.
3. **Loop Through Keywords**: each keyword is used to search Google Maps for relevant businesses.
4. **Extract URLs**: JavaScript parses the HTML to find all external website URLs in the search results (see the sketch at the end of this description).
5. **Clean URLs**: filters out irrelevant domains (e.g., Google-owned, example.com) and removes duplicates.
6. **Loop Through URLs**: for each URL, the workflow:
   - Checks if it already exists in the Google Sheet (to prevent duplication).
   - Calls the APIFY Actor to extract full business data.
   - Optionally uses the AI Agent (Gemini) to provide detailed insight on the business, including services, about, market position, weaknesses, AI suggestions, etc.
   - Converts the AI result (text) to a structured JSON object.
7. **Save to Google Sheet**: adds all extracted and AI-enriched business information to a separate tab (`Sheet1`).
8. **Mark Queries as Processed**: updates the original row in `keywords` to avoid reprocessing.

**Output Fields Saved**
The following information is saved per business:
- Business Name, Website, Email, Phone
- Address, City, Postal Code, Country, Coordinates
- Category, Subcategory, Services
- About Us, Opening Hours, Social Media Links
- Legal Links (Privacy, Terms)
- Logo, Languages, Keywords
- **AI-Generated Description**
- Google Maps URL

**Use Cases**
- Build a prospect database for B2B cold outreach.
- Extract local SEO insights per business.
- Feed CRMs or analytics systems with enriched business profiles.
- Automate market research for regional opportunity detection.

**Want a Similar Workflow?**
If you'd like a custom AI-powered automation like this for your business or agency, feel free to contact me: msaidwolfltd@gmail.com
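The "Extract URLs" and "Clean URLs" steps can be approximated with a short n8n Code node. A minimal sketch, assuming the raw HTML arrives on a `data` field and using an illustrative blocklist (adapt both to your setup):

```javascript
// n8n Code node sketch: pull external website URLs out of Google Maps HTML,
// drop irrelevant domains, and de-duplicate. Field names are assumptions.
const html = $input.first().json.data || "";

// Grab anything that looks like an http(s) URL.
const matches = html.match(/https?:\/\/[^\s"'<>\\]+/g) || [];

// Domains that are never a business's own website.
const blocklist = ["google.", "gstatic.", "googleapis.", "schema.org", "example.com"];

const cleaned = [...new Set(
  matches
    .map((url) => url.split(/[?#]/)[0]) // strip query strings and fragments
    .filter((url) => !blocklist.some((bad) => url.includes(bad)))
)];

// Emit one n8n item per unique URL for the downstream loop.
return cleaned.map((url) => ({ json: { url } }));
```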
by Sulieman Said
**How it Works**
This workflow automates the process of discovering companies in different cities, extracting their contact data, and storing it in Airtable.

1. **City Loop (Airtable → Google Maps API)**
   - Reads a list of cities from Airtable.
   - Uses each city combined with a search term (e.g., "SEO Agency, Berlin") to query Google Maps.
   - Marks processed cities as "checked" to allow safe restarts if interrupted.
2. **Business Discovery & Deduplication**
   - Searches for businesses via Google Maps Text Search.
   - Checks Airtable to avoid scraping the same company multiple times.
   - Fetches detailed info for each business via the Google Maps Place Details API.
3. **Impressum Extraction (Website → HTML Parsing)**
   - Builds an Impressum page URL for each business (see the sketch at the end of this description).
   - Requests the HTML and cleans out ads, headers, footers, etc.
   - Extracts relevant contact info using an AI extractor (OpenAI node).
4. **Contact Information Extraction**
   - Pulls out:
     - Decision Maker (name + position in one string, if available)
     - Email address (must be valid, containing @)
     - Phone number (international format if possible)
   - Filters out incomplete results (e.g., empty email).
5. **Database Storage**
   - Writes company data back into Airtable: company name, address, website, email, phone number, decision maker (name + position), and the search term & city used.

**Setup Steps**

1. **Prerequisites**
   - Google Maps API key with access to the Places API (Text Search + Place Details)
   - Airtable base with at least two tables:
     - Cities (columns: ID, City, Country, Status)
     - Companies (for scraped results)
   - OpenAI API key (for decision maker + contact extraction)
2. **Authentication**
   - Configure your Airtable API credentials in n8n.
   - Set up HTTP Query Auth with your Google Maps API key.
   - Add your OpenAI API key in the OpenAI Chat node.
3. **Configuration**
   - In the Airtable "Cities" table, list all cities you want to scrape.
   - Define your search term in the "Execute Workflow" node (e.g., SEO Agency).
   - Adjust the batch sizes and wait intervals if you want faster/slower scraping (the Google API has strict rate limits).
4. **Execution**
   - Start manually or from another workflow.
   - The workflow scrapes all companies in each city step by step.
   - It can be safely stopped and resumed: cities already marked as processed will be skipped.
5. **Results**
   - An enriched company dataset stored in Airtable, ready for CRM import, lead generation, or further automation.

**Tips & Notes**
- Always respect GDPR and local laws when handling scraped data.
- The workflow is modular: you can swap Airtable with Google Sheets, Notion, or a database of your choice.
- Add custom filters to limit results (e.g., only companies with websites).
- Use the sticky notes inside the workflow to understand each step (mandatory for template publishing).
- Keep an eye on **Google Places API costs**: queries are billed after the free quota. If you are still within the first 2 months of the Google Cloud Developer free trial, you can benefit from free credits.

Questions or custom requests? suliemansaid.business@gmail.com
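A minimal sketch of the Impressum URL construction and the email sanity check described above. The `/impressum` path is a common German-site convention but an assumption here (some sites use `/imprint` or `/kontakt`), and the field names are illustrative:

```javascript
// n8n Code node sketch: derive a likely Impressum URL from each business's
// website and drop rows whose extracted email is missing or implausible.
return $input.all().flatMap((item) => {
  const { website, email = "" } = item.json;
  if (!website || !email.includes("@")) return []; // filter incomplete results

  let origin;
  try {
    origin = new URL(website).origin;
  } catch {
    return []; // malformed website URL, skip
  }

  return [{ json: { ...item.json, impressumUrl: `${origin}/impressum` } }];
});
```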
by Ruthwik
**AI-Powered WhatsApp Customer Support for Shopify Brands**

This n8n template builds a WhatsApp support copilot that answers **order status** and **product availability** questions from Shopify using LLM "agents," then replies to the customer in WhatsApp or routes to human support.

**Use cases**
- "Where is my order?" → live status + tracking link
- "What are your best-selling T-shirts?" → in-stock sizes & variants
- Greetings / small talk → welcome message
- Anything unclear → handoff to support channel

**Good to know**
- WhatsApp Business conversations are billed by Meta/Twilio/Exotel; plan accordingly.
- The Shopify Admin API has rate limits (leaky bucket); stagger requests.
- LLM usage incurs token costs; cap max tokens and enable caching where possible.
- Avoid sending PII to the model; only pass minimal order/product fields.

**How it works**
- **WhatsApp Trigger**: receives an incoming message (e.g., "Where is my order?").
- **Get Customer from Shopify → Customer Details → Normalize Input**: looks up the customer by phone and formats the query (lower-casing, emoji and punctuation normalization), as sketched at the end of this description.
- **Switch (intent router)**: classifies into welcome, orderStatusQuery, productQuery, or supportQuery.
- **Welcome path**: Welcome message → polite greeting → (noop placeholder).
- **Order status path (Orders Agent)**:
  - Orders Agent (LLM + Memory) interprets the user request and extracts the needed fields.
  - Get Customer Orders (HTTP to Shopify) fetches the user's latest order(s).
  - Structured Output Parser cleans the agent's output into a strict schema.
  - Send Order Status (WhatsApp message) returns status, ETA, and tracking link.
- **Products path (Products Agent)**:
  - Products Agent (LLM + Memory) turns the ask into a product query.
  - Get Products from Shopify (HTTP) pulls best sellers / inventory & sizes.
  - Structured Output Parser formats name, price, sizes, stock.
  - Send Products message (WhatsApp) sends a tidy, human-readable reply.
- **Support path**: Send a message to support posts the transcript/context to your agent/helpdesk channel and informs the user a human will respond.

**How to use**
- Replace the manual/WhatsApp trigger with your live WhatsApp number/webhook.
- Set env vars/credentials: Shopify domain + Admin API token, WhatsApp provider keys, LLM key (OpenAI/OpenRouter), and (optionally) your support channel webhook.
- Edit the message templates for tone, add your brand name, and localize if needed.
- Test with samples: "Where is my order?", "Show best sellers", "Hi".

**Requirements**
- WhatsApp Business API (Meta/Twilio/Exotel)
- Shopify store + Admin API access
- LLM provider (OpenAI/OpenRouter, etc.)
- Slack webhook for human handoff

**Prerequisites**
- Active WhatsApp Business Account connected via an API provider (Meta, Twilio, or Exotel).
- **Shopify Admin API credentials** (API key, secret, store domain).
- **Slack OAuth app** or webhook for human support escalation.
- API key for your LLM provider (OpenAI, OpenRouter, etc.).

**Customising this workflow**
- Add intents: returns/exchanges, COD confirmation, address changes.
- Enrich product replies with images, price ranges, and "Buy" deep links.
- Add multilingual support by detecting locale and templating responses.
- Log all interactions to a DB/Sheet for analytics and quality review.
- Guardrails: confidence thresholds → fallback to support; redact PII; retry on API errors.
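The "Normalize Input" step can be a small Code node. A minimal sketch, assuming the incoming text sits on a `message` field; the actual key depends on your WhatsApp provider's webhook payload:

```javascript
// n8n Code node sketch: normalize incoming WhatsApp messages before
// intent routing. Lower-cases, strips emoji, collapses punctuation.
return $input.all().map((item) => {
  const raw = item.json.message || "";

  const normalized = raw
    .toLowerCase()
    .replace(/\p{Extended_Pictographic}/gu, "") // strip emoji
    .replace(/[!?.,]{2,}/g, " ")                // collapse repeated punctuation
    .replace(/\s+/g, " ")
    .trim();

  return { json: { ...item.json, normalized } };
});
```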
by Daniel Rosehill
**Voice Note Context Extraction Pipeline with AI Agent & Vector Storage**

This n8n template demonstrates how to automatically extract and store contextual information from voice notes using AI agents and vector databases for future retrieval.

**How it works**
- **Webhook trigger** receives voice note data including title, transcript, and timestamp from external services (example here: voicenotes.com); a sample payload is sketched at the end of this description.
- **Field extraction** isolates the key data fields (title, transcript, timestamp) for processing.
- **AI Context Agent** processes the transcript to extract meaningful context while:
  - Correcting speech-to-text errors
  - Converting first-person references to third-person facts
  - Filtering out casual conversation and focusing on significant information
- **Output formatting** structures the extracted context with timestamps for embedding.
- **File conversion** prepares the context data for vector storage.
- **Vector embedding** uses OpenAI embeddings to create searchable representations.
- **Milvus storage** stores the embedded context for future retrieval in RAG applications.

**How to use**
- Configure the webhook endpoint to receive data from your voice note service.
- Set up credentials for OpenRouter (LLM), OpenAI (embeddings), and Milvus (vector storage).
- Customize the AI agent's system prompt to match your context extraction needs.
- The workflow automatically processes incoming voice notes and stores the extracted context.

**Requirements**
- OpenRouter account for LLM access
- OpenAI API key for embeddings
- Milvus vector database (cloud or self-hosted)
- Voice note service with webhook capabilities (e.g., Voicenotes.com)

**Customizing this workflow**
- **Modify the context extraction prompt** to focus on specific types of information (preferences, facts, relationships).
- **Add filtering logic** to process only voice notes with specific tags or keywords.
- **Integrate with other storage systems** like Pinecone, Weaviate, or local vector databases.
- **Connect to RAG systems** to use the stored context for enhanced AI conversations.
- **Add notification nodes** to confirm successful context extraction and storage.

**Use cases**
- **Personal AI assistant** that remembers your preferences and context from voice notes
- **Knowledge management** system for capturing insights from recorded thoughts
- **Content creation** pipeline that extracts key themes from voice recordings
- **Research assistant** that builds context from interview transcripts or meeting notes
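For orientation, the field-extraction step expects something like the payload below. The exact keys depend on your voice note service; these names are illustrative, not Voicenotes.com's literal schema:

```javascript
// Hypothetical webhook payload from a voice note service:
// { "title": "Grocery ideas", "transcript": "Remind me that I prefer oat milk...",
//   "created_at": "2024-05-01T09:30:00Z" }

// n8n Code node sketch: isolate the three fields the AI Context Agent needs.
const incoming = $input.first().json;
const { title, transcript, created_at } = incoming.body ?? incoming;

return [{
  json: {
    title: title || "Untitled note",
    transcript: transcript || "",
    timestamp: created_at || new Date().toISOString(),
  },
}];
```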
by Wassim Abid
Build a fully local RAG chatbot using Ollama that works without tool calling, ideal for smaller open-source models like Qwen that don't support native function calls. This template lets you run a private, self-hosted AI assistant with retrieval-augmented generation using only your own hardware.

**How it works**
- A Webhook receives the user's chat message.
- A small classifier LLM (Qwen 7B) analyzes the input and decides: is this small talk, or a real question that needs the knowledge base?
- For small talk, a dedicated AI agent responds conversationally with chat memory.
- For real questions, the classifier generates focused sub-queries, which are sent through a loop-based RAG pipeline:
  - Each sub-query is embedded using BGE-M3 and matched against a Postgres PGVector store.
  - Results are filtered by a relevance score threshold (>0.4).
  - Chunks are aggregated and deduplicated across all sub-queries (see the sketch at the end of this description).
  - An Answer Generator agent (Qwen 14B) produces a sourced answer using a strict 3-step format: short answer → sources → follow-up question.
- Both paths use Postgres-backed chat memory for multi-turn conversations.
- A post-processing step removes `<think>` tags that some reasoning models produce.

**Set up steps**
1. Install Ollama and pull the required models:
   - `ollama pull qwen2.5:7b` (classifier + small talk)
   - `ollama pull qwen3:14b` (answer generation)
   - `ollama pull bge-m3` (embeddings)
2. Set up PostgreSQL with the pgvector extension enabled.
3. Create your vector store: ingest your documents into the PGVector store using BGE-M3 embeddings (you can use n8n's built-in document loaders for this).
4. Configure credentials in n8n:
   - Ollama connection (default: http://localhost:11434)
   - PostgreSQL connection for both chat memory and vector store
5. Customize the webhook path and connect it to your frontend or API client.
6. Optional: adjust the relevance score threshold, swap models for larger/smaller ones, or modify the system prompts to match your use case.
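The filtering and deduplication across sub-queries can be expressed as a small Code node. A minimal sketch, assuming each retrieved item carries `score` and `text` fields; the actual names depend on how your vector store node emits results:

```javascript
// n8n Code node sketch: keep only chunks above the relevance threshold,
// then deduplicate identical chunks returned by different sub-queries.
const THRESHOLD = 0.4;

const seen = new Set();
const kept = [];

for (const item of $input.all()) {
  const { score, text } = item.json;
  if (score <= THRESHOLD) continue; // drop weakly relevant chunks
  if (seen.has(text)) continue;     // drop duplicates across sub-queries
  seen.add(text);
  kept.push({ json: { score, text } });
}

return kept; // aggregated context handed to the Answer Generator agent
```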
by InfyOm Technologies
**What problem does this workflow solve?**

Manually entering bank statements into QuickBooks is one of the most time-consuming and error-prone accounting tasks. Accountants often spend hours converting PDF bank statements into individual income and expense entries, risking missed transactions, incorrect categorization, and inconsistencies.

This workflow fully automates the end-to-end process: from uploading a bank statement PDF (even a password-protected one) to creating accurate Sales Receipts and Expenses directly inside QuickBooks, using AI and n8n.

**What does this workflow do?**
- Accepts bank statement PDFs via a secure web form
- Decrypts and extracts text from password-protected PDFs
- Uses AI to extract structured transactions from raw statement text
- Validates AI output against a strict JSON schema
- Processes each transaction individually for reliability
- Automatically routes transactions based on type:
  - Credits → Income (Sales Receipts)
  - Debits → Expenses
- Intelligently creates missing QuickBooks entities: customers, vendors, items, expense categories
- Posts transactions directly into QuickBooks
- Eliminates manual accounting entry completely

**How It Works: End-to-End Flow**

1. **Secure Bank Statement Upload**: a user uploads a bank statement PDF (normal or password-protected) using an n8n Form Trigger.
2. **PDF Decryption & Text Extraction**: the workflow unlocks the PDF (if password-protected) and extracts the full statement text using the Extract PDF Text node.
3. **AI-Powered Transaction Extraction**: an AI Agent reads the raw bank statement text and extracts every transaction with high precision:
   - Transaction type (credit / debit)
   - Date (YYYY-MM-DD)
   - Amount
   - Description
   - Reference number
   - Payee / counterparty
4. **Strict JSON Validation**: AI output is validated using a Structured Output Parser to ensure no malformed data, schema-safe transactions, and reliable downstream processing. A sketch of the expected shape follows below.
5. **Transaction Processing Loop**: each transaction is processed individually using batching and loop control to guarantee accuracy.
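For reference, a validated transaction coming out of step 4 might look like the following. The field names are an assumption based on the list above, not the template's literal schema:

```javascript
// Hypothetical shape of one schema-validated transaction handed to the loop:
const transaction = {
  type: "debit",              // "credit" or "debit"
  date: "2024-03-15",         // YYYY-MM-DD
  amount: 249.99,
  description: "AWS monthly invoice",
  reference: "TXN-884213",
  payee: "Amazon Web Services",
};

// The switch node in the next step branches on `type`:
const isIncome = transaction.type === "credit";
```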
6. **Smart Routing: Credit vs Debit**: a Switch node routes transactions automatically: **credits** go to the income flow, **debits** to the expense flow.

**Credit Path: Income Automation**
For every credit transaction, the workflow:
- Checks if a matching QuickBooks item exists
- Creates missing service items automatically
- Finds or creates the customer
- Builds a Sales Receipt payload
- Posts the transaction into QuickBooks as income

**Debit Path: Expense Automation**
For every debit transaction, the workflow:
- Searches for the vendor by payee name
- Creates the vendor if missing
- Loads expense categories from the Chart of Accounts
- Auto-maps transactions to the correct category using keyword logic
- Builds a Purchase (Expense) payload
- Posts the expense into QuickBooks

**Built-In QuickBooks Intelligence**
This workflow intelligently handles:
- Duplicate prevention
- Missing customer/vendor creation
- Automatic item mapping
- Category resolution using transaction descriptions
- Consistent accounting structure across all entries

**Results & Benefits**
- Zero manual bank statement entry
- Works with password-protected PDFs
- Handles both income and expenses automatically
- Creates clean, structured QuickBooks records
- Saves dozens of accounting hours every month
- Reduces human error and reconciliation issues

**Setup Requirements**
- Connect your QuickBooks Online account (Sandbox or Production)
- Add OpenRouter / AI model credentials for transaction extraction
- Update the PDF password (if required) in the extraction node
- Replace `company_id` in the QuickBooks API endpoints
- Verify QuickBooks account IDs (bank, income, expense)
- Test with a sample bank statement PDF

**Who is this for?**
This workflow is ideal for:
- Accountants & bookkeeping firms
- Businesses managing frequent bank statements
- Finance teams using QuickBooks Online
- Automation-first accounting systems
by Cheng Siong Chin
**How It Works**
This workflow automates performance monitoring by aggregating data from PM tools, code repositories, meeting logs, and CRM systems. It processes team metrics using AI-powered analysis via OpenAI, identifies bottlenecks and workload issues, then creates manager follow-ups and tasks. The system runs weekly: it collects the 4 data sources, combines them (see the sketch below), analyzes trends, evaluates team capacity, and routes alerts to managers via Gmail. Managers receive structured summaries highlighting performance gaps and required actions.

Target audience: engineering managers and team leads monitoring team velocity, code quality, and capacity planning.

**Setup Steps**
- Configure credentials: PM Tool API key, Code Repo token, and CRM API key.
- Set the OpenAI API key.
- Connect your Gmail account via OAuth.
- In the Workflow Configuration node, adjust API endpoints and polling intervals.
- Map data field names to match your tools.
- Test the data fetch nodes using sample queries before deployment.

**Prerequisites**
PM tool API access, GitHub/GitLab token, CRM credentials, OpenAI API key, Gmail OAuth connection

**Use Cases**
- Track engineering team productivity weekly
- Identify code review bottlenecks

**Customization**
- Replace the PM tool with Jira/Linear
- Swap OpenAI for Claude/Gemini

**Benefits**
- Reduces manual performance tracking by 6+ hours weekly
- Provides real-time visibility into team capacity
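The aggregation step that combines the four sources can be pictured as a merge keyed by team member. A minimal sketch with invented field names, purely to illustrate the shape of the combined record; it is not the template's actual schema:

```javascript
// Hypothetical merge of the four weekly data pulls into one record per engineer.
function combineMetrics(pmTasks, commits, meetings, crmActivity) {
  const byPerson = {};
  for (const t of pmTasks)     (byPerson[t.assignee] ??= {}).openTasks = t.count;
  for (const c of commits)     (byPerson[c.author] ??= {}).commits = c.count;
  for (const m of meetings)    (byPerson[m.attendee] ??= {}).meetingHours = m.hours;
  for (const a of crmActivity) (byPerson[a.owner] ??= {}).dealsTouched = a.count;
  return byPerson; // handed to the OpenAI analysis step as one JSON blob
}

// Usage with sample rows from each source:
const merged = combineMetrics(
  [{ assignee: "alice", count: 7 }],
  [{ author: "alice", count: 23 }],
  [{ attendee: "alice", hours: 6 }],
  [{ owner: "alice", count: 2 }],
);
```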
by Davide
This workflow automatically processes AI news emails, extracts and summarizes articles, categorizes them, and stores the results in a structured Google Sheet for daily tracking and insights. Specifically, it processes the daily AI newsletter from AlphaSignal: extracting individual articles, summarizing them, categorizing them, and saving the results to a Google Sheet.

**Key Features**
1. **Fully Automated Daily News Pipeline**: no manual work is required; the workflow runs autonomously every time a new email arrives, eliminating repetitive tasks such as opening, reading, and summarizing newsletters.
2. **Cross-AI Model Integration**: it combines multiple AI systems:
   - **Google Gemini** and **OpenAI GPT-5 Mini** for natural language processing and categorization.
   - **Scrapegraph AI** for external web scraping and summarization.
   This multi-model approach enhances accuracy and flexibility.
3. **Accurate Content Structuring**: the workflow transforms unstructured email text into clean, structured JSON data, ensuring reliability and easy export or reuse.
4. **Multi-Language Support**: the summaries are generated in Italian, which is ideal for local or internal reporting, while the metadata and logic remain in English, enabling global adaptability.
5. **Scalable and Extensible**: new newsletters, categories, or destinations (like Notion, Slack, or a database) can be added easily without changing the core logic.
6. **Centralized Knowledge Repository**: by appending to Google Sheets, the team can track daily AI developments at a glance, filter or visualize trends across categories, and use the dataset for further analysis or content creation.
7. **Error-Resilient and Maintainable**: the JSON validation and loop-based design ensure that if a single article fails, the rest continue to process smoothly.

**How it Works**
- **Email Trigger & Processing**: the workflow is automatically triggered when a new email arrives from news@alphasignal.ai. It retrieves the full email content and converts its HTML body into clean Markdown format for easier parsing.
- **Article Extraction & Scraping**: a LangChain Agent, powered by Google Gemini, analyzes the newsletter's Markdown text. Its task is to identify and split the content into individual articles. For each article it finds, it outputs a JSON object containing the title, URL, and an initial summary (see the sketch at the end of this description). Crucially, the agent uses the "Scrape" tool to visit each article's URL and generate a more accurate summary in Italian based on the full page content.
- **Data Preparation & Categorization**: the JSON output from the previous step is validated and split into individual data items (one per article). Each article is then processed in a loop:
  - Categorization: an OpenAI model analyzes the article's title and summary, assigning it to the most relevant pre-defined category (e.g., "LLM & Foundation Models," "AI Automation & WF").
  - URL shortening: the article's link is sent to the CleanURI API to generate a shortened URL.
- **Data Storage**: finally, for each article, a new row is appended to a specified Google Sheet. The row includes the current date, the article's title, the shortened link, the Italian summary, and its assigned category.

**Set up Steps**
To implement this workflow, you need to configure the following credentials and nodes in n8n:
- **Email Credentials**: set up a Gmail OAuth2 credential (named "Gmail account" in the workflow) to allow n8n to access and read emails from the specified inbox.
- **AI Model APIs**:
  - Google Gemini: configure the "Google Gemini(PaLM)" credential with a valid API key to power the initial article extraction and scraping agent.
  - OpenAI: configure the "OpenAi account (Eure)" credential with a valid API key to power the article categorization step.
- **Scraping Tool**: set up the ScrapegraphAI account credential with its required API key to enable the agent to access and scrape content from the article URLs.
- **Google Sheets Destination**: configure the "Google Sheets account" credential via OAuth2. You must also specify the exact Google Sheet ID and sheet name (tab) where the processed article data will be stored.
- **Activation**: once all credentials are tested and correctly configured, the workflow can be activated. It will then run automatically upon receiving a new newsletter from the specified sender.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
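For reference, the per-article JSON object described in "Article Extraction & Scraping" might look like this. The field names are an assumption based on the columns saved to the sheet, not the template's literal schema:

```javascript
// Hypothetical output of the Gemini extraction agent for one article:
const article = {
  title: "New open-weights model tops coding benchmarks",
  url: "https://example.com/article",                       // later shortened via CleanURI
  summary: "Breve riassunto in italiano dell'articolo...",  // Italian summary
};

// Downstream, the loop adds the category and the shortened link
// before appending a row to Google Sheets:
const row = {
  date: new Date().toISOString().slice(0, 10),
  title: article.title,
  link: "https://cleanuri.com/abc123",                      // illustrative short URL
  summary: article.summary,
  category: "LLM & Foundation Models",
};
```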
by Martijn Kerver
**Description**
Transform training prescriptions into perfectly formatted Intervals.icu workouts using AI. This workflow automatically converts free-text workout descriptions into structured interval training sessions with proper heart rate zones, pace calculations, and exercise formatting.

**What this workflow does**
- Collects workout details via a web form (date, title, and workout description)
- Fetches athlete data from Intervals.icu (FTP, max HR, threshold pace, LTHR)
- Processes with AI using Claude Opus 4.1 to intelligently parse and format the workout
- Auto-detects the workout type (Run, Ride, Strength, HYROX, CrossFit, etc.)
- Converts training zones: RPE → HR%, pace calculations, power zones
- Formats the workout structure with proper transitions, rest periods, and circuit formatting
- Creates the workout in Intervals.icu via the API (see the sketch at the end of this description)

**Use cases**
- **Coaches**: convert training plans from documents/spreadsheets into Intervals.icu format
- **Athletes**: quickly add structured workouts from coaching apps or training programs
- **Hybrid training**: handle complex HYROX, CrossFit, or multi-sport sessions with circuit formatting
- **Time savings**: eliminate manual workout entry and zone calculations

**Supported workout types**
Running, cycling, swimming, strength training, HYROX, CrossFit, indoor rowing, virtual training (Zwift), triathlon, and more.

**Key features**
- Intelligent workout type detection
- Automatic RPE to HR zone conversion using athlete-specific data
- Proper formatting for intervals, circuits, supersets, and progressions
- Adds transitions between exercises/machines
- Calculates exercise durations and pacing
- Handles warmup/cooldown sections
- Generates unique workout IDs

**Setup requirements**
- **Intervals.icu account** with API access (API key required)
- **Anthropic API key** for Claude AI
- Athlete must have training zones configured in Intervals.icu (FTP, max HR, LTHR, threshold pace)

**Setup instructions**

Getting your Intervals.icu API key:
1. Log in to Intervals.icu.
2. Go to Settings (gear icon) → Developer Settings.
3. Click Generate API Key (or copy your existing key).
4. Save the API key securely.

Configuring credentials in n8n:

For Intervals.icu (HTTP Basic Auth):
1. In n8n, open the GetAthleteInfo or CreateWorkoutAPI node.
2. Click Credentials → Create New Credential.
3. Select HTTP Basic Auth and enter:
   - Username: API_KEY (literally type "API_KEY")
   - Password: your actual API key from Intervals.icu
4. Click Save and apply this credential to both HTTP Request nodes.

For Anthropic:
1. Open the Anthropic Chat Model node.
2. Click Credentials → Create New Credential.
3. Enter your Anthropic API key and click Save.

Important: the Intervals.icu API uses HTTP Basic Authentication where the username is always the literal string "API_KEY" and the password is your actual API key.

**How it works**
The workflow uses a sophisticated AI agent with a detailed system prompt that understands training terminology, zones, and Intervals.icu formatting requirements. It applies sport-specific rules to ensure workouts are properly structured for tracking during training sessions.
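Outside of n8n, the CreateWorkoutAPI call looks roughly like this. The endpoint path and payload fields are based on the public Intervals.icu API as I understand it; verify them against the docs before relying on this sketch:

```javascript
// Sketch: create a workout on the Intervals.icu calendar with Basic auth.
// Note the username is literally the string "API_KEY", per the instructions above.
const ATHLETE_ID = "i12345";          // placeholder athlete id
const API_KEY = "your-intervals-key"; // placeholder

async function createWorkout(dateISO, name, description) {
  const auth = Buffer.from(`API_KEY:${API_KEY}`).toString("base64");
  const res = await fetch(`https://intervals.icu/api/v1/athlete/${ATHLETE_ID}/events`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Basic ${auth}`,
    },
    body: JSON.stringify({
      category: "WORKOUT",
      start_date_local: dateISO, // e.g. "2024-06-01T00:00:00"
      name,
      description,               // the AI-formatted workout text goes here
    }),
  });
  if (!res.ok) throw new Error(`Intervals.icu error: ${res.status}`);
  return res.json();
}
```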
by Fariez
X (Twitter) and Threads (by Meta) have different maximum character lengths.

**Different X and Threads Content Auto Poster**

This n8n template demonstrates how to post different content optimized for X (Twitter) and Meta Threads using the Late API. You can use it for any niche, for example posting AI news to X and Threads.

Possible use cases:
- Schedule your posts to X and Threads.
- Use this workflow as a content calendar and automated posting system.
- Apply it across different content niches.

**How it works**
- The automation runs according to the time defined in the Schedule Trigger node.
- Content is pulled from Google Sheets.
- Any URL is shortened using your preferred short URL API.
- Images are uploaded to Late's server first.
- Content for X is posted in Step 2. The workflow checks that the content length is under 280 characters (see the sketch at the end of this description).
- Content for Threads is posted in Step 3. The workflow checks that the content length is under 500 characters.
- Posts on X are published as threaded posts, while on Threads they are single posts.
- Once posted, the Google Sheets content database is updated.

**Requirements**
- Google OAuth credentials with the Google Sheets API enabled
- Bitly account and access token (or OAuth)
- GetLate API connected to your X and Threads accounts

**How to use**

Step 1:
- Adjust the settings in the Schedule Trigger node to define when the workflow runs.
- Open this Google Sheets template, then go to File → Make a copy, and update the settings in the Get Topic node.
- Get your Bitly OAuth or Access Token here and add the credentials in the Short Link node.
- Get your API key from getlate.dev and add the credentials in the Upload IMG node.

Step 2:
- Add your Late credentials to the Post Twitter node.
- Get your Twitter account ID from Late and update it in the JSON Body section of the Post Twitter node.

Step 3:
- Add your Late credentials to the Post Threads node.
- Get your Threads account ID from Late and update it in the JSON Body section of the Post Threads node.
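The per-platform length checks are simple guards before each post call. A minimal sketch; the thread-splitting behavior noted in the comment is illustrative and may differ from the template's actual logic:

```javascript
// Sketch: enforce platform limits before posting via the Late API.
const X_LIMIT = 280;
const THREADS_LIMIT = 500;

function fitsX(text) {
  return text.length <= X_LIMIT;
}

function fitsThreads(text) {
  return text.length <= THREADS_LIMIT;
}

// Example: route one piece of content.
const post = "Today's AI news roundup... https://bit.ly/xyz";
if (!fitsX(post)) {
  // On X the template publishes threaded posts, so oversized content
  // would be split into a thread rather than rejected outright.
  console.log("Too long for a single X post; split into a thread.");
}
if (!fitsThreads(post)) {
  console.log("Too long for Threads; shorten before posting.");
}
```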
by Fabian Herhold
**Who's it for**
Recruiting agencies, executive search firms, and in-house talent teams that want to automate candidate sourcing and prequalification. Instead of spending hours searching, scoring, and writing outreach, this workflow turns any job description into a ready-to-use shortlist with personalized messages.

YouTube Walkthrough

**What it does (How it works)**
This workflow takes a job description (title, description, and location) and runs a complete recruiting automation pipeline:
- **Normalize job titles** and generate variations to widen search coverage.
- **Search candidates** in Apollo (or your CRM / database of choice).
- **Remove duplicates** to keep clean lists.
- **Score candidates** with AI (0-5) and provide concise reasoning across experience, industry, and seniority (a sketch of the structured output appears at the end of this description).
- **Enrich LinkedIn profiles** (name, title, image, location, experience).
- **Create structured candidate assessments** (summary, alignment, red flags, positives).
- **Generate outreach messages** (email + LinkedIn DM) tailored to the candidate.
- **Write to Airtable** for job/candidate tracking and downstream automation.

Everything is plug-and-play, with no manual searching or copy-pasting required.

**Requirements**
- n8n (Cloud or self-hosted)
- Airtable account + API access
- Apollo API or your preferred candidate source
- LLM provider: OpenAI or Anthropic
- LinkedIn enrichment API (RapidAPI, Apify, etc.)

> ⚠️ Do not hardcode API keys in HTTP nodes. Always use Credentials in n8n.

**Airtable table specifications**
Create one base (e.g., "Candidate Search - From Job Description") with two tables:

Jobs Table:
- Job Title (text)
- Job Description (long text)
- Job Location (text)
- Candidates (linked to Candidates table)

Candidates Table:
- Core fields: Name, LinkedIn URL, Job Title, Location, Image URL, Job Searches (linked)
- Assessment fields: Summary Fit Score, Executive Summary, Title Alignment, Skill Alignment, Industry Alignment, Seniority Alignment, Company Type Alignment, Educational Alignment, Potential Red Flags, Positive Signals, Final Recommendation, Next Steps Suggestion
- Outreach fields: Email Subject, Email Body, LinkedIn Message

**How to set up**
1. **Connect credentials**: add Airtable, Apollo/CRM, and OpenAI/Anthropic credentials under n8n Credentials.
2. **Create the Airtable base/tables**: follow the spec above for Jobs and Candidates. Match field names exactly to avoid mapping errors.
3. **Configure the trigger**: the workflow starts from a Form/Webhook node. It captures:
   - Job Title (required)
   - Job Description (required)
   - Location (required)
   - Target Companies (optional, comma-separated domains)
4. **Job title mutation**: an AI node normalizes the job title and generates up to 5 variations for broader candidate searches.
5. **Candidate search**: Apollo (or your CRM API) is queried with the generated titles and location filters. Results are deduped.
6. **AI scoring & structuring**: candidates are scored 0-5 with clear reasoning (experience, industry, seniority, general fit). Profiles are formatted into structured JSON for Airtable.
7. **LinkedIn enrichment**: the enrichment API fetches missing data (geo, image, job history).
8. **Candidate assessment**: an AI model produces a full recruiter-ready evaluation (fit summary, strengths, red flags).
9. **Outreach generation**: the workflow drafts a concise cold email (<75 words) and LinkedIn DM (<60 words), consultative in tone.
10. **Write to Airtable**: all jobs and candidates (with assessments and outreach messages) are logged for review and integration.

**How to customize**
- **Swap Apollo with your CRM** (Greenhouse, Bullhorn, etc.).
- **Adjust scoring prompts** to match your niche (sales, engineering, healthcare).
- **Add custom filters** for target companies or industries.
- **Change outreach tone** to align with your brand voice.
- **Limit by score** (e.g., only push candidates with a score ≥ 4).

**Security & best practices**
- Store all keys in n8n Credentials (never in nodes).
- Use Set nodes to centralize editable variables (title, location, filters).
- Always add sticky notes to your workflow explaining each step.
- Rename nodes clearly for readability.

**Troubleshooting**
- **No candidates found?** Loosen title variations or broaden the location.
- **Low fit scores?** Refine keywords and required skills in the scoring prompts.
- **Airtable errors?** Double-check the Base ID, Table ID, and field names.
- **API rate limits?** Enable batching/pagination and increase intervals.

SEO title: Build candidate shortlists from a job description to Airtable with Apollo, AI scoring, and personalized outreach
Keywords: recruiting automation, Apollo people search, candidate enrichment, AI scoring, Airtable recruiting CRM, LinkedIn outreach, n8n workflow template
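For reference, the structured JSON produced by the scoring and assessment steps might look like the following. The field names mirror the Airtable assessment columns listed above but are otherwise an assumption:

```javascript
// Hypothetical scored-candidate object written to the Candidates table:
const candidate = {
  name: "Jane Doe",
  linkedinUrl: "https://www.linkedin.com/in/janedoe",
  summaryFitScore: 4,            // 0-5 scale described above
  executiveSummary: "8 years in B2B SaaS sales leadership...",
  titleAlignment: "Strong: current title matches target role",
  potentialRedFlags: ["Two sub-one-year stints since 2021"],
  positiveSignals: ["Promoted twice at current employer"],
  finalRecommendation: "Shortlist",
  emailSubject: "Quick question about your next move",
  emailBody: "...",              // <75 words, consultative tone
  linkedinMessage: "...",        // <60 words
};

// Only candidates at or above the threshold get pushed to outreach:
const shortlist = [candidate].filter((c) => c.summaryFitScore >= 4);
```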