by Nguyen Thieu Toan
How it works

🧠 AI-Powered News Update Bot for Zalo using Gemini and RSS Feeds

This workflow allows you to build a smart Zalo chatbot that automatically summarizes and delivers the latest news using Google Gemini and RSS feeds. It’s perfect for keeping users informed with AI-curated updates directly inside Vietnam’s most popular messaging app.

🚀 What It Does
- Receives user messages via the Zalo Bot webhook
- Fetches the latest articles from an RSS feed (e.g., AI news)
- Summarizes the content using Google Gemini
- Formats the response and sends it back to the user on Zalo

📱 What Is Zalo?
Zalo is Vietnam’s leading instant messaging app, with over 78 million monthly active users—more than 85% of the country’s internet-connected population. It handles 2 billion messages per day and is deeply embedded in Vietnamese daily life, making it a powerful channel for communication and automation.

🔧 Setup Instructions
1. Create a Zalo Bot
   - Open the Zalo app and search for "Zalo Bot Creator"
   - Tap "Create Zalo Bot Account"
   - Your bot name must start with "Bot" (e.g., Bot AI News)
   - After creation, Zalo will send you a message containing your Bot Token
2. Configure the Webhook
   - Replace [your-webhook URL] in Zalo Bot Creator with your n8n webhook URL
   - Use the Webhook node in this workflow to receive incoming messages
3. Set Up Gemini
   - Add your Gemini API key to the HTTP Request node labeled Summarize AI News
   - Customize the prompt if you want a different tone or summary style (a prompt-building sketch appears at the end of this description)
4. Customize the RSS Feed
   - Replace the default RSS URL with your preferred news source
   - You can use any feed that provides timely updates (e.g., tech, finance, health)

🧪 Example Interaction
User: "What's new today?"
Bot: "🧠 AI Update: Google launches Gemini 2 with multimodal capabilities, revolutionizing how models understand text, image, and code..."

⚠️ Notes
- Zalo Bots currently do not support images, voice, or file attachments
- Make sure your Gemini API key has access to the model you're calling
- RSS feeds should be publicly accessible and well-formatted

🧩 Nodes Used
- Webhook
- HTTP Request (Gemini)
- RSS Feed Read
- Set & Format
- Zalo Message Sender (via API)

💡 Tips
- You can swap Gemini with GPT-4 or Claude by adjusting the API call
- Add filters to the RSS node to only include articles with specific keywords
- Use the Function node to personalize responses based on user history

Built by Nguyen Thieu Toan (Nguyễn Thiệu Toàn) (https://nguyenthieutoan.com). Read more about this workflow in Vietnamese: https://nguyenthieutoan.com/share-workflow-n8n-zalo-bot-cap-nhat-tin-tuc/
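To illustrate the summarization step, here is a minimal n8n Code node sketch that trims the latest RSS items into a prompt for the Summarize AI News request. The field names (title, link, contentSnippet) follow typical RSS Feed Read output, but treat them as assumptions and adjust to your feed.

```javascript
// Code node: build a compact summarization prompt from the RSS items.
// Assumption: the RSS Feed Read node outputs items with title, link, contentSnippet.
const articles = $input.all().slice(0, 5).map((item, i) => {
  const { title, link, contentSnippet } = item.json;
  return `${i + 1}. ${title}\n${(contentSnippet || '').slice(0, 300)}\nSource: ${link}`;
});

const prompt = [
  'Summarize the following news items for a Zalo chat message.',
  'Keep it under 900 characters, friendly tone, start with "🧠 AI Update:".',
  '',
  articles.join('\n\n'),
].join('\n');

// Pass a single item downstream; the Gemini HTTP Request node can read {{ $json.prompt }}.
return [{ json: { prompt } }];
```

The Gemini HTTP Request node then references the prompt field in its body, and the reply text it returns is what the Zalo sender step pushes back to the user.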
by franck fambou
⚠️ IMPORTANT: This template requires a self-hosted n8n instance because it uses community nodes (MCP tools). It will not work on n8n Cloud. Make sure you have access to a self-hosted n8n instance before using this template.

Overview

This workflow automation allows a Google Gemini-powered AI Agent to orchestrate multi-source web intelligence using MCP (Model Context Protocol) tools such as Firecrawl, Brave Search, and Apify. Users interact with the agent in natural language; the agent then leverages various external data collection tools, processes the results, and automatically organizes them into structured spreadsheets. With built-in memory, flexible tool execution, and conversational capabilities, this workflow acts as a multi-agent research assistant, capable of retrieving, synthesizing, and delivering actionable insights in real time.

How the system works

AI Agent + MCP Pipeline
1. User Interaction: A chat message is received and forwarded to the AI Agent.
2. AI Orchestration: The agent, powered by Google Gemini, decides which MCP tools to invoke based on the query.
   - Firecrawl-MCP: Recursive web crawling and content extraction.
   - Brave-MCP: Real-time web search with structured results.
   - Apify-MCP: Automation of web scraping tasks with scalable execution.
3. Memory Management: A memory module stores context across conversations, ensuring multi-turn reasoning and task continuity.
4. Spreadsheet Automation: Results are structured in a new, automatically created Google Spreadsheet, enriched with formatting and additional metadata.
5. Data Processing: The workflow generates the spreadsheet content, updates the sheet, and improves results via HTTP requests and field edits.
6. Delivery of Results: Users receive a structured and contextualized dataset ready for review, analysis, or integration into other systems.
Configuration instructions

Estimated setup time: 45 minutes

Prerequisites
- Self-hosted n8n instance (v0.200.0 or higher recommended)
- Google Gemini API key
- MCP-compatible nodes (Firecrawl, Brave, Apify) configured
- Google Sheets credentials for spreadsheet automation

Detailed configuration steps

Step 1: Configuring the AI Agent
**AI Agent node**:
- Select Google Gemini as the LLM model
- Configure your Google Gemini API key in the n8n credentials
- Set the system prompt to guide the agent's behavior
- Connect the Simple Memory node to enable context tracking

Step 2: Integrating MCP Tools
**Firecrawl-MCP configuration**:
- Install the @n8n/n8n-nodes-firecrawl-mcp package
- Configure your Firecrawl API key
- Set crawling parameters (depth, CSS selectors)
**Brave-MCP configuration**:
- Install the @n8n/n8n-nodes-brave-mcp package
- Add your Brave Search API key
- Configure search filters (region, language, SafeSearch)
**Apify-MCP configuration**:
- Install the @n8n/n8n-nodes-apify-mcp package
- Configure your Apify credentials
- Select the appropriate actors for your use cases

Step 3: Spreadsheet automation
**“Create Spreadsheet” node**:
- Configure Google Sheets authentication (OAuth2 or Service Account)
- Set the file name with dynamic timestamps
- Specify the destination folder in Google Drive
**“Generate Spreadsheet Content” node**:
- Transform the agent's outputs into tabular format (see the sketch at the end of this description)
- Define the columns: URL, Title, Description, Source, Timestamp
- Configure data formatting (dates, links, metadata)
**“Update Spreadsheet” node**:
- Insert the data into the created sheet
- Apply automatic formatting (headers, colors, column widths)
- Add summary formulas if necessary

Step 4: Post-processing and delivery
**“Data Enrichment Request” node** (formerly “HTTP Request1”):
- Configure optional API calls to enrich the data
- Add additional metadata (geolocation, sentiment, categorization)
- Manage errors and timeouts
**“Edit Fields” node**:
- Refine the final dataset (metadata, tags, filters)
- Clean and normalize the data
- Prepare the final response for the user

Structure of generated Google Sheets

Default columns

| Column | Description | Type |
|---------|-------------|------|
| URL | Data source URL | Hyperlink |
| Title | Page/resource title | Text |
| Description | Description or content excerpt | Long text |
| Source | MCP tool used (Brave/Firecrawl/Apify) | Text |
| Timestamp | Date/time of collection | Date/Time |
| Metadata | Additional data (JSON) | Text |

Automatic formatting
- **Headings**: Bold font, colored background
- **URLs**: Formatted as clickable links
- **Dates**: Standardized ISO 8601 format
- **Columns**: Width automatically adjusted to content

Use cases

Business and enterprise
- Competitive analysis combining search, crawling, and structured scraping
- Market trend research with multi-source aggregation
- Automated reporting pipelines for business intelligence

Research and academia
- Literature discovery across multiple sources
- Data collection for research projects
- Automated bibliographic extraction from online sources

Engineering and development
- Discovery of APIs and documentation
- Aggregation of product information from multiple platforms
- Scalable structured scraping for datasets

Personal productivity
- Automated creation of newsletters or knowledge hubs
- Personal research assistant compiling spreadsheets from various online data

Key features

Multi-source intelligence
- Firecrawl for deep crawling
- Brave for real-time search
- Apify for structured web scraping

AI-driven orchestration
- Google Gemini for reasoning and tool selection
- Memory for multi-turn interactions
- Context-based adaptive workflows

Structured data output
- Automatic spreadsheet creation
- Data enrichment and formatting
- Ready-to-use datasets for reporting

Performance and scalability
- Handles multiple simultaneous tool calls
- Scalable web data extraction
- Real-time aggregation from multiple MCPs

Security and privacy
- Secure authentication based on API keys
- Data managed in Google Sheets / n8n
- Configurable retention and deletion policies

Technical architecture

Workflow: User query → AI agent (Gemini) → MCP tools (Firecrawl / Brave / Apify) → Aggregated results → Spreadsheet creation → Data processing → Results delivery

Supported data types
- **Text and metadata** from crawled web pages
- **Search results** from Brave queries
- **Structured data** from Apify scrapers
- **Tabular reports** via Google Sheets

Integration options

Chat interfaces
- Web widget for conversational queries
- Slack/Teams chatbot integration
- REST API access points

Data sources
- Websites (via Firecrawl/Apify)
- Search engines (via Brave)
- APIs (via HTTP Request enrichment)

Performance specifications
- Query response: < 5 seconds (search tasks)
- Crawl capacity: Thousands of pages per run
- Spreadsheet automation: Real-time creation and updates
- Accuracy: > 90% when using combined sources

Advanced configuration options

Customization
- Set custom prompts for the AI Agent
- Adjust the spreadsheet schema for reporting needs
- Configure retries for failed tool runs

Analytics and monitoring
- Track tool usage and costs
- Monitor crawl and search success rates
- Log queries and outputs for auditing

Troubleshooting and support
- **Timeouts:** Manually re-run failed MCP executions
- **Data gaps:** Validate Firecrawl/Apify selectors
- **Spreadsheet errors:** Check Google Sheets API quotas
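As a reference for the “Generate Spreadsheet Content” step, here is a minimal n8n Code node sketch that flattens the agent's tool results into rows matching the default columns above. The incoming field names (url, title, description, source) are assumptions; map them to whatever your agent actually returns.

```javascript
// Code node: shape aggregated MCP results into spreadsheet rows.
// Assumption: each incoming item carries url, title, description and source fields.
return $input.all().map((item) => {
  const { url, title, description, source, ...rest } = item.json;
  return {
    json: {
      URL: url ?? '',
      Title: title ?? '',
      Description: (description ?? '').slice(0, 500), // keep cells readable
      Source: source ?? 'unknown',                     // Brave / Firecrawl / Apify
      Timestamp: new Date().toISOString(),             // ISO 8601, as formatted in the sheet
      Metadata: JSON.stringify(rest),                  // everything else as JSON
    },
  };
});
```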
by Ranjan Dailata
Disclaimer

Please note: this workflow is only available on n8n self-hosted because it makes use of the community node for Decodo Web Scraping.

This workflow automates intelligent keyword and topic extraction from Google Search results, combining Decodo’s advanced scraping engine with the semantic analysis capabilities of OpenAI GPT-4.1-mini. The result is a fully automated keyword enrichment pipeline that gathers, analyzes, and stores SEO-relevant insights.

Who this is for

This workflow is ideal for:
- **SEO professionals** who want to extract high-value keywords from competitors.
- **Digital marketers** aiming to automate topic discovery and keyword clustering.
- **Content strategists** building data-driven content calendars.
- **AI automation engineers** designing scalable web intelligence and enrichment pipelines.
- **Growth teams** performing market and search intent research with minimal effort.

What problem this workflow solves

Manual keyword research is time-consuming and often incomplete. Traditional keyword tools only provide surface-level data and fail to uncover contextual topics or semantic relationships hidden in search results. This workflow solves that by:
- Automatically scraping live Google Search results for any keyword.
- Extracting meaningful topics, related terms, and entities using AI.
- Enriching your keyword list with semantic intelligence to improve SEO and content planning.
- Storing structured results directly in n8n Data Tables for trend tracking or export.

What this workflow does

Here’s a breakdown of the flow:
1. Set the Input Fields – Define your search query and target geo (e.g., “Pizza” in “India”).
2. Decodo Google Search – Fetches organic search results using Decodo’s web scraping API.
3. Return Organic Results – Extracts the list of organic results and passes them downstream.
4. Loop Over Each Result – Iterates through every search result description.
5. Extract Keywords and Topics – Uses OpenAI GPT-4.1-mini to identify relevant keywords, entities, and thematic topics from each snippet.
6. Data Enrichment Logic – Checks whether each result already exists in the n8n Data Table (based on URL).
7. Insert or Skip – If a record doesn’t exist, inserts the extracted data into the table (a small dedup sketch follows at the end of this description).
8. Store Results – Saves both enriched search data and Decodo’s original response to disk.

End Result: A structured and deduplicated dataset containing URLs, keywords, and key topics — ready for SEO tracking or further analytics.

Setup

Pre-requisite
- If you are new to Decodo, please sign up via this link: visit.decodo.com
- Please make sure to install the n8n custom node for Decodo.

Import and Configure the Workflow
- Open n8n and import the JSON template.
- Add your credentials:
  - Decodo API Key under the Decodo Credentials account.
  - OpenAI API Key under the OpenAI account.

Define Input Parameters
Modify the Set node to define:
- search_query: your keyword or topic (e.g., “AI tools for marketing”)
- geo: the target region (e.g., “United States”)

Configure Output
The workflow writes two outputs:
- Enriched keyword data → stored in the n8n Data Table (DecodoGoogleSearchResults).
- Raw Decodo response → saved locally in JSON format.

Execute
Click Execute Workflow or schedule it for recurring keyword enrichment (e.g., weekly trend tracking).

How to customize this workflow
- **Change AI Model** — Replace gpt-4.1-mini with gemini-1.5-pro or claude-3-opus for testing different reasoning strengths.
- **Expand the Schema** — Add extra fields like keyword difficulty, page type, or author info.
- **Add Sentiment Analysis** — Chain a second AI node to assess tone (positive, neutral, or promotional).
- **Export to Sheets or DB** — Replace the Data Table node with Google Sheets, Notion, Airtable, or MySQL connectors.
- **Multi-Language Research** — Pass a locale parameter in the Decodo node to gather insights in specific languages.
- **Automate Alerts** — Add a Slack or Email node to notify your team when high-value topics appear.

Summary

Search & Enrich is a low-code, AI-powered keyword intelligence engine that automates research and enrichment for SEO, content, and digital marketing. By combining Decodo’s real-time SERP scraping with OpenAI’s contextual understanding, the workflow transforms raw search results into structured, actionable keyword insights. It eliminates repetitive research work, enhances content strategy, and keeps your keyword database continuously enriched — all within n8n.
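To make the Insert or Skip step concrete, here is a minimal n8n Code node sketch that deduplicates extracted results against URLs already present in the Data Table. The node name "Get Existing Rows" and the field names (url, keywords, topics) are assumptions; adapt them to your table schema.

```javascript
// Code node: keep only results whose URL is not already in the Data Table.
// Assumptions: "Get Existing Rows" returns the current table contents with a url column,
// and the enriched results arrive on this node's input with url/keywords/topics fields.
const existing = new Set(
  $('Get Existing Rows').all().map((row) => row.json.url)
);

const newRecords = $input.all().filter((item) => !existing.has(item.json.url));

return newRecords.map((item) => ({
  json: {
    url: item.json.url,
    keywords: (item.json.keywords || []).join(', '),
    topics: (item.json.topics || []).join(', '),
    collectedAt: new Date().toISOString(),
  },
}));
```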
by Davide
This is an example of an advanced automated data extraction and enrichment pipeline built with ScrapeGraphAI. Its primary purpose is to systematically scrape the n8n community workflows website, extract detailed information about recently added workflows, process that data using multiple AI models, and store the structured results in a Google Sheets spreadsheet. This workflow demonstrates a sophisticated use of n8n that moves beyond simple API calls into intelligent, AI-driven web scraping and data processing, turning unstructured website content into valuable, structured business intelligence.

Key Advantages
- ✅ Full Automation: Once triggered (manually or on a schedule via the Schedule Trigger node), the entire process runs hands-free, from data collection to spreadsheet population.
- ✅ Powerful AI-Augmented Scraping: It doesn't just scrape raw HTML. It uses multiple AI agents (Google Gemini, OpenAI) to:
  - Understand page structure to find the right data on the main list.
  - Clean and purify content from individual pages, removing irrelevant information.
  - Perform precise information extraction to parse unstructured text into structured JSON data based on a defined schema (author, price, etc.).
  - Generate intelligent summaries, adding significant value by explaining the workflow's purpose in Italian.
- ✅ Robust and Structured Data Output: The use of the Structured Output Parser and Information Extractor nodes ensures the data is clean, consistent, and ready for analysis. It outputs perfectly formatted JSON that maps directly to spreadsheet columns.
- ✅ Scalability via Batching: The Split In Batches and Loop Over Items nodes allow the workflow to process a dynamically sized list of workflows. Whether there are 5 or 50 new workflows, it will process each one sequentially without failing.
- ✅ Effective Data Integration: It seamlessly integrates with Google Sheets, acting as a simple and powerful database. This makes the collected data immediately accessible, shareable, and available for visualization in tools like Looker Studio.
- ✅ Resilience to Website Changes: By using AI models trained to understand content and context (like "find the 'Recently Added' section" or "find the author's name"), the workflow is more resilient to minor cosmetic changes on the target website than traditional CSS/XPath selectors.

How It Works

The workflow operates in two main phases.

Phase 1: Scraping the Main List
1. Trigger: The workflow can be started manually ("Execute Workflow") or automatically on a schedule.
2. Scraping: The "Scrape main page" node (using ScrapeGraphAI) fetches and converts the https://n8n.io/workflows/ page into clean Markdown format.
3. Data Extraction: An LLM chain ("Extract 'Recently added'") analyzes the Markdown. It is specifically instructed to identify all workflow titles and URLs within the "Recently Added" section and output them as a structured JSON array named workflows.
4. Data Preparation: The resulting array is set as a variable and then split out into individual items, preparing them for processing one-by-one.

Phase 2: Processing Individual Workflows
1. Loop: The "Loop Over Items" node iterates through each workflow URL obtained from Phase 1.
2. Scrape & Clean Detail Page: For each URL, the "Scrape single Workflow" node fetches the detail page. Another LLM chain ("Main content") cleans the resulting Markdown, removing superfluous content and focusing only on the core article text.
3. Information Extraction: The cleaned Markdown is passed to an "Information Extractor" node.
   This uses a language model to locate and structure specific data points (title, URL, ID, author, categories, price) into a defined JSON schema.
4. Summarization: The cleaned Markdown is also sent to a Google Gemini node ("Summarization content"), which generates a concise Italian summary of the workflow's purpose and tools used.
5. Data Consolidation & Export: The extracted information and the generated summary are merged into a single data object. Finally, the "Add row" node maps all this data to the appropriate columns and appends it as a new row in a designated Google Sheet (a small mapping sketch follows at the end of this description).

Set Up Steps

To run this workflow, you need to configure the following credentials in your n8n instance:
- ScrapeGraphAI Account: The "Scrape main page" and "Scrape single Workflow" nodes require valid ScrapeGraphAI API credentials named ScrapegraphAI account. Install the related community node.
- Google Gemini Account: Multiple nodes ("Google Gemini Chat Model", "Summarization content", etc.) require API credentials for Google Gemini named Google Gemini(PaLM) (Eure).
- OpenAI Account: The "OpenAI Chat Model1" node requires API credentials for OpenAI named OpenAi account (Eure).
- Google Sheets Account: The "Add row" node requires OAuth2 credentials for Google Sheets named Google Sheets account. You must also ensure the node is configured with the correct Google Sheet ID and that the sheet has a worksheet named Foglio1 (or update the node to match your sheet's name).

Need help customizing?

Contact me for consulting and support or add me on Linkedin.
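As a reference for the Data Consolidation & Export step, here is a minimal n8n Code node sketch that merges the Information Extractor output with the Gemini summary into one object shaped for the "Add row" node. The node names and the field paths referenced here are assumptions; rename them to match your workflow.

```javascript
// Code node: merge extracted fields and the Italian summary into one row object.
// Assumptions: "Information Extractor" returns title/url/id/author/categories/price,
// and "Summarization content" puts its text under json.response.text (adjust the path if needed).
const extracted = $('Information Extractor').first().json;
const summary = $('Summarization content').first().json.response?.text ?? '';

return [{
  json: {
    Title: extracted.title,
    URL: extracted.url,
    ID: extracted.id,
    Author: extracted.author,
    Categories: Array.isArray(extracted.categories)
      ? extracted.categories.join(', ')
      : extracted.categories,
    Price: extracted.price,
    Summary: summary, // Italian summary generated by Gemini
  },
}];
```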
by Cadu | Ei, Doc!
This n8n template demonstrates how to automate blog post creation with AI and WordPress.

This workflow is designed for creators who want to maintain an active blog without spending hours writing — while still taking advantage of SEO benefits. It connects OpenAI and WordPress to help you schedule AI-generated posts or create content from simple one- or two-word prompts.

🧠 Good to know
- At the time of writing, each AI-generated post will use your OpenAI API credits according to your model and usage tier.
- This workflow requires an active WordPress site with API access and your OpenAI API key.
- Setup is quick — in less than 5 minutes, you can have everything running smoothly!

⚙️ How it works
- The workflow connects to your WordPress API and your OpenAI account.
- You can choose between two modes:
  - Scheduled mode: AI automatically creates and publishes posts based on your defined schedule.
  - Prompt mode: Enter a short phrase (one or two words) and let AI generate a complete SEO-optimized post.
- The generated content is formatted and published directly to your WordPress blog (see the sketch below for what that publish call amounts to).
- You can easily customize prompts, post styles, or scheduling frequency to match your brand and goals.

🚀 How to use
- Start with the Manual Trigger node (as an example) — or replace it with other triggers such as webhooks, cron jobs, or form submissions.
- Adjust your OpenAI prompts to fine-tune the tone, structure, or SEO focus of your posts.
- You can also extend this workflow to automatically share posts on social media or send notifications when new articles go live.

✅ Requirements
- Active **OpenAI API key**
- **WordPress site** with API access

🧩 Customising this workflow
AI-powered content creation can be adapted for many purposes. Try using it for:
- Automated content calendars
- Generating product descriptions
- Creating newsletter drafts
- Building SEO-focused blogs effortlessly
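For orientation, here is a plain JavaScript sketch (Node 18+, run as an ES module so top-level await works) of the WordPress REST API call that the publish step performs; in the workflow itself this is handled by the WordPress or HTTP Request node. The domain, user, and Application Password below are placeholders.

```javascript
// Sketch: publishing a generated post via the WordPress REST API (POST /wp-json/wp/v2/posts).
// yourblog.example.com, wp-user and application-password are placeholders.
const auth = Buffer.from('wp-user:application-password').toString('base64');

const post = {
  title: 'Three Tips for Better Coffee',        // generated by the OpenAI step
  content: '<p>AI-generated post body goes here.</p>', // generated HTML
  status: 'publish',                            // or 'draft' to review before going live
};

const res = await fetch('https://yourblog.example.com/wp-json/wp/v2/posts', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Basic ${auth}`,
  },
  body: JSON.stringify(post),
});

console.log((await res.json()).link); // URL of the newly created post
```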
by AbSa~
🚀 Overview

This workflow automates video uploads from Telegram directly to Google Drive, complete with smart file renaming, Google Sheets logging, and AI assistance via Google Gemini. It’s perfect for creators, educators, or organizations that want to streamline video submissions and file management.

⚙️ How It Works
1. Telegram Trigger -> Starts the workflow when a user sends a video file to your Telegram bot.
2. Switch Node -> Detects the file type or command and routes the flow accordingly.
3. Get File -> Downloads the Telegram video file.
4. Upload to Google Drive -> Automatically uploads the video to your chosen Drive folder.
5. Smart Rename -> The file name is auto-formatted using dynamic logic (date, username, or custom tags); see the sketch below.
6. Google Sheets Logging -> Appends or updates upload data (e.g., filename, sender, timestamp) for easy tracking.
7. AI Agent Integration -> Uses Google Gemini AI connected to Data Vidio memory to analyze or respond intelligently to user queries.
8. Telegram Notification -> Sends confirmation or status messages back to Telegram.

🧠 Highlights
- Seamlessly integrates Telegram → Google Drive → Google Sheets → Gemini AI
- Supports file update or append mode
- Auto-rename logic via the Code node
- Works with custom memory tools for smarter AI responses
- Easy to clone and adapt; just connect your own credentials

🪄 Ideal Use Cases
- Video assignment submissions for schools or academies
- Media upload management for marketing teams
- Automated video archiving and AI-assisted review
- Personal Telegram-to-Drive backup assistant

🧩 Setup Tips
- Copy and use the provided Google Sheet template (SheetTemplate)
- Configure your Telegram Bot token, Google Drive, and Sheets credentials
- Update the AI Agent node with your Gemini API key and connect the Data Vidio sheet
- Test with a sample Telegram video before full automation
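Here is a minimal sketch of what the Smart Rename logic can look like in an n8n Code node (Run Once for Each Item mode). The incoming paths (message.from.username, message.video.file_name) follow the Telegram Bot API payload, but the exact naming format and the customTag field are assumptions; tweak them to your own convention.

```javascript
// Code node: build a predictable file name like "2024-05-01_jdoe_upload.mp4".
// Assumption: the Telegram Trigger delivers the update under $json.message.
const msg = $json.message ?? {};
const username = msg.from?.username || 'unknown-user';
const original = msg.video?.file_name || 'video.mp4';
const ext = original.includes('.') ? original.split('.').pop() : 'mp4';

const date = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
const tag = $json.customTag || 'upload';             // optional custom tag set earlier

return [{
  json: {
    newFileName: `${date}_${username}_${tag}.${ext}`,
  },
}];
```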
by Masaki Go
About This Template

This workflow creates high-quality, text-rich advertising banners from simple LINE messages. It combines Google Gemini (for marketing-focused prompt engineering) and Nano Banana Pro (accessed via the Kie.ai API) to generate images with superior text rendering capabilities. It also handles the asynchronous API polling required for high-quality image generation.

How It Works
1. Input: Users send a banner concept via LINE (e.g., "Coffee brand, morning vibe").
2. Prompt Engineering: Gemini optimizes the request into a detailed prompt, specifying lighting, composition, and Japanese catch-copy placement.
3. Async Generation: The workflow submits a job to Nano Banana Pro (Kie API) and intelligently waits/polls until the image is ready (a generic polling sketch follows below).
4. Hosting: The final image is downloaded and uploaded to a public AWS S3 bucket.
5. Delivery: The image is pushed back to the user on LINE.

Who It’s For
- Marketing teams creating A/B test assets.
- Japanese market advertisers needing accurate text rendering.
- Developers looking for an example of Async API Polling patterns in n8n.

Requirements
- **n8n** (Cloud or self-hosted).
- **Kie.ai API Key** (for the Nano Banana Pro model).
- **Google Gemini API Key**.
- **AWS S3 Bucket** (public access enabled).
- **LINE Official Account** (Messaging API).

Setup Steps
1. Credentials: Configure the "Header Auth" credential for the Kie.ai nodes (Header: Authorization, Value: Bearer YOUR_API_KEY).
2. AWS: Ensure your S3 bucket allows public read access so LINE can display the image.
3. Webhook: Add the production webhook URL to your LINE Developers console.
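Inside the workflow this waiting is typically built with a Wait node plus an IF loop, but the pattern is easier to read as plain JavaScript. The endpoint paths, the taskId parameter, and the status/resultUrl fields below are hypothetical placeholders, not the real Kie.ai API shape; check the Kie.ai docs for the actual field names.

```javascript
// Generic async polling sketch: submit a job, then poll until it finishes or times out.
// API_BASE, the paths and the response fields are hypothetical placeholders.
const API_BASE = 'https://api.example.com';
const headers = { Authorization: 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' };

async function generateBanner(prompt) {
  const submit = await fetch(`${API_BASE}/jobs`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ prompt }),
  });
  const { taskId } = await submit.json();

  // Poll every 5 seconds, give up after roughly 3 minutes.
  for (let attempt = 0; attempt < 36; attempt++) {
    await new Promise((r) => setTimeout(r, 5000));
    const check = await fetch(`${API_BASE}/jobs/${taskId}`, { headers });
    const job = await check.json();
    if (job.status === 'completed') return job.resultUrl; // image ready to download
    if (job.status === 'failed') throw new Error('Generation failed');
  }
  throw new Error('Timed out waiting for the image');
}
```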
by Cheng Siong Chin
Introduction

This workflow connects to OpenAI, Anthropic, and Groq, processing requests in parallel with automatic performance metrics. It is ideal for testing speed, cost, and quality across models.

How It Works

Webhooks trigger parameter extraction and routing. Three AI agents run simultaneously with memory and parsing. Responses merge with detailed metrics.

Workflow Template

Webhook → Extract Parameters → Router
  ├→ OpenAI Agent
  ├→ Anthropic Agent
  ├→ Groq Agent
→ Merge → Metrics → Respond

Workflow Steps
1. Webhook receives a POST with the prompt and settings.
2. Parameters are extracted and validated.
3. The Router directs by cost, latency, or request type.
4. AI agents run in parallel.
5. Results are merged with metadata.
6. Metrics compute time, cost, and quality (a small scoring sketch follows below).
7. The response returns the outputs and a recommendation.

Setup Instructions
1. Activate the Webhook with authentication.
2. Add API keys for all providers.
3. Define models, tokens, and temperature.
4. Adjust the Router logic for selection.
5. Tune the Metrics scoring formulas.

Prerequisites
- n8n v1.0+ instance
- API keys: OpenAI, Anthropic, Groq
- HTTP client for testing

Customization
- Add providers like Gemini or Azure OpenAI.
- Enable routing by cost or performance.

Benefits

Auto-select efficient providers and compare model performance in real time.
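A minimal n8n Code node sketch of the Metrics step: it compares the merged provider responses on latency, rough token cost, and response length, then recommends one. The per-1K-token prices and the provider/latencyMs/usage field names are assumptions; replace them with your own values and the fields your agents actually emit.

```javascript
// Code node: score each provider response and pick a recommendation.
// Assumption: every merged item carries provider, latencyMs, output and usage.totalTokens.
const PRICE_PER_1K = { openai: 0.005, anthropic: 0.008, groq: 0.001 }; // placeholder prices (USD)

const scored = $input.all().map((item) => {
  const { provider, latencyMs, output, usage } = item.json;
  const cost = ((usage?.totalTokens ?? 0) / 1000) * (PRICE_PER_1K[provider] ?? 0.01);
  // Lower latency and cost score better; longer answers score slightly higher as a rough quality proxy.
  const score = (output?.length ?? 0) / 1000 - latencyMs / 10000 - cost * 10;
  return { provider, latencyMs, cost: Number(cost.toFixed(4)), score: Number(score.toFixed(3)), output };
});

scored.sort((a, b) => b.score - a.score);

return [{
  json: {
    recommendation: scored[0].provider,
    results: scored,
  },
}];
```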
by Martijn Kerver
Description

Transform training prescriptions into perfectly formatted Intervals.icu workouts using AI. This workflow automatically converts free-text workout descriptions into structured interval training sessions with proper heart rate zones, pace calculations, and exercise formatting.

What this workflow does
1. Collects workout details via a web form (date, title, and workout description)
2. Fetches athlete data from Intervals.icu (FTP, max HR, threshold pace, LTHR)
3. Processes with AI using Claude Opus 4.1 to intelligently parse and format the workout
4. Auto-detects the workout type (Run, Ride, Strength, HYROX, CrossFit, etc.)
5. Converts training zones - RPE → HR%, pace calculations, power zones (see the sketch below)
6. Formats the workout structure with proper transitions, rest periods, and circuit formatting
7. Creates the workout in Intervals.icu via the API

Use cases
- **Coaches**: Convert training plans from documents/spreadsheets into Intervals.icu format
- **Athletes**: Quickly add structured workouts from coaching apps or training programs
- **Hybrid training**: Handle complex HYROX, CrossFit, or multi-sport sessions with circuit formatting
- **Time savings**: Eliminate manual workout entry and zone calculations

Supported workout types

Running, cycling, swimming, strength training, HYROX, CrossFit, indoor rowing, virtual training (Zwift), triathlon, and more.

Key features
- ✅ Intelligent workout type detection
- ✅ Automatic RPE to HR zone conversion using athlete-specific data
- ✅ Proper formatting for intervals, circuits, supersets, and progressions
- ✅ Adds transitions between exercises/machines
- ✅ Calculates exercise durations and pacing
- ✅ Handles warmup/cooldown sections
- ✅ Generates unique workout IDs

Setup requirements
- **Intervals.icu account** with API access (API key required)
- **Anthropic API key** for Claude AI
- Athlete must have training zones configured in Intervals.icu (FTP, max HR, LTHR, threshold pace)

Setup instructions

Getting your Intervals.icu API key
1. Log in to Intervals.icu
2. Go to Settings (gear icon) → Developer Settings
3. Click Generate API Key (or copy your existing key)
4. Save the API key securely

Configuring credentials in n8n

For Intervals.icu (HTTP Basic Auth):
1. In n8n, open the GetAthleteInfo or CreateWorkoutAPI node
2. Click on Credentials → Create New Credential
3. Select HTTP Basic Auth
4. Enter:
   - Username: API_KEY (literally type "API_KEY")
   - Password: Your actual API key from Intervals.icu
5. Click Save
6. Apply this credential to both HTTP Request nodes

For Anthropic:
1. Open the Anthropic Chat Model node
2. Click on Credentials → Create New Credential
3. Enter your Anthropic API key
4. Click Save

Important: The Intervals.icu API uses HTTP Basic Authentication where the username is always the literal string "API_KEY" and the password is your actual API key.

How it works

The workflow uses a sophisticated AI agent with a detailed system prompt that understands training terminology, zones, and Intervals.icu formatting requirements. It applies sport-specific rules to ensure workouts are properly structured for tracking during training sessions.
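The RPE-to-heart-rate conversion is handled by the Claude prompt inside the workflow, but the idea can be illustrated with a small JavaScript helper, assuming the athlete's max HR has already been fetched by GetAthleteInfo. The RPE-to-%HRmax bands below are an illustrative assumption, not the template's exact rules.

```javascript
// Illustrative helper: translate an RPE value into a target heart-rate range.
// The percentage bands are an assumption for demonstration, not the template's exact mapping.
const RPE_TO_PCT_HRMAX = {
  3: [0.60, 0.70], // easy / recovery
  5: [0.70, 0.80], // steady endurance
  7: [0.80, 0.88], // tempo / threshold-ish
  9: [0.88, 0.95], // hard intervals
};

function rpeToHeartRate(rpe, maxHr) {
  const [lo, hi] = RPE_TO_PCT_HRMAX[rpe] ?? [0.65, 0.75];
  return {
    lowBpm: Math.round(maxHr * lo),
    highBpm: Math.round(maxHr * hi),
  };
}

// Example with an athlete whose max HR is 190 bpm (fetched by GetAthleteInfo):
console.log(rpeToHeartRate(7, 190)); // { lowBpm: 152, highBpm: 167 }
```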
by Fabian Herhold
Who’s it for

Recruiting agencies, executive search firms, and in-house talent teams that want to automate candidate sourcing and prequalification. Instead of spending hours searching, scoring, and writing outreach, this workflow turns any job description into a ready-to-use shortlist with personalized messages.

Youtube Walkthrough

What it does (How it works)

This workflow takes a job description (title, description, and location) and runs a complete recruiting automation pipeline:
- **Normalize job titles** and generate variations to widen search coverage.
- **Search candidates** in Apollo (or your CRM / database of choice).
- **Remove duplicates** to keep clean lists.
- **Score candidates** with AI (0–5) and provide concise reasoning across experience, industry, and seniority.
- **Enrich LinkedIn profiles** (name, title, image, location, experience).
- **Create structured candidate assessments** (summary, alignment, red flags, positives).
- **Generate outreach messages** (email + LinkedIn DM) tailored to the candidate.
- **Write to Airtable** for job/candidate tracking and downstream automation.

Everything is plug-and-play, with no manual searching or copy-pasting required.

Requirements
- n8n (Cloud or self-hosted)
- Airtable account + API access
- Apollo API or your preferred candidate source
- LLM provider: OpenAI or Anthropic
- LinkedIn enrichment API (RapidAPI, Apify, etc.)

> ⚠️ Do not hardcode API keys in HTTP nodes. Always use Credentials in n8n.

Airtable table specifications

Create one base (e.g., Candidate Search – From Job Description) with two tables:

Jobs Table
- Job Title (text)
- Job Description (long text)
- Job Location (text)
- Candidates (linked to Candidates table)

Candidates Table
- Core fields: Name, LinkedIn URL, Job Title, Location, Image URL, Job Searches (linked)
- Assessment fields: Summary Fit Score, Executive Summary, Title Alignment, Skill Alignment, Industry Alignment, Seniority Alignment, Company Type Alignment, Educational Alignment, Potential Red Flags, Positive Signals, Final Recommendation, Next Steps Suggestion
- Outreach fields: Email Subject, Email Body, LinkedIn Message

How to set up
1. Connect credentials: Add Airtable, Apollo/CRM, and OpenAI/Anthropic credentials under n8n Credentials.
2. Create the Airtable base/tables: Follow the spec above for Jobs and Candidates. Match field names exactly to avoid mapping errors.
3. Configure the trigger: The workflow starts from a Form/Webhook node. It captures:
   - Job Title (required)
   - Job Description (required)
   - Location (required)
   - Target Companies (optional, comma-separated domains)
4. Job title mutation: The workflow uses an AI node to normalize the job title and generate up to 5 variations for broader candidate searches.
5. Candidate search: Apollo (or your CRM API) is queried with the generated titles and location filters. Results are deduped.
6. AI scoring & structuring: Candidates are scored 0–5 with clear reasoning (experience, industry, seniority, general fit). Profiles are formatted into structured JSON for Airtable.
7. LinkedIn enrichment: The enrichment API fetches missing data (geo, image, job history).
8. Candidate assessment: An AI model produces a full recruiter-ready evaluation (fit summary, strengths, red flags).
9. Outreach generation: The workflow drafts a concise cold email (<75 words) and LinkedIn DM (<60 words), consultative in tone.
10. Write to Airtable: All jobs and candidates (with assessments and outreach messages) are logged for review and integration.

How to customize
- **Swap Apollo with your CRM** (Greenhouse, Bullhorn, etc.).
- **Adjust scoring prompts** to match your niche (sales, engineering, healthcare).
- **Add custom filters** for target companies or industries.
- **Change outreach tone** to align with your brand voice.
- **Limit by score** (e.g., only push candidates with a score ≥ 4); a filter sketch follows below.

Security & best practices
- Store all keys in n8n Credentials (never in nodes).
- Use Set nodes to centralize editable variables (title, location, filters).
- Always add sticky notes in your workflow explaining steps.
- Rename nodes clearly for readability.

Troubleshooting
- **No candidates found?** Loosen title variations or broaden location.
- **Low fit scores?** Refine keywords and required skills in scoring prompts.
- **Airtable errors?** Double-check Base ID, Table ID, and field names.
- **API rate limits?** Enable batching/pagination and increase intervals.

SEO title: Build candidate shortlists from a job description to Airtable with Apollo, AI scoring, and personalized outreach

Keywords: recruiting automation, Apollo people search, candidate enrichment, AI scoring, Airtable recruiting CRM, LinkedIn outreach, n8n workflow template
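A minimal n8n Code node sketch for the "Limit by score" customization: it keeps only candidates whose AI fit score is at least 4 before they are written to Airtable. The fitScore field name is an assumption; use whatever key your scoring node outputs.

```javascript
// Code node: keep only candidates with a fit score of 4 or higher.
// Assumption: the scoring step stores the 0-5 score under json.fitScore.
const MIN_SCORE = 4;

const shortlisted = $input.all().filter(
  (item) => Number(item.json.fitScore ?? 0) >= MIN_SCORE
);

// If nobody passes, return an empty array so the Airtable node simply writes nothing.
return shortlisted;
```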
by plemeo
Who’s it for

Growth hackers, community builders, and marketers who want to keep their Twitter (X) accounts active by liking posts from selected profiles automatically.

How it works / What it does
1. Schedule Trigger fires hourly.
2. Profile Post Extractor fetches up to 20 tweets for each profile in your CSV.
3. Select Cookie rotates Twitter session cookies.
4. Get Random Post checks against twitter_posts_already_liked.csv (a selection sketch follows below).
5. Builds twitter_posts_to_like.csv and uploads it to SharePoint.
6. Phantombuster Autolike Agent likes the tweet.
7. Logs the liked URL to avoid duplicates.

How to set up
1. Add Phantombuster + SharePoint credentials.
2. In the SharePoint “Phantombuster” folder, provide:
   - twitter_session_cookies.txt
   - twitter_posts_already_liked.csv (header postUrl)
   - profiles_twitter.csv (list of profiles)

Profile CSV format

Your profiles_twitter.csv must contain a header profileUrl and direct links to the Twitter profiles. Example:

profileUrl
https://twitter.com/elonmusk
https://twitter.com/openai
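A minimal n8n Code node sketch of the Get Random Post step: it picks one tweet that is not yet listed in twitter_posts_already_liked.csv. The postUrl column name comes from the setup above; the "Read Already Liked" node name and the shape of the extracted tweets are assumptions.

```javascript
// Code node: choose a random not-yet-liked post.
// Assumptions: "Read Already Liked" provides rows with a postUrl column,
// and the extracted tweets arrive on this node's input with a postUrl field each.
const alreadyLiked = new Set(
  $('Read Already Liked').all().map((row) => row.json.postUrl)
);

const candidates = $input.all().filter(
  (item) => item.json.postUrl && !alreadyLiked.has(item.json.postUrl)
);

if (candidates.length === 0) {
  return []; // nothing new to like this run
}

const pick = candidates[Math.floor(Math.random() * candidates.length)];
return [{ json: { postUrl: pick.json.postUrl } }];
```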
by Fariez
Different X and Threads Content Auto Poster

X (Twitter) and Threads (by Meta) have different maximum character lengths. This n8n template demonstrates how to post different content optimized for X (Twitter) and Meta Threads using the Late API. You can use it for any niche, for example posting AI news to X and Threads.

Possible use cases:
- Schedule your posts to X and Threads.
- Use this workflow as a content calendar and automated posting system.
- Apply it across different content niches.

How it works
1. The automation runs according to the time defined in the Schedule Trigger node.
2. Content is pulled from Google Sheets.
3. Any URL is shortened using your preferred short URL API.
4. Images are uploaded to Late’s server first.
5. Content for X is posted in Step 2. The workflow checks that the content length is under 280 characters (a length-check sketch follows below).
6. Content for Threads is posted in Step 3. The workflow checks that the content length is under 500 characters.
7. Posts on X are published as threaded posts, while on Threads they are single posts.
8. Once posted, the Google Sheets content database is updated.

Requirements
- Google OAuth credentials with the Google Sheets API enabled
- Bitly account and access token (or OAuth)
- GetLate API connected to your X and Threads accounts

HOW TO USE

STEP 1
1. Adjust the settings in the Schedule Trigger node to define when the workflow runs.
2. Open this Google Sheets template, then go to File → Make a copy, and update the settings in the Get Topic node.
3. Get your Bitly OAuth or Access Token here and add the credentials in the Short Link node.
4. Get your API key from getlate.dev and add the credentials in the Upload IMG node.

STEP 2
1. Add your Late credentials to the Post Twitter node.
2. Get your Twitter account ID from Late, and update it in the JSON Body section of the Post Twitter node.

STEP 3
1. Add your Late credentials to the Post Threads node.
2. Get your Threads account ID from Late, and update it in the JSON Body section of the Post Threads node.
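A minimal n8n Code node sketch of the per-platform length check: it validates (or trims) the text against the 280-character X limit and the 500-character Threads limit before the posting steps. Treat the field names (xText, threadsText) as assumptions matching whatever your Get Topic step outputs.

```javascript
// Code node: enforce the platform character limits before posting.
// Assumption: the sheet row provides separate xText and threadsText fields.
const LIMITS = { x: 280, threads: 500 };

return $input.all().map((item) => {
  const { xText = '', threadsText = '' } = item.json;
  return {
    json: {
      ...item.json,
      xOk: xText.length <= LIMITS.x,
      threadsOk: threadsText.length <= LIMITS.threads,
      // Fallback: hard-trim with an ellipsis if a text is too long.
      xText: xText.length <= LIMITS.x ? xText : xText.slice(0, LIMITS.x - 1) + '…',
      threadsText: threadsText.length <= LIMITS.threads
        ? threadsText
        : threadsText.slice(0, LIMITS.threads - 1) + '…',
    },
  };
});
```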