by Aslamul Fikri Alfirdausi
How it works

This workflow is a professional-grade market intelligence tool designed to bridge the gap between search interest and social media engagement. It automates the end-to-end process of trend discovery and content strategy.

- **Detection:** Polls Google Trends RSS daily for rising regional search queries.
- **Parallel Extraction:** Concurrently triggers industrial-grade Apify actors to scrape TikTok, Instagram, and X (Twitter) without the risk of account bans.
- **Data Aggregation:** Uses custom JavaScript logic to clean and merge disparate data points, optimizing them for LLM processing.
- **AI Analysis:** Google Gemini Flash analyzes the data to identify core topics, sentiment, and trend strength.
- **Granular Delivery:** Delivers individual, structured reports for each identified trend directly to Discord via Webhooks.

Set up steps

1. **API Credentials:** Prepare your Apify API Token and Google Gemini API Key.
2. **Discord Setup:** Create a Webhook in your Discord server and paste the URL into the Discord node.
3. **Regional Configuration:** Set your target country code (e.g., JP, ID, US) in the "Edit Fields" node at the start of the workflow.
4. **Node Settings:** Ensure all scraper nodes are set to "Continue on Fail" to maintain workflow resilience.

Requirements

- Apify account
- Google Gemini API key
- Discord server for report delivery
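The "Data Aggregation" step described above can be sketched as a small n8n Code-node function. This is a minimal illustration, not the template's actual code; the field names (`trend`, `likes`, `text`) are assumptions about the shape of the scraped items.

```javascript
// Merge scraped posts from several platforms into one compact summary per
// trend, so the LLM receives a few short samples instead of raw items.
// Field names are illustrative assumptions, not the Apify actors' schema.
function aggregateTrends(items) {
  const byTrend = {};
  for (const item of items) {
    const key = item.trend;
    if (!byTrend[key]) {
      byTrend[key] = { trend: key, posts: 0, totalLikes: 0, samples: [] };
    }
    const t = byTrend[key];
    t.posts += 1;
    t.totalLikes += item.likes || 0;
    // Keep at most three truncated text samples to keep the prompt short.
    if (t.samples.length < 3) t.samples.push((item.text || '').slice(0, 120));
  }
  return Object.values(byTrend);
}
```

Inside an n8n Code node, the same function would be applied to `items.map(i => i.json)` and the result returned as new items.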
by SpaGreen Creative
WhatsApp Bulk Number Verification in Google Sheets Using Unofficial Rapiwa API

Who’s it for

This workflow is for marketers, small business owners, freelancers, and support teams who want to automate WhatsApp messaging from a Google Sheet without the official WhatsApp Business API. It suits anyone who needs a budget-friendly, easy-to-maintain solution that uses a personal or business WhatsApp number via an unofficial API service such as Rapiwa.

How it works / What it does

- The workflow looks for rows in a Google Sheet where the Status column is pending.
- It cleans each phone number (removes non-digits).
- It verifies the number with the Rapiwa verify endpoint (/api/verify-whatsapp).
- If the number is verified: the workflow can send a message (optional) and updates the sheet with Verification = verified, Status = sent (or leaves Status for the send node to update).
- If the number is not verified: it skips sending and updates the sheet with Verification = unverified, Status = not sent.
- Rows are processed in batches, with short delays between items to avoid rate limits.
- The whole process runs on a configurable schedule.

Key features

- Scheduled automatic checks (configurable interval; 5–10 minutes recommended).
- Cleans phone numbers into a proper format before verification.
- Verifies WhatsApp registration using Rapiwa.
- Batch processing with limits to control workload (recommended max per run is configurable).
- Short delay between items to reduce throttling and temporary blocks.
- Automatic sheet updates for auditability (verified/unverified, sent/not sent).

Recommended defaults

- Trigger interval: every 5–10 minutes (adjustable).
- Max items per run: configurable (example: 200 per cycle).
- Delay between items: 2–5 seconds (the example uses 3 seconds).

How to set up

- Duplicate the sample Google Sheet: ➤ Sample
- Fill contact rows and set Status = pending. Include columns like WhatsApp No, Name, Message, Verification, Status.
- In n8n, add and authenticate a Google Sheets node pointed to your sheet.
- Create an HTTP Bearer credential in n8n and paste your Rapiwa API key.
- Configure the workflow nodes (Trigger → Google Sheets → Limit/SplitInBatches → Code (clean) → HTTP Request (verify) → If → Update Sheet → Wait).
- Enable the workflow and monitor the first runs with a small test batch.

Requirements

- n8n instance with Google Sheets and HTTP Request nodes enabled.
- Google Sheets OAuth2 credentials configured in n8n.
- Rapiwa account and Bearer token (stored in n8n credentials).
- Google Sheet formatted to match the workflow columns.

Why use Rapiwa

- Cost-effective, developer-friendly REST API for WhatsApp verification and sending.
- Simple integration via HTTP requests and n8n.
- Useful when you prefer not to use the official WhatsApp Business API.
- Note: Rapiwa is an unofficial service — review its terms and risks before production use.

How to customize

- Change the schedule frequency in the Trigger node.
- Adjust maxItems in Limit/SplitInBatches for throughput control.
- Change the Wait node delay for safer sending.
- Modify the HTTP Request body to support media or templates if the provider supports it.
- Add logging or a separate audit sheet to record API responses and errors.

Example HTTP verify body (n8n HTTP Request node)

```json
{ "number": "{{ $json['WhatsApp No'] }}" }
```

Notes and best practices

- Test with a small batch before scaling.
- Keep the sheet headers exact and consistent; the workflow matches columns by name.
- Store the Rapiwa token in n8n credentials (do not hardcode it in node fields).
- Increase the Wait delay or reduce batch size if you see rate limits or temporary blocks.
- Keep a log of verified/unverified rows and API responses for troubleshooting.
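The "clean phone number" step is essentially one line of JavaScript in the Code node. A minimal sketch (the `WhatsApp No` column name matches the sample sheet; adjust if yours differs):

```javascript
// Strip everything that is not a digit so Rapiwa receives a bare number,
// e.g. "+880 17-1234 5678" becomes "8801712345678".
function cleanNumber(raw) {
  return String(raw).replace(/\D/g, ''); // drops spaces, +, dashes, parentheses
}
```

In the workflow's Code node this would be applied to each incoming item, e.g. `item.json['WhatsApp No'] = cleanNumber(item.json['WhatsApp No'])`.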
Optional

- Add a send-message HTTP Request node after verification to send messages.
- Append successful and failed rows to separate sheets for easy review.

Support & Community

Need help setting up or customizing the workflow? Reach out here:

- WhatsApp: Chat with Support
- Discord: Join SpaGreen Server
- Facebook Group: SpaGreen Community
- Website: SpaGreen Creative
- Envato: SpaGreen Portfolio
by isaWOW
Description

An intelligent AI-powered workflow that automates HR document creation for new hires. Upload candidate documents via a form, and the system extracts details, auto-calculates joining dates, fills professional templates using GPT-4, and saves the final Offer Letter or Employment Contract directly to Google Docs—all in seconds.

What this workflow does

This automation handles your complete HR onboarding document pipeline:

- **Form-based submission:** HR fills a simple form with the candidate's Identity card, Resume, Job Role, Salary, and document type selection
- **Smart data extraction:** Automatically extracts the candidate name from the uploaded Resume PDF and calculates the joining date (1st of next month)
- **Template selection:** Routes to the Offer Letter or Employment Contract template based on HR's selection
- **AI-powered filling:** Uses OpenAI GPT-4.1-mini to intelligently fill all placeholders while preserving exact formatting, line breaks, and emojis
- **Google Docs output:** Saves the final professional document directly to Google Docs, ready to send to the candidate

Setup requirements

Tools you'll need:

- Active n8n instance (self-hosted or n8n Cloud)
- Google Docs with OAuth access
- OpenAI API key (GPT-4.1-mini access)
- PDF documents: candidate's Identity card and Resume

Estimated setup time: 15–20 minutes

Step-by-step setup

1. Connect Google Docs

- In n8n: Credentials → Add credential → Google Docs OAuth2 API
- Complete OAuth authentication
- Open the "Save Document to Google Docs" node
- Create a new Google Doc or use an existing template document
- Copy the document URL and paste it in the documentURL field

2. Add OpenAI API credentials

- Get an API key: https://platform.openai.com/api-keys
- In n8n: Credentials → Add credential → OpenAI API
- Paste your API key
- Open the "OpenAI GPT-4.1 Mini Model" node
- Select your OpenAI credential
- Ensure the model is set to gpt-4.1-mini

3. Customize document templates

The workflow includes two pre-built templates.
You can customize them:

Offer Letter Template:

- Open the "Load Offer Letter Template" node
- Edit the template_offer value
- Available placeholders: [Candidate Name], [Job Role], [Department/Team Name], [Company Name], [Joining Date], [Salary Details], [Work Location], [Reporting Manager/Team Lead], [Probation Period]

Employment Contract Template:

- Open the "Load Contract Template" node
- Edit the template_contract value
- Uses the same placeholders with additional terms-and-conditions sections

Important: Keep all placeholders in square brackets [Placeholder Name] exactly as shown. The AI will replace them automatically.

4. Configure form fields (optional)

- Open the "Receive Candidate Details via Form" node
- Default fields: Identity card (file), Resume (file), Job Role (text), Salary Details (number), Type (dropdown)
- Add optional fields if needed: Department Name, Work Location, Reporting Manager, Probation Period
- Copy the Form URL from the node settings
- Share this URL with your HR team

5. Test the workflow

- Open the Form URL in your browser
- Upload sample Identity card and Resume PDFs
- Fill Job Role: "Software Developer"
- Enter Salary Details: 50000
- Select Type: "Offer Letter"
- Submit the form
- Check your Google Docs—the filled document should appear automatically
- Verify all placeholders are replaced correctly

6. Activate the workflow

- Toggle the workflow to Active at the top
- The form will now accept submissions 24/7
- Each submission generates a new document in Google Docs

How it works

1. Form submission

HR opens the form link, uploads candidate documents (Identity card + Resume as PDFs), enters job details, and selects the document type (Offer Letter or Contract).

2. Document processing

The workflow splits uploaded files into separate items and auto-generates two dates:

- **Form Date:** Current submission date
- **Joining Date:** Automatically set to the 1st of next month

3. Text extraction

Extracts text from both PDF documents using n8n's built-in Extract From File node.
The candidate's name is typically pulled from the Resume.

4. Data aggregation

Combines extracted text from both documents into a single data item for processing.

5. Template routing

Checks the selected document type:

- If "Offer Letter" → loads the Offer Letter template
- If "Contract" → loads the Employment Contract template

6. AI template filling

The AI Agent receives:

- The selected template with placeholders
- All form data (Job Role, Salary, etc.)
- The candidate name extracted from the Resume
- The auto-calculated joining date

GPT-4.1-mini fills every placeholder with actual data while strictly preserving:

- Line breaks and paragraph spacing
- Emojis and special characters
- Bold/italic formatting markers
- Email, phone, and web links

7. Google Docs save

The final filled document is inserted into your Google Docs document. Each submission appends a new document, so you can maintain a running archive or clear the doc periodically.

Key features

- ✅ Zero manual typing: Extracts the candidate name automatically from the Resume PDF—no copy-paste needed
- ✅ Smart date calculation: Joining date auto-set to the 1st of next month based on the submission date
- ✅ Dual document types: Choose between a simple Offer Letter or a detailed Employment Contract with terms
- ✅ AI preserves formatting: GPT-4.1-mini maintains exact line breaks, emojis, and structure from templates
- ✅ Google Docs integration: Documents saved directly—no downloads, conversions, or file juggling
- ✅ Customizable templates: Edit both templates to match your company's tone, policies, and branding
- ✅ Form-based workflow: Share one URL with the HR team—no n8n access needed for daily use

Troubleshooting

Google Docs not saving

- **Re-authenticate OAuth credentials:** Go to Credentials → Google Docs OAuth2 API → Reconnect
- **Check the document URL:** Ensure the documentURL field contains a valid Google Docs link (not a Sheets link)
- **Verify permissions:** Make sure the connected Google account has edit access to the document

Candidate name not extracting correctly

- **Resume format issue:** The workflow expects the candidate name in the Resume PDF text. If your Resume format is unusual, you may need to adjust the extraction logic.
- **Check the extraction node:** Open the "Extract Text from Identity card and Resume" node → Test execution → Verify the text output
- **Manual override:** Add a "Candidate Name" field to the form if automatic extraction fails

AI not filling placeholders

- **Check the API key:** Verify the OpenAI API key is active and has credits at https://platform.openai.com/usage
- **Placeholder mismatch:** Ensure template placeholders exactly match the format [Placeholder Name] with square brackets
- **Test the AI node:** Click "Fill Template with AI" → Execute node → Check the output for errors

Joining date incorrect

- **Timezone issue:** The date calculation uses the server timezone. Verify your n8n instance timezone settings.
- **Custom date needed:** If you want different joining-date logic (e.g., 15 days from submission), edit the "Split Documents and Calculate Dates" code node.

Form not accepting file uploads

- **File size limit:** The default n8n form limit is 16 MB. Compress PDFs if larger.
- **File type validation:** Ensure uploaded files are PDFs, not images or other formats.
- **Browser issue:** Try a different browser (Chrome is recommended for file uploads).

Use cases

- HR teams at growing companies: Onboard 5–10 new hires per week without spending hours on document preparation. Generate consistent, professional documents in seconds.
- Recruitment agencies: Send offer letters to multiple candidates daily. Maintain brand consistency while scaling operations without adding admin staff.
- Startups and small businesses: Automate HR paperwork from day one. Focus on candidate experience instead of document formatting.
- Remote-first companies: Enable distributed HR teams to generate documents without shared drives or email chains. Single form link, instant output.
- Consulting firms: Create client-specific employment contracts with custom templates. Switch between contract types based on project requirements.
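The default joining-date rule (1st of the month after submission) handled by the "Split Documents and Calculate Dates" Code node can be sketched in plain JavaScript. This is an illustration of the rule, not the node's exact code, and it uses the server-local timezone, as the troubleshooting note above explains:

```javascript
// Compute the joining date: the 1st of the month after the submission date.
// JavaScript months are 0-based; Date() normalizes month 12 of year Y to
// January of year Y+1, so December submissions roll over correctly.
function joiningDate(submission) {
  const d = new Date(submission);
  return new Date(d.getFullYear(), d.getMonth() + 1, 1);
}
```

To switch to a "15 days from submission" policy instead, the return line would become `new Date(d.getTime() + 15 * 24 * 60 * 60 * 1000)`.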
Expected results

- **Time savings:** 15–20 minutes saved per document (from 20 min manual → 2 min automated)
- **Output quality:** Professional, error-free documents with consistent formatting every time
- **Scalability:** Process 50+ candidates per week without additional HR headcount
- **Error reduction:** Eliminate the typos and placeholder mistakes common in manual copy-paste workflows
- **Faster hiring:** Send offer letters within 5 minutes of candidate approval

Workflow customization

Add more form fields

Open the "Receive Candidate Details via Form" node and add custom fields:

- Department Name (dropdown with your teams)
- Work Location (dropdown: Remote, Office, Hybrid)
- Reporting Manager (text input)
- Probation Period (number input)
- Start Date (date picker for custom joining dates)

These values automatically populate in templates if the matching placeholders are added.

Create additional templates

Duplicate one of the "Load Template" nodes and create:

- Internship Offer Letter
- Part-time Contract
- Freelancer Agreement
- Probation Extension Letter

Add a corresponding option to the form's Type dropdown.

Send documents via email

Add an Email node after "Save Document to Google Docs":

- Attach the Google Docs link
- Send to the candidate's email (add an email field to the form)
- CC the HR manager automatically

Multi-language support

Create template variations in different languages. Add a "Language" dropdown to the form and route to the appropriate template.

Support

Need help or custom development?

- 📧 Email: info@isawow.com
- 🌐 Website: https://isawow.com/
by Yasir
🧠 Workflow Overview — AI-Powered Jobs Scraper & Relevancy Evaluator

This workflow automates the process of finding highly relevant job listings based on a user’s resume, career preferences, and custom filters. It scrapes fresh job data, evaluates relevance using OpenAI GPT models, and automatically appends the results to your Google Sheet tracker — while skipping any jobs already in your sheet, so you don’t have to worry about duplicates.

Perfect for recruiters, job seekers, or virtual assistants who want to automate job research and filtering.

⚙️ What the Workflow Does

- Takes user input through a form — including resume, preferences, target score, and Google Sheet link.
- Fetches job listings via an Apify LinkedIn Jobs API actor.
- Filters and deduplicates results (removes duplicates and blacklisted companies).
- Evaluates job relevancy using GPT-4o-mini, scoring each job (0–100) against the user’s resume and preferences.
- Applies a relevancy threshold to keep only top-matching jobs.
- Checks your Google Sheet for existing jobs and prevents duplicates.
- Appends new, relevant jobs directly into your provided Google Sheet.

📋 What You’ll Get

- A personal Job Scraper Form (public URL you can share or embed).
- Automatic job collection and filtering based on your inputs.
- A **relevance score** (0–100) for each job, based on your resume and preferences.
- A real-time job-tracking Google Sheet that includes: Job Title; Company Name & Profile; Job URLs; Location, Salary, HR Contact (if available); Relevancy Score.

🪄 Setup Instructions

1. Required Accounts

You’ll need:

- ✅ n8n account (self-hosted or Cloud)
- ✅ Google account (for Sheets integration)
- ✅ OpenAI account (for GPT API access)
- ✅ Apify account (to fetch job data)

2. Connect Credentials

In your n8n instance, go to Credentials → Add New:

- Google Sheets OAuth2 API: connect your Google account.
- OpenAI API: add your OpenAI API key.
- Apify API: replace <your_apify_api> with your Apify API key.
Set Up Apify API

- Get your Apify API key: visit https://console.apify.com/settings/integrations and copy your API key.
- Rent the required Apify actor before running this workflow: go to https://console.apify.com/actors/BHzefUZlZRKWxkTck/input and click “Rent Actor”. Once rented, your Apify account can use it to fetch job listings.

3. Set Up Your Google Sheet

- Make a copy of this template: 📄 Google Sheet Template
- Enable Edit access for anyone with the link.
- Copy your sheet’s URL — you’ll provide this when submitting the workflow form.

4. Deploy & Run

- Import this workflow (jobs_scraper.json) into your n8n workspace.
- Activate the workflow.
- Visit your form trigger endpoint (e.g. https://your-n8n-domain/webhook/jobs-scraper).
- Fill out the form with: job title(s); location; contract type, experience level, working mode, date posted; target relevancy score; Google Sheet link; resume text; job preferences or ranking criteria.
- Submit — within minutes, new high-relevance job listings will appear in your Google Sheet automatically.

🧩 Example Use Cases

- Automate daily job scraping for clients or yourself.
- Filter jobs by AI-based relevance instead of keywords.
- Build a smart job board or job alert system.
- Support a career agency offering done-for-you job search services.

💡 Tips

- Adjust the “Target Relevancy Score” (e.g., 70–85) to control how strict the filtering is.
- Add your own blacklisted companies in the Filter & Dedup Jobs node.
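The Filter & Dedup Jobs step can be pictured as a short Code-node function. A hedged sketch: the field names (`jobUrl`, `companyName`) are assumptions about the Apify actor's output, not the workflow's guaranteed schema.

```javascript
// Drop repeated job URLs and any job posted by a blacklisted company.
// Comparison is case-insensitive so "BadCo" and "badco" both match.
function filterJobs(jobs, blacklist) {
  const seen = new Set();
  const blocked = new Set(blacklist.map(c => c.toLowerCase()));
  return jobs.filter(job => {
    if (seen.has(job.jobUrl)) return false;                              // duplicate
    if (blocked.has((job.companyName || '').toLowerCase())) return false; // blacklisted
    seen.add(job.jobUrl);
    return true;
  });
}
```

The same dedup idea extends to the sheet check: load the existing job URLs from the Google Sheet into the `seen` set before filtering, and already-tracked jobs are skipped too.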
by Nikan Noorafkan
🧠 Google Ads Monthly Performance Optimization (Channable + Google Ads + Relevance AI)

🚀 Overview

This workflow automatically analyzes your Google Ads performance every month, identifies top-performing themes and categories, and regenerates optimized ad copy using Relevance AI — powered by insights from your Channable product feed. It then saves the improved ads to Google Sheets for review and sends a detailed performance report to your Slack workspace.

Ideal for marketing teams who want to automate ad optimization at scale with zero manual intervention.

🔗 Integrations Used

- **Google Ads** → Fetch campaign and ad performance metrics using GAQL.
- **Relevance AI** → Analyze performance data and regenerate ad copy using AI agents and tools.
- **Channable** → Pull updated product feeds for ad refresh cycles.
- **Google Sheets** → Save optimized ad copy for review and documentation.
- **Slack** → Send a 30-day performance report to your marketing team.

🧩 Workflow Summary

| Step | Node | Description |
| --- | --- | --- |
| 1 | Monthly Schedule Trigger | Runs automatically on the 1st of each month to review the last 30 days of data. |
| 2 | Get Google Ads Performance Data | Fetches ad metrics via a GAQL query (impressions, clicks, CTR, etc.). |
| 3 | Calculate Performance Metrics | Groups results by ad group and theme to find top/bottom performers. |
| 4 | AI Performance Analysis (Relevance AI) | Generates human-readable insights and improvement suggestions. |
| 5 | Update Knowledge Base (Relevance AI) | Saves new insights for future ad copy training. |
| 6 | Get Updated Product Feed (Channable) | Retrieves the latest catalog items for ad regeneration. |
| 7 | Split Into Batches | Splits the feed into groups of 50 to avoid API rate limits. |
| 8 | Regenerate Ad Copy with Insights (Relevance AI) | Rewrites ad copy with the latest product and performance data. |
| 9 | Save Optimized Ads to Sheets | Writes output to your “Optimized Ads” Google Sheet. |
| 10 | Generate Performance Report | Summarizes the AI analysis, CTR trends, and key insights. |
| 11 | Email Performance Report (Slack) | Sends the report directly to your Slack channel/team. |

🧰 Requirements

Before running the workflow, make sure you have:

- A Google Ads account with API access and OAuth2 credentials.
- A Relevance AI project (with one Agent and one Tool set up).
- A Channable account with an API key and project feed.
- A Google Sheets document for saving results.
- A Slack webhook URL for sending performance summaries.

⚙️ Environment Variables

Add these environment variables to your n8n instance (via .env or the UI):

| Variable | Description |
| --- | --- |
| GOOGLE_ADS_API_VERSION | API version (e.g., v17). |
| GOOGLE_ADS_CUSTOMER_ID | Your Google Ads customer ID. |
| RELEVANCE_AI_API_URL | Base Relevance AI API URL (e.g., https://api.relevanceai.com/v1). |
| RELEVANCE_AGENT_PERFORMANCE_ID | ID of your Relevance AI Agent for performance analysis. |
| RELEVANCE_KNOWLEDGE_SOURCE_ID | Knowledge base or dataset ID used to store insights. |
| RELEVANCE_TOOL_AD_COPY_ID | Relevance AI tool ID for generating ad copy. |
| CHANNABLE_API_URL | Channable API endpoint (e.g., https://api.channable.com/v1). |
| CHANNABLE_COMPANY_ID | Your Channable company ID. |
| CHANNABLE_PROJECT_ID | Your Channable project ID. |
| FEED_ID | The feed ID for product data. |
| GOOGLE_SHEET_ID | ID of your Google Sheet to store optimized ads. |
| SLACK_WEBHOOK_URL | Slack Incoming Webhook URL for sending reports. |

🔐 Credentials Setup in n8n

| Credential | Type | Usage |
| --- | --- | --- |
| Google Ads OAuth2 API | OAuth2 | Authenticates your Ads API queries. |
| HTTP Header Auth (Relevance AI & Channable) | Header | Uses your API key as Authorization: Bearer <key>. |
| Google Sheets OAuth2 API | OAuth2 | Writes optimized ads to Sheets. |
| Slack Webhook | Webhook | Sends monthly reports to your team channel. |

🧠 Example AI Insight Output

```json
{
  "insights": [
    "Ad groups using 'vegan' and 'organic' messaging achieved +23% CTR.",
    "'Budget' keyword ads underperformed (-15% CTR).",
    "Campaigns featuring 'new' or 'bestseller' tags showed higher conversion rates."
  ],
  "recommendations": [
    "Increase ad spend for top-performing 'vegan' and 'premium' categories.",
    "Revise copy for 'budget' and 'sale' ads with low CTR."
  ]
}
```

📊 Output Example (Google Sheet)

| Product | Category | Old Headline | New Headline | CTR Change | Theme |
| --- | --- | --- | --- | --- | --- |
| Organic Protein Bar | Snacks | “Healthy Energy Anytime” | “Organic Protein Bar — 100% Natural Fuel” | +12% | Organic |
| Eco Face Cream | Skincare | “Gentle Hydration” | “Vegan Face Cream — Clean, Natural Moisture” | +17% | Vegan |

📤 Automation Flow

1. Runs automatically on the first of every month (cron: 0 0 1 * *).
2. Fetch Ads Data → Analyze & Learn → Generate New Ads → Save & Notify.
3. Every iteration updates the AI’s knowledge base — improving your campaigns progressively.

⚡ Scalability

- The flow is batch-optimized (50 items per request).
- Works for large ad accounts with up to 10,000 ad records.
- AI analysis and regeneration steps are asynchronous-safe (timeouts extended).
- Perfect for agencies managing multiple ad accounts — simply duplicate the workflow and update the environment variables per client.

🧩 Best Use Cases

- Monthly ad creative optimization for eCommerce stores.
- Marketing automation for Google Ads campaign scaling.
- Continuous-learning ad systems powered by Relevance AI insights.
- Agencies automating ad copy refresh cycles across clients.
💬 Slack Report Example

```
30-Day Performance Optimization Report
Date: 2025-10-01
Analysis Period: Last 30 days
Ads Analyzed: 842

Top Performing Themes
- Vegan: 5.2% CTR (34 ads)
- Premium: 4.9% CTR (28 ads)

Underperforming Themes
- Budget: 1.8% CTR (12 ads)

AI Insights
- "Vegan" and "Premium" themes outperform baseline by +22% CTR.
- "Budget" ads underperform due to lack of value framing.

Next Optimization Cycle: 2025-11-01
```

🛠️ Maintenance Tips

- Update your GAQL query occasionally to include new metrics or segments.
- Refresh Relevance AI tokens every 90 days (if required).
- Review generated ads in Google Sheets before pushing them live.
- Test webhook and OAuth connections after major n8n updates.

🧩 Import Instructions

1. Open n8n → Workflows → Import from File / JSON.
2. Paste this workflow JSON or upload the file.
3. Add all required environment variables and credentials.
4. Execute the first run manually to validate connections.
5. Once verified, enable scheduling for automatic monthly runs.

🧾 Credits

Developed for AI-driven marketing teams leveraging Google Ads, Channable, and Relevance AI to achieve continuous ad improvement — fully automated via n8n.
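Step 3 ("Calculate Performance Metrics") boils down to grouping the fetched Ads rows and computing CTR per theme. A minimal sketch; the `theme` field is an assumption about how ads are labelled in your account, and the workflow's own node may compute additional metrics:

```javascript
// Aggregate impressions and clicks per theme and derive CTR (%), so the
// AI analysis step receives one compact row per theme instead of raw ads.
function ctrByTheme(rows) {
  const acc = {};
  for (const r of rows) {
    const t = acc[r.theme] || (acc[r.theme] = { theme: r.theme, impressions: 0, clicks: 0 });
    t.impressions += r.impressions;
    t.clicks += r.clicks;
  }
  return Object.values(acc).map(t => ({
    ...t,
    // CTR as a percentage, rounded to two decimals; 0 when no impressions.
    ctr: t.impressions ? +(100 * t.clicks / t.impressions).toFixed(2) : 0,
  }));
}
```

Sorting the result by `ctr` then gives the top/bottom performers reported to Slack.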
by Surya Vardhan Yalavarthi
What this workflow does

This workflow automates the full machine learning lifecycle end-to-end, using Claude AI as the intelligent decision-maker at every stage. Send one HTTP request with a dataset URL and a business goal — and the pipeline handles everything from raw CSV to a human-approved, documented model ready for GitHub.

The pipeline runs in 5 sequential phases:

Phase 1 — Strategy

Claude Sonnet 4 receives the dataset URL, target variable, and business goal. It outputs a structured JSON plan covering feature ideas, algorithm choices, and the evaluation metric. A fallback parser ensures the pipeline continues even if the LLM output is slightly malformed.

Phase 2 — Data Engineering

The workflow fetches the CSV via HTTP Request and runs it through a custom quoted-field CSV parser (handles commas inside quoted name fields, common in datasets like Titanic). It drops rows with missing targets, imputes missing Age values, and encodes categorical columns (Sex, Embarked) into numeric form.

Phase 3 — Feature Engineering

Claude Haiku reviews the cleaned dataset and confirms the 3 best features to engineer. A Code node then creates FamilySize (SibSp + Parch + 1), IsAlone (binary flag), and TitleEncoded (extracted and mapped from the passenger name). A row-count validation gate ensures no data is silently lost.

Phase 4 — Training & Evaluation

Three algorithms are trained from scratch in pure JavaScript — no external ML libraries required:

- **Logistic Regression** via gradient descent (200 epochs)
- **Random Forest** via 10 bagged decision stumps
- **XGBoost** via gradient boosting with residual-based stump selection

Precision, recall, F1, and accuracy are computed for each. Claude Sonnet then acts as an LLM judge: it reads all three result sets alongside the original business goal and selects the winner with a one-sentence justification. A deterministic fallback (highest F1) runs if the LLM response fails to parse.
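To make the "pure JavaScript, no ML libraries" claim concrete, here is a minimal logistic-regression-by-gradient-descent sketch in the spirit of Phase 4. It is an illustration under simplified assumptions (no feature scaling, fixed learning rate), not the workflow's exact Code-node implementation:

```javascript
// Batch gradient descent on the logistic (cross-entropy) loss.
// X: array of feature vectors, y: array of 0/1 labels.
function trainLogistic(X, y, epochs = 200, lr = 0.1) {
  const n = X.length, d = X[0].length;
  const w = new Array(d).fill(0);
  let b = 0;
  const sigmoid = z => 1 / (1 + Math.exp(-z));
  for (let e = 0; e < epochs; e++) {
    const gw = new Array(d).fill(0);
    let gb = 0;
    for (let i = 0; i < n; i++) {
      const z = X[i].reduce((s, x, j) => s + x * w[j], b);
      const err = sigmoid(z) - y[i]; // dLoss/dz for cross-entropy + sigmoid
      for (let j = 0; j < d; j++) gw[j] += err * X[i][j];
      gb += err;
    }
    for (let j = 0; j < d; j++) w[j] -= lr * gw[j] / n; // averaged gradient step
    b -= lr * gb / n;
  }
  return {
    w, b,
    predict: x => (sigmoid(x.reduce((s, v, j) => s + v * w[j], b)) >= 0.5 ? 1 : 0),
  };
}
```

On the engineered Titanic features, the same loop structure would run over rows such as `[FamilySize, IsAlone, TitleEncoded, ...]` with `Survived` as `y`.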
Phase 5 — HITL Deployment

Claude Sonnet writes a structured MODEL_CARD.md covering the model overview, performance metrics, training data summary, feature engineering decisions, intended use, and limitations. The full results are then posted to a Slack channel as a formatted approval request. A human can review the results and reply to approve or reject deployment. An optional Supabase audit log records each phase transition with timestamp, phase name, status, and run ID.

Tested results

Tested on the Titanic dataset (891 rows):

| Model | F1 Score | Accuracy |
|---|---|---|
| Logistic Regression | 0.712 | 0.787 |
| Random Forest | 0.739 | 0.804 |
| XGBoost | 0.761 | 0.821 |

Claude correctly identified XGBoost as the winner and generated a complete model card in under 10 seconds.

What you need

| Requirement | Details |
|---|---|
| Anthropic API key | Used in P1 and P4 (Claude Sonnet 4) and P3 (Claude Haiku). Get one at console.anthropic.com |
| Slack Bot Token | OAuth bot token with the chat:write scope. The bot must be invited to the target channel via /invite @bot-name |
| Supabase project (optional) | For audit logging. Replace YOUR_PROJECT.supabase.co and YOUR_SUPABASE_SERVICE_ROLE_KEY in the 5 log nodes, or delete them |
| Public CSV URL | The dataset must be reachable by your n8n instance via HTTP GET |

Setup steps

1. Import the workflow JSON into your n8n instance
2. Add your Anthropic API credential and assign it to the 3 lmChatAnthropic nodes (P1, P3, P4)
3. Add your Slack Bot Token credential and assign it to the P5 Slack node. Replace YOUR_SLACK_CHANNEL_ID with your real channel ID (e.g. C012AB3CD)
4. (Optional) Set up the Supabase audit log table using the SQL in the setup sticky note, then replace the two placeholder values in the 5 log HTTP Request nodes
5. Activate the workflow and send a test request:

```
POST https://your-n8n-instance.com/webhook/mlops-v2
Content-Type: application/json

{
  "dataset_url": "https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv",
  "target_variable": "Survived",
  "business_goal": "Predict passenger survival to optimise lifeboat boarding policy"
}
```

Extending the workflow

The Phase 5 sticky note includes a tip for extending the HITL loop: add a Webhook node to receive the Slack approval callback and an If node to branch into a GitHub API call that commits the model card to a new repository. The model_card_b64 field (the Base64-encoded model card content) is already assembled in the payload, ready to be passed directly to the GitHub Contents API.

Node count & complexity

- **28 nodes** total (22 active, 6 sticky notes)
- **3 LLM calls** (Claude Sonnet ×2, Claude Haiku ×1)
- **5 JavaScript Code nodes** (all pure JS, no external libraries)
- **5 Supabase log nodes** (optional, deletable)
- **1 Slack node**
- **Fan-out connections** used to run log nodes as parallel dead-ends without blocking the main data path

Tags

AI, Machine Learning, MLOps, Claude AI, Slack, Automation, Data Science, HITL, LLM
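For reference, the Phase 2 quoted-field CSV parsing can be sketched in a few lines of plain JavaScript. This is a minimal single-line parser illustrating the technique (the workflow's own parser may also handle CRLF line endings and multi-line quoted fields):

```javascript
// Split one CSV line on commas while respecting double-quoted fields, so
// Titanic-style values like "Braund, Mr. Owen Harris" stay intact.
// An escaped quote ("") inside a quoted field becomes a literal quote.
function parseCsvLine(line) {
  const fields = [];
  let cur = '', inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++; } // escaped quote
      else if (ch === '"') inQuotes = false;                      // closing quote
      else cur += ch;
    } else if (ch === '"') inQuotes = true;                       // opening quote
    else if (ch === ',') { fields.push(cur); cur = ''; }          // field boundary
    else cur += ch;
  }
  fields.push(cur);
  return fields;
}
```

Applied line by line to the fetched CSV text, this yields the rows the workflow then cleans, imputes, and encodes.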
by Artem Boiko
A full-featured Telegram bot that accepts text descriptions, photos, or PDF floor plans and returns detailed cost estimates with a work breakdown. Powered by GPT-4 Vision / Gemini 2.0, vector search, and the open-source DDC CWICR database (55,000+ construction rates).

Who's it for

- **Contractors & Estimators** who need estimates from any input format
- **Construction managers** evaluating scope from site photos or drawings
- **Architects** getting quick cost feedback on floor plans
- **Real estate professionals** assessing renovation costs
- **Project managers** doing rapid feasibility checks via mobile

What it does

- Receives text / photo / PDF via Telegram
- Analyzes the input with AI (Gemini 2.0 Flash or GPT-4 Vision)
- Extracts work items with quantities and units
- Searches the DDC CWICR vector database for matching rates
- Generates a professional HTML report with a full cost breakdown
- Exports results as Excel or PDF
- Supports 9 languages: 🇩🇪 DE · 🇬🇧 EN · 🇷🇺 RU · 🇪🇸 ES · 🇫🇷 FR · 🇮🇹 IT · 🇵🇱 PL · 🇧🇷 PT · 🇺🇦 UK

How it works

```
TELEGRAM INPUT
  📝 Text Description │ 📷 Construction Photo │ 📄 PDF Floor Plan
        ↓
MAIN ROUTER
  Parse message → Detect content type → Route to handler (17 actions)
        ↓
  ├─ Text LLM:   parse works from text
  ├─ Vision API: analyze photo (GPT-4 / Gemini)
  └─ Vision PDF: read floor plan (Gemini 2.0)
        ↓
CALCULATION LOOP (for each work item)
  1️⃣ Transform query → 2️⃣ Optimize search → 3️⃣ Get embedding →
  4️⃣ Qdrant search → 5️⃣ Score results → 6️⃣ AI rerank → 7️⃣ Calculate
        ↓
OUTPUT
  📊 Telegram message │ 🌐 HTML Report │ 📑 Excel │ 📄 PDF
```

Input Types

| Type | Description | AI Used |
|------|-------------|---------|
| 📝 Text | Work lists, specifications, notes | OpenAI GPT-4 |
| 📷 Photo | Construction site photos (up to 4) | GPT-4 Vision / Gemini |
| 📄 PDF | Floor plans, architectural drawings | Gemini 2.0 Flash |

Route Actions (17 total)

| # | Action | Description |
|---|--------|-------------|
| 0 | show_lang | Language selection menu |
| 1 | ask_photo | Request photo upload |
| 2 | lang_selected | Save language preference |
| 3 | show_analyze | Photo analysis options |
| 4 | analyze | Run AI vision analysis |
| 5 | show_edit_menu | Edit work quantities |
| 6 | works_updated | After quantity change |
| 7 | ask_new_work | Add manual work item |
| 8 | start_calc | Start cost calculation |
| 9 | show_help | Display help message |
| 10 | view_details | Show resource details |
| 11 | export_excel | Generate CSV export |
| 12 | export_pdf | Generate PDF export |
| 13 | process_pdf | Analyze PDF floor plan |
| 14 | analyze_text | Parse text description |
| 15 | refine | Re-analyze with context |
| 16 | fallback | Handle unknown input |

Prerequisites

| Component | Requirement |
|-----------|-------------|
| n8n | v1.30+ with Telegram Trigger |
| Telegram Bot | Token from @BotFather |
| OpenAI API | For embeddings + text parsing |
| Gemini API | For Vision (photos/PDF) — or use GPT-4 Vision |
| Qdrant | Vector DB with DDC CWICR collections |
| DDC CWICR Data | github.com/datadrivenconstruction/DDC-CWICR |

Setup
Configure 🔑 TOKEN Node { "bot_token": "YOUR_TELEGRAM_BOT_TOKEN", "AI_PROVIDER": "gemini", "GEMINI_API_KEY": "YOUR_GEMINI_KEY", "OPENAI_API_KEY": "YOUR_OPENAI_KEY", "QDRANT_URL": "http://localhost:6333", "QDRANT_API_KEY": "YOUR_QDRANT_KEY" } 2. Vision Provider Selection AI_PROVIDER: "gemini" → Gemini 2.0 Flash (recommended for photos + PDF) AI_PROVIDER: "openai" → GPT-4 Vision (photos only) 3. n8n Credentials Settings → Credentials → Add → Telegram API Enter bot token, save Select credential in Telegram Trigger node 4. Qdrant Collections Load DDC CWICR embeddings for target languages (example for Russian): RU_STPETERSBURG_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR 5. Activate & Test Activate workflow Send /start to your bot Select language → send photo/text/PDF Features | Feature | Description | |---------|-------------| | 📷 Photo Analysis | GPT-4 Vision or Gemini 2.0 for site photos | | 📄 PDF Processing | Floor plan analysis with room extraction | | 📝 Text Parsing | Natural language work lists | | 🔍 Vector Search | Semantic matching via Qdrant + OpenAI embeddings | | 🤖 AI Reranking | LLM-based result scoring for accuracy | | ✏️ Inline Editing | Modify quantities via Telegram buttons | | 📊 HTML Report | Professional expandable report with KPIs | | 📑 Excel Export | CSV with full work breakdown | | 📄 PDF Export | HTML-based PDF document | | 🌍 9 Languages | Full UI + database localization | | 💾 Session State | Multi-turn conversation support | | 🔧 Refine Mode | Re-analyze with additional context | Example Workflow User: /start Bot: Language selection menu (9 options) User: Selects 🇷🇺 Russian Bot: "Отправьте фото, PDF или текстовое описание работ" User: Sends bathroom photo Bot: "📷 Анализ фото... 
⏳" Bot: Shows detected works: 🏠 Ванная комната — 4.5 m² Найдено 12 работ: Демонтаж плитки стен — 18 m² Демонтаж плитки пола — 4.5 m² Гидроизоляция пола — 4.5 m² Гидроизоляция стен — 8 m² Стяжка пола — 4.5 m² Укладка плитки стены — 18 m² Укладка плитки пол — 4.5 m² Установка унитаза — 1 шт Установка раковины — 1 шт Установка смесителя — 2 шт ... [✏️ Редактировать] [📊 Рассчитать] User: Taps 📊 Calculate Bot: Shows progress per item, then final result: ✅ Смета готова — 12 позиций 💰 Итого: ₽ 89,450 Работа: ₽ 35,200 (39%) Материалы: ₽ 48,750 (55%) Механизмы: ₽ 5,500 (6%) [📋 Детали] [↓ Excel] [↓ PDF] [↻ Заново] HTML Report Features KPI Cards:** Total cost, item count, labor days, cost breakdown % Expandable rows:** Click work item to show resources Resource tags:** Color-coded (Labor/Material/Machine) Scope of work:** Expandable detailed descriptions Quality indicators:** Match quality dots (high/medium/low) Responsive design:** Works on mobile and desktop Export buttons:** Expand/Collapse all Notes & Tips Photo tips:** Capture full room, include reference objects (doors, tiles) PDF support:** Works best with clear floor plans and room schedules Text input:** Supports lists, tables, free-form descriptions Rate accuracy:** Depends on DDC CWICR coverage for your region Session timeout:** User sessions persist across messages Extend:** Chain with CRM, project management, or notification tools Categories AI · Communication · Data Extraction · Document Ops Tags telegram-bot, construction, cost-estimation, gpt-4-vision, gemini, pdf-analysis, qdrant, vector-search, multilingual, html-report Author DataDrivenConstruction.io https://DataDrivenConstruction.io info@datadrivenconstruction.io Consulting & Training We help construction, engineering, and technology firms implement: AI-powered estimation systems (text, photo, PDF) Multi-channel bot integrations (Telegram, WhatsApp, Web) Vector database solutions for construction data Multilingual cost database deployment Contact us to 
test with your data or adapt to your project requirements. Resources DDC CWICR Database:** GitHub Qdrant Documentation:** qdrant.tech/documentation Gemini API:** aistudio.google.com n8n Telegram Trigger:** docs.n8n.io ⭐ Star us on GitHub! github.com/datadrivenconstruction/DDC-CWICR
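The retrieval part of the calculation loop (get embedding → Qdrant search → score results) can be sketched in JavaScript roughly as below. This is a minimal sketch, not the workflow's actual node code: the request shapes follow the OpenAI embeddings and Qdrant REST APIs, the collection name reuses the Russian example from Setup, and `filterByScore` is an illustrative helper, not a named node.

```javascript
// Sketch of one calculation-loop iteration: embed a work-item query,
// search Qdrant for matching DDC CWICR rates, pre-filter by score.
async function searchRates(workItem, { qdrantUrl, qdrantApiKey, openaiKey }) {
  // 1. Get a 3072-dim embedding for the query (matches the collection's dimension).
  const embRes = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${openaiKey}` },
    body: JSON.stringify({ model: "text-embedding-3-large", input: workItem }),
  });
  const vector = (await embRes.json()).data[0].embedding;

  // 2. Vector search against the DDC CWICR collection.
  const searchRes = await fetch(
    `${qdrantUrl}/collections/RU_STPETERSBURG_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR/points/search`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", "api-key": qdrantApiKey },
      body: JSON.stringify({ vector, limit: 5, with_payload: true }),
    }
  );
  return (await searchRes.json()).result;
}

// Pure helper: drop weak matches before handing candidates to the AI reranker.
function filterByScore(results, minScore = 0.6) {
  return results.filter((r) => r.score >= minScore);
}
```

The AI rerank step (6️⃣) then sees only candidates above the threshold, which keeps the LLM prompt short and the final rate selection more reliable.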
by Bhavy Shekhaliya
Overview

This n8n template demonstrates how to use AI to automatically analyze WordPress blog content and generate relevant, SEO-optimized tags for WordPress posts.

Use cases

Automate content tagging for WordPress blogs, maintain a consistent taxonomy across large content libraries, save hours of manual tagging work, or improve SEO by ensuring every post has relevant, searchable tags!

Good to know

- The workflow creates new tags automatically if they don't exist in WordPress.
- Tag generation is intelligent: it avoids duplicates by mapping to existing tag IDs.

How it works

- We fetch a WordPress blog post using the WordPress node, with pinned data enabled for testing.
- The post content is sent to GPT-4.1-mini, which analyzes it and generates 5–10 relevant tags using a structured output parser.
- All existing WordPress tags are fetched via HTTP Request to check for matches.
- A smart loop processes each AI-generated tag: if the tag already exists, it maps to the existing tag ID; if it's new, it creates the tag via the WordPress API.
- All tag IDs are aggregated, and the WordPress post is updated with the complete tag list.

How to use

- The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook, a schedule, or a WordPress webhook for new posts.
- Modify the "Fetch One WordPress Blog" node to fetch multiple posts or integrate with your publishing workflow.

Requirements

- WordPress site with REST API enabled
- OpenAI API

Customising this workflow

- Adjust the AI prompt to generate tags specific to your industry or SEO strategy
- Change the tag count (currently 5–10) based on your needs
- Add filtering logic to only tag posts in specific categories
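The "smart loop" described above reduces to a small piece of logic: map each AI-generated tag to an existing tag ID, or flag it for creation. A sketch of what an n8n Code node might do (the `existingTags` shape mirrors the WordPress REST API's `GET /wp-json/wp/v2/tags` response; `resolveTags` is an illustrative name, not the workflow's actual node):

```javascript
// Match AI-generated tags against existing WordPress tags (case-insensitive).
// Returns the IDs to attach directly and the names still needing creation
// via POST /wp-json/wp/v2/tags.
function resolveTags(aiTags, existingTags) {
  const byName = new Map(existingTags.map((t) => [t.name.toLowerCase(), t.id]));
  const ids = [];
  const toCreate = [];
  for (const tag of aiTags) {
    const id = byName.get(tag.toLowerCase());
    if (id !== undefined) ids.push(id); // reuse the existing tag ID
    else toCreate.push(tag);            // needs a create call first
  }
  return { ids, toCreate };
}
```

Once the `toCreate` names are created and their new IDs collected, the combined ID list is what gets sent in the post-update request.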
by Jainik Sheth
What is this?

This RAG workflow lets you build a smart chat assistant that answers user questions based on any collection of documents you provide. It automatically imports and processes files from Google Drive, stores their content in a searchable vector database, and retrieves the most relevant information to generate accurate, context-driven responses. The workflow manages chat sessions and keeps the document database current, making it adaptable for use cases like customer support, internal knowledge bases, or HR assistants.

How it works

1. Chat RAG Agent
- Uses OpenAI for responses, referencing only data from the vector store (i.e., the files uploaded to the Google Drive folder).
- Maintains chat history in Postgres using a session key from the chat input.

2. Data Pipeline (File Ingestion)
- Monitors Google Drive for new or updated files and automatically syncs them to the vector store.
- Downloads, extracts, and processes file content (PDFs, Google Docs).
- Generates embeddings and stores them in the Supabase vector store for retrieval.

3. Vector Store Cleanup
- Scheduled and manual routines remove duplicate or outdated entries from the Supabase vector store.
- Ensures only the latest and unique documents are available for retrieval.

4. File Management
- Handles folder and file creation, upload, and metadata assignment in Google Drive.
- Ensures files are organized and linked with their corresponding vector store entries.
Getting Started

1. Create and connect all relevant credentials: Google Drive, Postgres, Supabase, OpenAI.
2. Run the table creation nodes first to set up your database tables in Postgres.
3. Upload your documents through Google Drive (or swap in a different file storage solution).
4. The agent will process them automatically (chunking text, storing tabular data in Postgres).
5. Start asking questions that leverage the agent's multiple reasoning approaches.

Customization (optional)

This template provides a solid foundation that you can extend by:
- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

Note: if you're using different nodes (e.g., for file storage or the vector store), the integration may vary a little.

Prerequisites

- Google account (Google Drive)
- Supabase account
- OpenAI API key
- Postgres database
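The Vector Store Cleanup routine's core idea, keeping only the newest vector entry per source file, can be sketched as plain JavaScript. Field names like `file_id` and `created_at` are illustrative; match them to your actual Supabase documents table:

```javascript
// Given rows from the vector-store table, return the IDs of stale entries:
// everything except the most recent row for each source file.
function findStaleVectorIds(rows) {
  const latest = new Map(); // file_id -> newest row seen so far
  for (const row of rows) {
    const current = latest.get(row.file_id);
    if (!current || new Date(row.created_at) > new Date(current.created_at)) {
      latest.set(row.file_id, row);
    }
  }
  const keep = new Set([...latest.values()].map((r) => r.id));
  return rows.filter((r) => !keep.has(r.id)).map((r) => r.id);
}
```

In the workflow, the returned IDs would feed a Supabase delete node, so retrieval only ever sees the current version of each document.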
by Neeraj Chouhan
Good to know

This workflow creates a WhatsApp chatbot that answers questions using your own PDFs through RAG (Retrieval-Augmented Generation). Every time you upload a document to Google Drive, it is processed into embeddings and stored in Pinecone, allowing the bot to respond with accurate, context-aware answers directly on WhatsApp.

Who is this for?

- Anyone building a custom WhatsApp chatbot
- Businesses wanting a private knowledge-based assistant
- Teams that want their documents to be searchable via chat
- Creators/coaches who want automated Q&A from their PDFs
- Developers who want a no-code RAG pipeline using n8n

What problem is this workflow solving?

Answering the same questions over WhatsApp by hand doesn't scale, and generic chatbots can't see your private documents. This workflow grounds every answer in your own PDFs without writing any code.

What this workflow does

✅ Monitors a Google Drive folder for new PDFs
✅ Extracts and splits text into chunks
✅ Generates embeddings using OpenAI/Gemini
✅ Stores embeddings in a Pinecone vector index
✅ Receives user questions via WhatsApp
✅ Retrieves the most relevant info using vector search
✅ Generates a natural response using an AI Agent
✅ Sends the answer back to the user on WhatsApp

How it works

1️⃣ Google Drive Trigger detects a new or updated PDF
2️⃣ The file is downloaded and its text is split into chunks
3️⃣ Embeddings are generated and stored in Pinecone
4️⃣ WhatsApp Trigger receives a user's question
5️⃣ The question is embedded and matched against Pinecone
6️⃣ The AI Agent uses the retrieved context to generate a response
7️⃣ The message is delivered back to the user on WhatsApp

How to use

1. Connect your Google Drive account
2. Add your Pinecone API key and index name
3. Add your OpenAI/Gemini API key
4. Connect your WhatsApp trigger + sender nodes
5. Upload a sample PDF to your Drive folder
6. Send a test WhatsApp message to see the bot reply

Requirements

✅ n8n cloud or self-hosted
✅ Google Drive account
✅ Pinecone vector database
✅ OpenAI or Gemini API key
✅ WhatsApp integration (Cloud API or provider)

Customizing this workflow

🟢 Change the Drive folder or add file-type filters
🟢 Adjust chunk size or embedding model
🟢 Modify the AI prompt for tone, style, or restrictions
🟢 Add memory, logging, or analytics
🟢 Add multiple documents or delete old vector entries
🟢 Swap the AI model (OpenAI ↔ Gemini ↔ Groq, etc.)
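Step 2️⃣ above (splitting extracted text into chunks) is normally handled by n8n's text splitter nodes, but conceptually it works like this sketch. The chunk size and overlap are illustrative defaults you can tune, and overlap matters: it keeps context that straddles a chunk boundary retrievable.

```javascript
// Split text into fixed-size chunks with overlap between neighbors,
// so sentences near a boundary appear in both adjacent chunks.
function splitIntoChunks(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step back by `overlap` chars
  }
  return chunks;
}
```

Each chunk is then embedded separately and upserted into Pinecone; smaller chunks give more precise retrieval, while larger ones preserve more surrounding context per match.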
by Margo Rey
AI-Powered Email Generation with MadKudu, Sent via Outreach.io

This workflow researches prospects using MadKudu MCP, generates personalized emails with OpenAI, and syncs them to Outreach with automatic sequence enrollment. It's for SDRs and sales teams who want to scale personalized outreach by automating research and email generation while maintaining quality.

✨ Who it's for

- Sales Development Representatives (SDRs) doing cold outreach
- Business Development teams needing personalized emails at scale
- RevOps teams wanting to automate prospect research workflows
- Sales teams using Outreach for email sequences

🔧 How it works

1. Input Email & Research: Enter the prospect's email via the chat trigger. The workflow extracts the email and generates a comprehensive account brief using MadKudu MCP account-brief-instructions.

2. Deep Research & Email Generation: The AI Agent performs 6 research steps using MadKudu MCP tools:
- Account details (hiring, partnerships, tech stack, sales motion, risk)
- Top users in the account (for name-dropping opportunities)
- Contact details (role, persona, engagement)
- Contact web search (personal interests, activities)
- Contact picture web search (LinkedIn profile insights)
- Company value prop research
The AI generates 5 different email angles and selects the best one based on relevance.

3. Outreach Integration:
- Checks whether the prospect exists in Outreach by email.
- If they exist: updates the custom field (custom49) with the generated email.
- If new: creates a new prospect with the email in the custom field.
- Enrolls the prospect in the specified email sequence (ID 781) using the mailbox (ID 51).
- Waits 30 seconds and verifies successful enrollment.

📋 How to set up

1. Set your OpenAI credentials. Required for AI research and email generation.
2. Create an n8n Variable named madkudu_api_key to store your MadKudu API key. Used by the MadKudu MCP tool to access account research capabilities.
3. Create an n8n Variable named my_company_domain to store your company domain. Used for context in email generation and value prop research.
4. Create an OAuth2 API credential to connect your Outreach account. Used to create/update prospects and enroll them in sequences.
5. Configure Outreach settings:
- Update the Outreach Mailbox ID (currently set to 51) in the "Configure Outreach Settings" node.
- Update the Outreach Sequence ID (currently set to 781) in the same node.
- Adjust the custom field name if using a field other than custom49.

🔑 How to connect Outreach

1. In n8n, add a new OAuth2 API credential and copy the callback URL.
2. Go to the Outreach developer portal.
3. Click "Add" to create a new app.
4. In Feature selection, add Outreach API (OAuth).
5. In API Access (OAuth), set the redirect URI to the n8n callback URL.
6. Select the following scopes: accounts.read, accounts.write, prospects.read, prospects.write, sequences.read.
7. Save in Outreach.
8. Enter the Outreach Application ID into the n8n Client ID field and the Outreach Application Secret into the n8n Client Secret field.
9. Save in n8n and connect your Outreach account via OAuth.

✅ Requirements

- MadKudu account with access to an API Key
- Outreach Admin permissions to create an app
- OpenAI API Key

🛠 How to customize the workflow

- Change the research steps: modify the AI Agent prompt to adjust the 6 research steps or add additional MadKudu MCP tools.
- Update the Outreach configuration: change the Mailbox ID (51) and Sequence ID (781) in the "Configure Outreach Settings" node, and update the custom field mapping if using a field other than custom49.
- Modify email generation: adjust the prompt guidelines, tone, or angle priorities in the "AI Email Generator" node.
- Change the trigger: swap the chat trigger for a Schedule, a Webhook, or integrate with your CRM to automate prospect input.
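The Outreach lookup/update step can be sketched as follows. Outreach's API uses the JSON:API format; treat the `filter[emails]` parameter and field names here as assumptions to verify against the Outreach API docs, and note that in the workflow itself n8n's OAuth2 credential manages the bearer token:

```javascript
const OUTREACH_API = "https://api.outreach.io/api/v2"; // standard Outreach base URL

// Build the JSON:API body for creating (POST /prospects) or updating
// (PATCH /prospects/:id) a prospect, storing the generated email in custom49.
function buildProspectPayload(email, generatedEmail, prospectId = null) {
  const payload = {
    data: {
      type: "prospect",
      attributes: { emails: [email], custom49: generatedEmail },
    },
  };
  if (prospectId !== null) payload.data.id = prospectId; // update case needs the id
  return payload;
}

// Assumed lookup call: filter prospects by email address.
async function findProspect(email, token) {
  const res = await fetch(
    `${OUTREACH_API}/prospects?filter[emails]=${encodeURIComponent(email)}`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  const body = await res.json();
  return body.data.length ? body.data[0] : null;
}
```

The workflow branches on `findProspect`'s result: a hit means PATCH with the existing ID, a miss means POST, and either way the prospect ID then feeds the sequence-enrollment request.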
by Aadarsh Jain
Document Analyzer and Q&A Workflow

AI-powered document and web page analysis using n8n and a GPT model. Ask questions about any local file or web URL and get intelligent, formatted answers.

Who's it for

Perfect for researchers, developers, content analysts, students, and anyone who needs quick insights from documents or web pages without uploading files to external services.

What it does

- **Analyzes local files**: PDF, Markdown, Text, JSON, YAML, Word docs
- **Fetches web content**: Documentation sites, blogs, articles
- **Answers questions**: Using a GPT model with structured, well-formatted responses

Input format: `path_or_url | your_question`

Examples:
- `/Users/docs/readme.md | What are the installation steps?`
- `https://n8n.io | What is n8n?`

Setup

1. Import the workflow into n8n
2. Add your OpenAI API key to credentials
3. Link the credential to the "OpenAI Document Analyzer" node
4. Activate the workflow
5. Start chatting!

Customize

- Change AI model → Edit the "OpenAI Document Analyzer" node (switch to gpt-4o-mini for cost savings)
- Adjust content length → Modify maxLength in the "Process Document Content" node (default: 15000 chars)
- Add file types → Update the supportedTypes array in the "Parse Document & Question" node
- Increase timeout → Change the timeout value in the "Fetch Web Content" node (default: 30s)
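The `path_or_url | your_question` convention implies a small parsing step like the sketch below. The function name and this particular `supportedTypes` list are illustrative, not the actual contents of the "Parse Document & Question" node:

```javascript
// Illustrative file extensions; the real node keeps its own supportedTypes array.
const supportedTypes = ["pdf", "md", "txt", "json", "yaml", "yml", "doc", "docx"];

// Split the chat message into a target (path or URL) and a question,
// and decide whether to fetch over HTTP or read a local file.
function parseDocumentRequest(message) {
  const sep = message.indexOf("|");
  if (sep === -1) throw new Error('Expected format: "path_or_url | your_question"');
  const target = message.slice(0, sep).trim();
  const question = message.slice(sep + 1).trim();
  const isUrl = /^https?:\/\//i.test(target);
  const ext = target.split(".").pop().toLowerCase();
  return { target, question, isUrl, supported: isUrl || supportedTypes.includes(ext) };
}
```

Given `"/Users/docs/readme.md | What are the installation steps?"`, this yields a local `.md` target with the question separated out, ready for the file-reading branch of the workflow.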