by Rahul Joshi
## 📊 Description

Ensure your GitHub repositories stay configuration-accurate and documentation-compliant with this intelligent AI-powered validation workflow. 🤖 This automation monitors repository updates, compares configuration files against documentation references, detects inconsistencies, and alerts your team instantly, streamlining DevOps and compliance reviews.

## ⚡ What This Template Does

- **Step 1:** Triggers automatically on GitHub push or pull_request events. 🔄
- **Step 2:** Fetches both configuration files (config/app-config.json and faq-config.json) from the repository. 📂
- **Step 3:** Uses GPT-4o-mini to compare configurations and detect mismatches, missing keys, or deprecated fields (see the sketch below). 🧠
- **Step 4:** Categorizes issues by severity (critical, high, medium, or low) and generates actionable recommendations. 🚨
- **Step 5:** Logs all discrepancies to Google Sheets for tracking and audit purposes. 📑
- **Step 6:** Sends Slack alerts summarizing key issues and linking to the full report. 💬

## Key Benefits

✅ Prevents production incidents due to config drift
✅ Ensures documentation stays in sync with code changes
✅ Reduces manual review effort with AI-driven validation
✅ Improves team response with Slack-based alerts
✅ Maintains audit logs for compliance and traceability

## Features

- Real-time GitHub webhook integration
- AI-powered config comparison using GPT-4o-mini
- Severity-based issue classification
- Automated Google Sheets logging
- Slack alerts with detailed issue context
- Error handling for malformed JSON or parsing issues

## Requirements

- GitHub OAuth2 credentials with repo and webhook permissions
- OpenAI API key (GPT-4o-mini or a compatible model)
- Google Sheets OAuth2 credentials
- Slack API token with chat:write permissions

## Target Audience

- DevOps teams ensuring consistent configuration across environments
- Engineering leads maintaining documentation accuracy
- QA and compliance teams tracking configuration changes and risks

## Setup Instructions

1. Create GitHub OAuth2 credentials and enable webhook access.
2. Connect your OpenAI API key under credentials.
3. Add your Google Sheets and Slack integrations.
4. Update the file paths (config/app-config.json and faq-config.json) if your repo uses different names.
5. Activate the workflow; it will start validating on every push or PR. 🚀
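As an illustration of the kind of mismatch detection Step 3 targets, here is a minimal n8n Code-node sketch that diffs two parsed config objects for missing keys. The `appConfig` and `faqConfig` input names are assumptions for this example; the template itself delegates the comparison to GPT-4o-mini, so treat this as a conceptual pre-check, not the workflow's actual node code.

```javascript
// Hypothetical n8n Code node: flag keys present in one config but not the other.
// Assumed input item shape: { appConfig: {...}, faqConfig: {...} }
const { appConfig, faqConfig } = $input.first().json;

function missingKeys(source, target, label) {
  return Object.keys(source)
    .filter((key) => !(key in target))
    .map((key) => ({ key, missingFrom: label, severity: 'high' }));
}

const issues = [
  ...missingKeys(appConfig, faqConfig, 'faq-config.json'),
  ...missingKeys(faqConfig, appConfig, 'app-config.json'),
];

// Each issue becomes one item for downstream Sheets logging / Slack alerts.
return issues.map((issue) => ({ json: issue }));
```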
by Trung Tran
# Create AI-Powered Chatbot for Candidate Evaluation on Slack

> This workflow connects a Slack chatbot with AI agents and Google Sheets to automate candidate resume evaluation. It extracts resume details, identifies the applied job from the message, fetches the correct job description, and provides a summarized evaluation via Slack and a tracking sheet. Perfect for HR teams using Slack.

## Who's it for

This workflow is designed for:
- **HR teams, recruiters, and hiring managers**
- Working in software or tech companies using Slack, Google Sheets, and n8n
- Who want to automate candidate evaluation based on uploaded profiles and applied job positions

## How it works / What it does

This workflow is triggered when a Slack user mentions the HR bot and attaches a candidate profile PDF. The workflow performs the following steps:

1. **Trigger from Slack mention:** A user mentions the bot in Slack with a message like `@HRBot Please evaluate this candidate for the AI Engineer role.` (with a PDF attached).
2. **Input validation:** If no file is attached, the bot replies: "Please upload the candidate profile file before sending the message." (A sketch of this check appears after the customization table below.)
3. **Extract candidate profile:** Downloads the attached PDF from Slack and uses Extract from File to parse the resume into text.
4. **Profile analysis (AI agent):** Sends the resume text and message to the Profile Analyzer Agent, which identifies the candidate's name, email, and summary, plus the applied position (from the message), and looks up the job description PDF URL using Google Sheets.
5. **Job description retrieval:** Downloads and parses the matching JD PDF.
6. **HR evaluation (AI agent):** Sends both the candidate profile and job description to the HR Expert Agent and receives a summarized fit evaluation and insights.
7. **Output and logging:** Sends the evaluation result back to Slack in the original thread and updates a Google Sheet with evaluation data for tracking.

## How to set up

**Slack setup**
- Create a Slack bot and install it into your workspace
- Enable the app_mention event and generate a bot token
- Connect Slack to n8n using Slack Bot credentials

**Google Sheets setup**
- Create a sheet mapping Position Title → Job Description URL
- Create another sheet for logging evaluation results

**n8n setup**
- Add a Webhook Trigger for Slack mentions
- Connect Slack, Google Sheets, and GPT-4 credentials
- Set up the agents (Profile Analyzer Agent, HR Expert Agent) with appropriate prompts

**Deploy & test**
- Mention your bot in Slack with a message and file
- Confirm the reply and the entry in the evaluation tracking sheet

## Requirements

- n8n (self-hosted or cloud)
- Slack app with bot token
- OpenAI or Azure OpenAI account (for GPT-4)
- Google Sheets (2 sheets: job mapping + evaluation log)
- Candidate profiles in PDF format
- Defined job titles and descriptions

## How to customize the workflow

You can easily adapt this workflow to your team's needs:

| Customization Area | How to Customize |
|---|---|
| Job Mapping Source | Replace Google Sheet with Airtable or Notion DB |
| JD Format | Use Markdown or inline descriptions instead of PDF |
| Evaluation Output Format | Change from Slack message to Email or Notion update |
| HR Agent Prompt | Customize to match your company tone or include scoring rubrics |
| Language Support | Add support for bilingual input/output (e.g., Vietnamese & English) |
| Workflow Trigger | Trigger from slash command or form instead of @mention |
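As a sketch of the input-validation step (step 2 above), the check an n8n Code node could run against the incoming Slack app_mention event might look like the following. Slack message events carry uploads in an optional `files` array, but the exact item shape depends on your trigger node, so treat the field paths here as assumptions rather than the template's exact node code.

```javascript
// Hypothetical validation for the Slack app_mention payload.
const event = $input.first().json.event ?? {};

const files = event.files ?? [];
const hasPdf = files.some((f) => f.mimetype === 'application/pdf');

return [
  {
    json: {
      channel: event.channel,
      threadTs: event.ts, // reply in the original thread
      valid: hasPdf,
      reply: hasPdf
        ? null
        : 'Please upload the candidate profile file before sending the message.',
    },
  },
];
```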
by Swot.AI
## Description

This workflow lets you upload a PDF document and automatically analyze it with AI. It extracts the text, summarizes the content, flags key clauses or risks, and then delivers the results via Gmail while also storing them in Google Sheets for tracking. It's designed for legal, compliance, or contract review use cases, but can be adapted for any document analysis scenario.

Test it here: PDF Document Assistant

## 🔹 Instructions / Setup

1. **Webhook Input:** Upload a PDF document by sending it to the webhook URL.
2. **Extract from File:** The workflow extracts text from the uploaded PDF.
3. **Pre-processing (Code node):** Cleans and formats the extracted text to remove unwanted line breaks or artifacts (see the sketch below).
4. **Basic LLM Chain (OpenAI):** Summarizes or restructures document content using OpenAI. Adjust the prompt inside to fit your analysis needs (summary, risk flags, clause extraction).
5. **Post-processing (Code node):** Further structures the AI output into a clean format (JSON, HTML, or plain text).
6. **AI Agent (OpenAI):** Runs deeper analysis, answers questions, and extracts insights.
7. **Gmail:** Sends the results to a recipient. Configure Gmail credentials and set your recipient address.
8. **Google Sheets:** Appends results to a Google Sheet for record-keeping or audits.
9. **Respond to Webhook:** Sends a quick acknowledgment back to confirm the document was received.

## 🔹 Credentials Needed

- OpenAI API key (for Chat Model + Agent)
- Gmail account (OAuth2)
- Google Sheets account (OAuth2)

## 🔹 Example Use Case

Upload a contract PDF → workflow extracts clauses → AI flags risky terms → Gmail sends formatted summary → results stored in Google Sheets.
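A minimal sketch of what the pre-processing Code node (step 3) could do, assuming the Extract from File node outputs the raw PDF text in a `text` field; the field name and the cleanup rules are illustrative, not the template's exact node code.

```javascript
// Hypothetical pre-processing: normalize PDF-extracted text.
const raw = $input.first().json.text ?? '';

const cleaned = raw
  .replace(/\r\n/g, '\n')          // normalize line endings
  .replace(/-\n(?=[a-z])/g, '')    // rejoin words hyphenated across lines
  .replace(/\n{3,}/g, '\n\n')      // collapse runs of blank lines
  .replace(/[ \t]+/g, ' ')         // squeeze repeated spaces and tabs
  .trim();

return [{ json: { text: cleaned } }];
```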
by Daniel Shashko
## How it Works

This workflow transforms natural language queries into research reports through a five-stage AI pipeline. When triggered via webhook (typically from Google Sheets using the companion google-apps-script.js GitHub gist), it first checks the Redis cache for instant results. For new queries, GPT-4o breaks complex questions into focused sub-queries and optimizes them for search, then uses Bright Data's MCP Tool to find the top 5 credible sources (official sites, news, financial reports). URLs are scraped in parallel, bypassing bot detection. GPT-4o extracts structured data from each source: answers, facts, entities, sentiment, quotes, and dates. GPT-4o-mini validates source credibility and filters unreliable content. Valid results aggregate into a final summary with confidence scores, key insights, and extended analysis. Results are cached for 1 hour and delivered via webhook, Slack, email, and DataTable, all in 30-90 seconds with 60 requests/minute rate limiting.

## Who is this for?

- Research teams needing automated multi-source intelligence
- Content creators and journalists requiring fact-checked information
- Due diligence professionals conducting competitive intelligence
- Google Sheets power users wanting AI research in spreadsheets
- Teams managing large research volumes needing caching and rate limiting

## Setup Steps

Setup time: 30-45 minutes

**Requirements:**
- Bright Data account (Web Scraping API + MCP token)
- OpenAI API key (GPT-4o and GPT-4o-mini access)
- Redis instance
- Slack workspace (optional)
- SMTP email provider (optional)
- Google account (optional, for Sheets integration)

**Core Setup:**
1. Get a Bright Data Web Scraping API token and MCP token
2. Get an OpenAI API key
3. Set up a Redis instance
4. Configure the critical nodes:
   - Webhook Entry: Add Header Auth token
   - Bright Data MCP Tool: Add MCP endpoint with token
   - Parallel Web Scraping: Add Bright Data API credentials
   - Redis Nodes: Add connection credentials
   - All GPT Nodes: Add OpenAI API key (5 nodes)
   - Slack/Email: Add credentials if using

**Google Sheets Integration:**
1. Create a Google Sheet
2. Open Extensions → Apps Script
3. Paste the companion google-apps-script.js code
4. Update the webhook URL and auth token
5. Save and authorize

**Test** (a sketch of calling this endpoint follows at the end of this description):

```json
{"prompt": "What is the population of Tokyo?", "source": "Test", "language": "English"}
```

## Customization Guidance

- **Source Count:** Change from 5 to 3-10 URLs per query
- **Cache Duration:** Adjust from 1 hour to 24 hours for stable info
- **Rate Limits:** Modify 60/minute based on usage needs
- **Character Limits:** Adjust the 400-char main answer to 200-1000
- **AI Models:** Swap GPT-4o for Claude or use GPT-4o-mini for all stages
- **Geographic Targeting:** Add more regions beyond us/il
- **Output Channels:** Add Notion, Airtable, Discord, Teams
- **Temperature:** Lower (0.1-0.2) for facts, higher (0.4-0.6) for analysis

Once configured, this workflow handles all web research, from fact-checking to complex analysis, delivering validated intelligence in seconds with automatic caching.

Built by Daniel Shashko
Connect on LinkedIn
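For reference, firing the test payload above from outside Google Sheets could look like this plain fetch call; the URL and the `X-Auth-Token` header name are placeholders for whatever you configured on the Webhook Entry node's Header Auth credential.

```javascript
// Hypothetical test call to the research webhook (Node.js 18+, run as an ES module).
const WEBHOOK_URL = 'https://your-n8n-host/webhook/deep-research'; // placeholder
const AUTH_TOKEN = 'your-header-auth-token';                        // placeholder

const response = await fetch(WEBHOOK_URL, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Auth-Token': AUTH_TOKEN, // must match the Header Auth credential
  },
  body: JSON.stringify({
    prompt: 'What is the population of Tokyo?',
    source: 'Test',
    language: 'English',
  }),
});

console.log(await response.json());
```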
by Gabriela Macovei
# WhatsApp Receipt OCR & Data Extraction Suite

Categories: Accounting Automation • OCR Processing • AI Data Extraction • Business Tools

This workflow transforms WhatsApp into a fully automated receipt-processing system using advanced OCR, multi-model AI parsing, and structured data storage. By combining LlamaParse, Claude (OpenRouter), Gemini, Google Sheets, and Twilio, it eliminates manual data entry and delivers instant, reliable receipt digitization for any business.

## What This Workflow Does

When a user sends a receipt photo or PDF via WhatsApp, the automation:
1. Receives the file through Twilio WhatsApp
2. Uploads and parses it with LlamaParse (high-res OCR + invoice preset)
3. Extracts structured data using Claude + Gemini + a strict JSON parser
4. Cleans and normalizes the data (dates, ABN, vendor, tax logic)
5. Uploads the receipt to Google Drive
6. Logs the extracted fields into a Google Sheet
7. Replies to the user on WhatsApp with the extracted details
8. Asks for confirmation via quick-reply buttons
9. Updates the Google Sheet based on user validation

The result is a fast, scalable, hands-off system for converting raw receipt photos into clean, structured accounting data.

## Key Benefits

- **No friction for users:** receipts are submitted simply by sending a WhatsApp message.
- **High-accuracy OCR:** LlamaParse extracts text, tables, totals, vendors, tax, and ABN with impressive reliability.
- **Enterprise-grade data validation:** complex logic ensures the correct interpretation of GST, included taxes, or unidentified tax amounts.
- **Multi-model extraction:** Claude and Gemini both analyze the OCR output for more reliable results, with one primary LLM and a secondary fallback.
- **Hands-off accounting:** every receipt becomes a standardized row in Google Sheets.
- **Two-way WhatsApp communication:** users can confirm or reject extracted data instantly.
- **Scalable architecture:** perfect for businesses handling dozens or thousands of receipts monthly.

## How It Works (Technical Overview)

### 1. Twilio → Webhook Trigger
The workflow starts when a WhatsApp message containing a media file hits your Twilio webhook.

### 2. Initial Google Sheets Logging
The MessageSid is appended to your tracking sheet to ensure every receipt is traceable.

### 3. LlamaParse OCR
The file is sent to LlamaParse with the invoice preset, high-resolution OCR, and table extraction enabled. The workflow checks job completion before moving further.

### 4. LLM Data Extraction
The OCR markdown is analyzed using:
- Claude Sonnet 4.5 (via OpenRouter)
- Gemini 2.5 Pro
- A strict structured JSON output parser
- Custom JS cleanup logic (a simplified sketch appears below)

The system extracts:
- Vendor
- Cost
- Tax (with multi-rule Australian GST logic)
- Currency
- Date (parsed + normalized)
- ABN (validated and digit-normalized)

### 5. Google Drive Integration
The uploaded receipt is stored, shared, and linked back to the record in Sheets.

### 6. Google Sheets Update
Fields are appended/updated following a clean schema: Vendor, Cost, Tax, Date, Currency, ABN, public Drive link, and Status (Confirmed / Not confirmed).

### 7. User Response Flow
The user receives a summary of the extracted data via WhatsApp. Buttons allow them to approve or reject its accuracy, and the Google Sheet updates accordingly.
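As an illustration of the "custom JS cleanup logic" in step 4, a Code-node sketch that normalizes the date and ABN fields might look like this. The field names mirror the schema above, but the template's actual rules (especially the multi-rule GST logic) are more involved; treat this as a simplified assumption.

```javascript
// Hypothetical cleanup for LLM-extracted receipt fields.
const data = $input.first().json;

// Normalize ABN: keep digits only; a valid Australian ABN has 11 digits.
const abnDigits = String(data.abn ?? '').replace(/\D/g, '');
const abn = abnDigits.length === 11 ? abnDigits : null;

// Normalize date to ISO 8601, tolerating common dd/mm/yyyy receipt formats.
function toIsoDate(value) {
  const m = String(value ?? '').match(/(\d{1,2})[\/\-.](\d{1,2})[\/\-.](\d{2,4})/);
  if (!m) return null;
  const [, d, mo, y] = m;
  const year = y.length === 2 ? `20${y}` : y;
  return `${year}-${mo.padStart(2, '0')}-${d.padStart(2, '0')}`;
}

return [
  {
    json: {
      ...data,
      abn,
      date: toIsoDate(data.date),
      cost: Number.parseFloat(data.cost) || null,
    },
  },
];
```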
## Target Audience

This workflow is ideal for:
- Accounting & bookkeeping firms
- Outsourced finance departments
- Small businesses tracking expenses
- Field workers submitting receipts
- Automation agencies offering done-for-you (DFY) systems
- CFOs wanting real-time expense visibility

## Use Cases

- Expense reconciliation
- Automated bookkeeping
- Receipt digitization & compliance
- Real-time employee expense submission
- Multi-client automation at accounting agencies

## Required Integrations

- **Twilio WhatsApp** (Business API number + webhook)
- **LlamaParse API**
- **OpenRouter (Claude Sonnet)**
- **Google Gemini API**
- **Google Drive**
- **Google Sheets**

## Setup Instructions (High-Level)

1. Import the n8n workflow.
2. Connect your Twilio WhatsApp account.
3. Add API credentials for LlamaParse, OpenRouter, Google Gemini, Google Drive, and Google Sheets.
4. Create your target Google Sheet.
5. Configure your WhatsApp webhook URL in Twilio.
6. Test with a sample receipt.

## Why This System Works

- Users send receipts using a tool they already use daily (WhatsApp).
- LlamaParse provides state-of-the-art OCR for low-quality receipts.
- Using multiple LLMs drastically increases accuracy for vendor, ABN, and tax extraction.
- Advanced normalization logic ensures data is clean and accounting-ready.
- Google Sheets enables reliable storage, reporting, and future integrations.
- End-to-end automation replaces hours of manual work with instant processing.

## Watch My Complete Build Process

Want to see exactly how I built this entire system from scratch? I walk through the complete development process on my YouTube channel.
by Rahul Joshi
Automatically detect, classify, and document GitHub API errors using AI. This workflow connects GitHub, OpenAI (GPT-4o), Airtable, Notion, and Slack to build a real-time, searchable API error knowledge base, helping engineering and support teams respond faster, stay aligned, and maintain clean documentation. ⚙️📘💬

## 🚀 What This Template Does

1️⃣ Triggers on new or updated GitHub issues (API-related). 🪝
2️⃣ Extracts key fields (title, body, repo, and link). 📄
3️⃣ Classifies issues using OpenAI GPT-4o, identifying error type, category, root cause, and severity. 🤖
4️⃣ Validates & parses AI output into a structured JSON format (see the sketch below). ✅
5️⃣ Creates or updates organized FAQ-style entries in Airtable for quick lookup. 🗂️
6️⃣ Logs detailed entries into Notion, maintaining an ongoing issue knowledge base. 📘
7️⃣ Notifies the right Slack team channel (DevOps, Backend, API, Support) with concise summaries. 💬
8️⃣ Tracks & prevents duplicates, keeping your error catalog clean and auditable. 🔄

## 💡 Key Benefits

✅ Converts unstructured GitHub issues into AI-analyzed documentation
✅ Centralizes API error intelligence across teams
✅ Reduces time-to-resolution for recurring issues
✅ Maintains synchronized records in Airtable & Notion
✅ Keeps DevOps and Support instantly informed through Slack alerts
✅ Fully automated, scalable, and low-cost using GPT-4o

## ⚙️ Features

- Real-time GitHub trigger for API or backend issues
- GPT-4o-based AI classification (error type, cause, severity, confidence)
- Smart duplicate-prevention logic
- Bi-directional sync to Airtable + Notion
- Slack alerts with contextual AI insights
- Modular design that is easy to extend with Jira, Teams, or email integrations

## 🧰 Requirements

- GitHub OAuth2 credentials
- OpenAI API key (GPT-4o recommended)
- Airtable base & table IDs (with fields like Error Code, Category, Severity, Root Cause)
- Notion integration with database access
- Slack bot token with chat:write scope

## 👥 Target Audience

- Engineering & DevOps teams managing APIs
- Customer support & SRE teams maintaining FAQs
- Product managers tracking recurring API issues
- SaaS orgs automating documentation & error visibility

## 🪜 Step-by-Step Setup Instructions

1️⃣ Connect your GitHub account and enable the "issues" webhook event.
2️⃣ Add OpenAI credentials (GPT-4o model for classification).
3️⃣ Create an Airtable base with the fields: Error Code, Category, Root Cause, Severity, Confidence.
4️⃣ Configure your Notion database with a matching schema and access.
5️⃣ Set up Slack credentials and choose your alert channels.
6️⃣ Test with a sample GitHub issue to validate the AI classification.
7️⃣ Enable the workflow and enjoy continuous AI-powered issue documentation!
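A minimal sketch of the validate-and-parse step (4️⃣), assuming the classifier returns a JSON string with fields matching the Airtable schema listed under Requirements; models occasionally wrap JSON in markdown fences, so the sketch strips those first. This is illustrative, not the template's exact node code.

```javascript
// Hypothetical n8n Code node: validate the classifier's JSON output.
const raw = $input.first().json.output ?? '';

// Strip markdown code fences the model may add around JSON.
const text = raw.replace(/^\s*```(?:json)?\s*/i, '').replace(/\s*```\s*$/, '');

let parsed;
try {
  parsed = JSON.parse(text);
} catch (err) {
  throw new Error(`Classifier returned invalid JSON: ${err.message}`);
}

// Assumed field names, mirroring the Airtable schema above.
const required = ['errorCode', 'category', 'rootCause', 'severity', 'confidence'];
const missing = required.filter((field) => !(field in parsed));
if (missing.length > 0) {
  throw new Error(`Classifier output missing fields: ${missing.join(', ')}`);
}

return [{ json: parsed }];
```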
by Trung Tran
# 📘 Code of Conduct Q&A Slack Chatbot with RAG

> Empower employees to instantly access and understand the company's Code of Conduct via a Slack chatbot, powered by Retrieval-Augmented Generation (RAG) and LLMs.

## 🧑💼 Who's it for

This workflow is designed for:
- **HR and compliance teams** to automate policy-related inquiries
- **Employees** who want quick answers to Code of Conduct questions directly inside Slack
- **Startups or enterprises** that need internal compliance self-service tools powered by AI

## ⚙️ How it works / What it does

This RAG-powered Slack chatbot answers user questions based on your uploaded Code of Conduct PDF, using GPT-4 and embedded document chunks. Here's the flow:

1. **Receive message from Slack:** A webhook triggers when a message is posted in Slack.
2. **Check if it's a valid query:** Filters out non-user messages (e.g., bot mentions).
3. **Run agent with RAG:** Uses GPT-4 with the Query Data Tool to retrieve relevant document chunks and returns a well-formatted, context-aware answer.
4. **Send response to Slack:** Fetches user info and posts the answer back in the same channel.

**Document upload flow:**
- HR can upload the Code of Conduct PDF file.
- It's parsed, chunked, embedded using OpenAI, and stored for future query retrieval (see the chunking sketch below).
- A backup copy is saved to Google Drive.

## 🛠️ How to set up

1. **Prepare your environment:**
   - Slack bot token & webhook configured (sample Slack app manifest: https://wisestackai.s3.ap-southeast-1.amazonaws.com/slack_bot_manifest.json)
   - OpenAI API key (for GPT-4 & embeddings)
   - Google Drive credentials (optional, for backup)
2. **Upload the Code of Conduct PDF:**
   - Use the designated node to upload your document (sample file: https://wisestackai.s3.ap-southeast-1.amazonaws.com/20220419-ingrs-code-of-conduct-policy-en.pdf)
   - This triggers chunking → embedding → data store.
3. **Deploy the chatbot:**
   - Host the webhook and connect it to your Slack app.
   - Share the command format with employees (e.g., @CodeBot Can I accept gifts from partners?)
4. **Monitor and iterate:**
   - Improve the chunk size or embedding model if queries aren't accurate.
   - Review unanswered queries to enhance coverage.

## 📋 Requirements

- n8n (self-hosted or cloud)
- Slack app (with chat:write, users:read, commands)
- OpenAI account (embedding + GPT-4 access)
- Google Drive integration (for backups)
- Uploaded Code of Conduct in PDF format

## 🧩 How to customize the workflow

| What to Customize | How to Do It |
|---|---|
| 🔤 Prompt style | Edit the System & User prompts inside the Code Of Conduct Agent node |
| 📄 Document types | Upload additional policy PDFs and tag them differently in metadata |
| 🤖 Agent behavior | Tune the GPT temperature or replace it with a different LLM |
| 💬 Slack interaction | Customize message formats or trigger phrases |
| 📁 Data store engine | Swap to Pinecone, Weaviate, Supabase, etc., depending on the use case |
| 🌐 Multilingual support | Preprocess text and support locale detection via Slack metadata |
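To make the "parsed, chunked, embedded" step concrete, here is a minimal fixed-size chunker with overlap of the kind typically used before embedding; the 1000-character size and 200-character overlap are illustrative defaults, not the template's configured values.

```javascript
// Hypothetical chunker: split extracted policy text into overlapping chunks
// so passages spanning a chunk boundary are still retrievable.
function chunkText(text, size = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

const text = $input.first().json.text ?? '';
return chunkText(text).map((chunk, i) => ({
  json: { chunkIndex: i, content: chunk },
}));
```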
by Nikan Noorafkan
# 📊 Google Ads + OpenAI + Sheets — Monthly AI Performance Analysis

Automate monthly ad performance insights with AI-powered recommendations.

## 🧩 Overview

This workflow automatically analyzes Google Ads performance every month, using the Google Ads API and OpenAI (GPT-4o) to uncover which ad themes, categories, and messages perform best. It then generates a structured AI report, saves it to Google Sheets, and sends a Slack summary to your marketing team.

💡 Perfect for digital marketers, agencies, and growth analysts who want automated campaign insights without manually crunching numbers.

## ⚙️ Features

✅ Automatically runs on the 1st of each month
✅ Fetches the last 30 days of ad performance via the Google Ads API (GAQL)
✅ Uses GPT-4o for natural-language insights & improvement ideas
✅ Groups ads by category and theme (e.g., "Free Shipping," "Premium")
✅ Generates a clean, formatted Markdown report
✅ Archives reports in Google Sheets for trend tracking
✅ Notifies your Slack channel with AI-driven recommendations

## 🧠 Architecture

| Component | Purpose |
|---|---|
| n8n | Workflow engine |
| Google Ads API | Source of ad performance data |
| OpenAI (GPT-4o) | Analyzes CTR patterns and writes recommendations |
| Google Sheets | Report archiving and history tracking |
| Slack | Team notifications |

## 🧭 Workflow Logic (Summary)

Monthly Trigger (1st of month)
⬇️ 1️⃣ **Get Performance Data (Google Ads API):** Fetches 30-day CTR, clicks, and impressions for all responsive search ads.
⬇️ 2️⃣ **Prepare Performance Data:** Groups data by ad group and theme keywords and builds an AI prompt (see the grouping sketch below).
⬇️ 3️⃣ **AI Agent (LangChain) + GPT-4o:** Analyzes patterns and generates actionable insights.
⬇️ 4️⃣ **Generate Report (Code):** Formats a Markdown report with AI recommendations and KPIs.
⬇️ 5️⃣ **Save to Google Sheets:** Archives results for long-term analytics.
⬇️ 6️⃣ **Send Report to Slack:** Delivers the summary directly to your marketing channel.

## 🔑 Environment Variables

| Variable | Example | Description |
|---|---|---|
| GOOGLE_ADS_CUSTOMER_ID | 123-456-7890 | Google Ads customer account ID |
| GOOGLE_ADS_API_VERSION | v17 | Current Ads API version |
| GOOGLE_SHEET_ID | 1xA1B2c3D4EFgH... | Target spreadsheet ID |
| OPENAI_API_KEY | sk-xxxxx | OpenAI API key for GPT-4o |
| SLACK_WEBHOOK_URL | https://hooks.slack.com/... | Slack incoming webhook |
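Before moving to credentials, here is a sketch of the "Prepare Performance Data" grouping described above: a Code node that aggregates ads by theme keyword and computes per-theme CTR. The theme list and field paths are assumptions based on the report example further down, not the template's exact node code.

```javascript
// Hypothetical grouping: aggregate clicks/impressions per theme keyword.
const THEMES = ['Free Shipping', 'Premium', 'New Arrivals']; // illustrative list

const totals = {};
for (const item of $input.all()) {
  const row = item.json;
  const headlines = JSON.stringify(
    row.adGroupAd?.ad?.responsiveSearchAd?.headlines ?? []
  );
  for (const theme of THEMES) {
    if (!headlines.includes(theme)) continue;
    totals[theme] ??= { ads: 0, clicks: 0, impressions: 0 };
    totals[theme].ads += 1;
    totals[theme].clicks += Number(row.metrics?.clicks ?? 0);
    totals[theme].impressions += Number(row.metrics?.impressions ?? 0);
  }
}

const summary = Object.entries(totals).map(([theme, t]) => ({
  theme,
  ads: t.ads,
  ctr: t.impressions ? +((100 * t.clicks) / t.impressions).toFixed(1) : 0,
}));

return [{ json: { summary } }];
```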
## 🔐 Credential Setup

| Service | Type | Required Scopes |
|---|---|---|
| Google Ads | OAuth2 (googleAdsOAuth2Api) | https://www.googleapis.com/auth/adwords |
| OpenAI | API key (openAiApi) | Full access |
| Google Sheets | OAuth2 | https://www.googleapis.com/auth/spreadsheets |
| Slack | Webhook | chat:write |

## 🧱 Node-by-Node Breakdown

| Node | Purpose | Key Configuration |
|---|---|---|
| Monthly Trigger | Starts the workflow on the 1st of every month | Cron: 0 0 1 * * |
| Get Performance Data | Queries Ads data | Endpoint: https://googleads.googleapis.com/v17/customers/{id}/googleAds:search; Query: GAQL (CTR, clicks, impressions, last 30 days) |
| Prepare Performance Data | Aggregates data and builds the AI prompt | Groups by ad group and theme, computes CTRs |
| AI Agent – Analyze Performance | Passes formatted data to GPT-4o | System message: "You are a Google Ads performance analyst…" |
| OpenAI Chat Model (GPT-4o) | Analytical reasoning engine | Model: gpt-4o, temperature 0.2 |
| Generate Report | Parses AI output, formats the Markdown report | Adds recommendations + next steps |
| Save Report to Sheets | Archives the report | Sheet name: Performance Reports |
| Send Report (Slack) | Sends the summary | Uses the report_markdown variable |

## 🧠 AI Report Example

**30-Day Performance Analysis Report**

**Executive Summary**
- Analyzed: 940 ads
- Period: Last 30 days

**Top Performing Categories**
- Running Shoes: 9.4% CTR (120 ads)
- Fitness Apparel: 8.2% CTR (90 ads)

**Top Performing Themes**
- "Free Shipping" messaging: 9.8% CTR (58 ads)
- "Premium" messaging: 8.5% CTR (44 ads)

**AI-Powered Recommendations**
- [HIGH] Emphasize "Free Shipping" across more ad groups. Expected impact: +5% CTR
- [MEDIUM] Test "Premium Quality" vs. "New Arrivals." Expected impact: +3% CTR

**Next Steps**
- Implement new ad variations
- A/B test messaging
- Re-analyze next month

## 🧩 Testing Procedure

1️⃣ Temporarily disable the cron trigger.
2️⃣ Run the workflow manually.
3️⃣ Confirm:
   - The Google Ads node returns JSON with results.
   - The AI Agent output is valid JSON.
   - The report is written to Sheets.
   - The Slack message is received.
4️⃣ Re-enable the monthly trigger once verified.

## 🧾 Output in Google Sheets

| Date | Ads Analyzed | Top Category | Top Theme | Key Recommendations | Generated At |
|---|---|---|---|---|---|
| 2025-10-01 | 940 | Running Shoes | Free Shipping | "Add Free Shipping copy to 10 ads" | 2025-10-01T00:05Z |

## 🪜 Maintenance

| Frequency | Task |
|---|---|
| Monthly | Review AI accuracy and update the themes list |
| Quarterly | Refresh Google Ads API credentials |
| As needed | Update GAQL fields for new metrics |

## ⚙️ API Verification

Endpoint: POST https://googleads.googleapis.com/v17/customers/{customer_id}/googleAds:search
Scopes: https://www.googleapis.com/auth/adwords

GAQL Query:

```
SELECT
  ad_group_ad.ad.id,
  ad_group_ad.ad.responsive_search_ad.headlines,
  ad_group.name,
  metrics.impressions,
  metrics.clicks,
  metrics.ctr
FROM ad_group_ad
WHERE segments.date DURING LAST_30_DAYS
  AND metrics.impressions > 100
ORDER BY metrics.clicks DESC
LIMIT 1000
```

✅ Fully valid query, verified for GAQL syntax, fields, and resource joins.
✅ OAuth2 flow handled by n8n's googleAdsOAuth2Api.
✅ Optional: add "timeout": 60000 for large accounts.

## 📈 Metrics of Success

| KPI | Target |
|---|---|
| Report accuracy | ≥ 95% |
| Monthly automation success | ≥ 99% |
| CTR improvement tracking | +3–5% over time |

## 🔗 References

- Google Ads API Docs
- LangChain in n8n
- OpenAI API Reference
- Google Sheets API
- Slack Incoming Webhooks

## 🎯 Conclusion

You now have a fully automated Google Ads performance analysis workflow powered by:
- **Google Ads API** for granular metrics
- **OpenAI GPT-4o** for intelligent recommendations
- **Google Sheets** for archiving
- **Slack** for team-wide updates

💡 Result: a recurring, data-driven optimization loop that improves ad performance every month, with zero manual effort.
by Muhammad Asadullah
# Daily Blog Automation Workflow

Fully automated blog creation system using n8n + AI agents + image generation.

## Overview

This workflow automates the entire blog creation pipeline, from topic research to final publication. Three specialized AI agents collaborate to produce publication-ready blog posts with custom images, all saved directly to your Supabase database.

## How It Works

### 1. Research Agent (Topic Discovery)
- **Triggers:** Runs on a schedule (default: daily at 4 AM)
- **Process:**
  - Fetches existing blog titles from Supabase to avoid duplicates
  - Uses Google Search + RSS feeds to identify trending topics in your niche
  - Scrapes competitor content to find content gaps
  - Generates detailed topic briefs with SEO keywords, search intent, and differentiation angles
- **Output:** Comprehensive research document with SERP analysis and content strategy

### 2. Writer Agent (Content Creation)
- **Triggers:** Receives research from Agent 1
- **Process:**
  - Writes the full blog article based on the research brief
  - Follows strict SEO and readability guidelines (no AI fluff, natural tone, actionable content)
  - Structures content with proper HTML markup
  - Includes key sections: hook, takeaways, frameworks, FAQs, CTAs
  - Places image placeholders with mock URLs (https://db.com/image_1, etc.)
- **Output:** Complete JSON object with title, slug, excerpt, tags, category, and full HTML content

### 3. Image Prompt Writer (Visual Generation)
- **Triggers:** Receives blog content from Agent 2
- **Process:**
  - Analyzes the blog content to determine the number and type of images needed
  - Generates detailed 150-word prompts for each image (feature image + content images)
  - Creates prompts optimized for the Nano-Banana image model
  - Names each image descriptively for SEO
- **Output:** Structured prompts for 3-6 images per blog post

### 4. Image Generation Pipeline
- **Process:**
  - Loops through each image prompt
  - Generates images via the Nano-Banana API (Wavespeed.ai)
  - Downloads and converts images to PNG
  - Uploads them to a Supabase storage bucket
  - Generates permanent signed URLs
  - Replaces mock URLs in the HTML with real image URLs (a sketch appears at the end of this description)
- **Output:** Blog HTML with all images embedded

### 5. Publication
- The final blog post is saved to the Supabase blogs table as a draft
- Ready for immediate publishing or review

## Key Features

✅ **Duplicate Prevention:** Checks existing blogs before researching new topics
✅ **SEO Optimized:** Natural language, proper heading structure, keyword integration
✅ **Human-Like Writing:** No robotic phrases, varied sentence structure, actionable advice
✅ **Custom Images:** Generated specifically for each blog's content
✅ **Fully Structured:** JSON output with all metadata (tags, category, excerpt, etc.)
✅ **Error Handling:** Automatic retries with wait periods between agent calls
✅ **Tool Integration:** Google Search, URL scraping, RSS feeds for research

## Setup Requirements

### 1. API Keys Needed
- **Google Gemini API:** For Gemini 2.5 Pro/Flash models (content generation/writing)
- **Groq API (optional):** For the Kimi-K2-Instruct model (research/writing)
- **Serper.dev API:** For Google Search (2,500 free searches/month)
- **Wavespeed.ai API:** For Nano-Banana image generation
- **Supabase account:** For the database and image storage

### 2. Supabase Setup
- Create a blogs table with the fields: title, slug, excerpt, category, tags, featured_image, status, featured, content
- Create a storage bucket for blog images
- Configure the bucket as public or use signed URLs
### 3. Workflow Configuration

Update these placeholders:
- **RSS Feed URLs:** Replace [your website's rss.xml] with your site's RSS feed
- **Storage URLs:** Update the Supabase storage paths in the "Upload object" and "Generate presigned URL" nodes
- **API Keys:** Add your credentials to all HTTP Request nodes
- **Niche/Brand:** Customize the Research Agent system prompt with your industry keywords
- **Writing Style:** Adjust the Writer Agent prompt for your brand voice

## Customization Options

### Change Image Provider
Replace the "nano banana" node with:
- Gemini Imagen 3/4
- DALL-E 3
- Midjourney API
- Any Wavespeed.ai model

### Adjust Schedule
Modify the "Schedule Trigger" to run:
- Multiple times daily
- On specific days of the week
- On demand via webhook

### Alternative Research Tools
Replace Serper.dev with:
- Perplexity API (included as an alternative node)
- Custom web scraping
- Different search providers

## Output Format

```json
{
  "title": "Your SEO-Optimized Title",
  "slug": "your-seo-optimized-title",
  "excerpt": "Compelling 2-3 sentence summary with key benefits.",
  "category": "Your Category",
  "tags": ["tag1", "tag2", "tag3", "tag4"],
  "author_name": "Your Team Name",
  "featured": false,
  "status": "draft",
  "content": "...complete HTML with embedded images..."
}
```

## Performance Notes

- **Average runtime:** 15-25 minutes per blog post
- **Cost per post:** ~$0.10-0.30 (depending on API usage)
- **Image generation:** 10-15 seconds per image with Nano-Banana
- **Retry logic:** Automatically handles API timeouts with 5-15 minute wait periods

## Best Practices

1. **Review Before Publishing:** The workflow saves posts with "draft" status for human review
2. **Monitor API Limits:** Track Serper.dev searches and image generation quotas
3. **Test Custom Prompts:** Adjust the Research/Writer prompts to match your brand
4. **Image Quality:** Review generated images; regenerate if needed
5. **SEO Validation:** Check slugs and meta descriptions before going live

## Workflow Architecture

3 main phases:
1. Research → Writer → Image Prompts (sequential AI agent chain)
2. Image Generation → Upload → URL Replacement (loop-based processing)
3. Final Assembly → Database Insert (single save operation)

Error handling:
- Wait nodes between agents prevent rate limiting
- Retry logic on agent failures (max 2 retries)
- Conditional checks ensure content quality before proceeding

**Result:** Hands-free blog publishing that maintains quality while saving 3-5 hours per post.
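Returning to the image pipeline's URL-replacement step (phase 2 above), a Code-node sketch that swaps the mock https://db.com/image_N placeholders for the signed Supabase URLs might look like this; the input field names are assumptions for illustration.

```javascript
// Hypothetical n8n Code node: replace mock image URLs with signed URLs.
// Assumed input: { content: "<html...>", imageUrls: ["https://...", ...] }
const { content, imageUrls } = $input.first().json;

let html = content;
imageUrls.forEach((url, i) => {
  // Placeholders are numbered from 1: https://db.com/image_1, image_2, ...
  // With 3-6 images per post, prefix collisions (image_1 vs image_10) can't occur.
  html = html.split(`https://db.com/image_${i + 1}`).join(url);
});

return [{ json: { content: html } }];
```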
by explorium
# Explorium Agent for Slack

AI-powered Slack bot for business intelligence queries using the Explorium API through MCP.

## Prerequisites

- Slack workspace with admin access
- Anthropic API key (you can replace this with another LLM chat model)
- Explorium API key

## 1. Create Slack App

**Create the app**
1. Go to api.slack.com/apps
2. Click Create New App → From scratch
3. Give it a name (e.g., "Explorium Agent") and select your workspace

**Bot permissions (OAuth & Permissions)**

Add these Bot Token Scopes:
- app_mentions:read
- channels:history
- channels:read
- chat:write
- emoji:read
- groups:history
- groups:read
- im:history
- im:read
- mpim:history
- mpim:read
- reactions:read
- users:read

**Enable events**
1. Event Subscriptions → Enable
2. Add the Request URL (from the n8n Slack Trigger node)
3. Subscribe to these bot events:
   - app_mention
   - message.channels
   - message.groups
   - message.im
   - message.mpim
   - reaction_added

**Install the app**
1. Install App → Install to Workspace
2. Copy the Bot User OAuth Token (xoxb-...)

## 2. Configure n8n

**Import & setup**
1. Import this JSON template
2. **Slack Trigger node:** Add a Slack credential with the bot token, copy the webhook URL, and paste it into the Slack Event Subscriptions Request URL
3. **Anthropic Chat Model node:** Add an Anthropic API credential; model: claude-haiku-4-5-20251001 (you can replace it with another chat model)
4. **MCP Client node:** Endpoint: https://mcp.explorium.ai/mcp; Header Auth: add your Explorium API key

## Usage Examples

- @ExploriumAgent find tech companies in SF with 50-200 employees
- @ExploriumAgent show Microsoft's technology stack
- @ExploriumAgent get CMO contacts at healthcare companies
by Jasurbek
## Overview

Automatically anonymize CVs/resumes while preserving professional information. Perfect for recruitment agencies ensuring GDPR compliance and bias-free hiring.

## Features

- Supports multiple file formats (PDF, DOCX, etc.)
- Multi-language support (preserves the original language)
- Removes PII: names, emails, phones, addresses
- Preserves: skills, experience, dates, achievements
- Outputs a professionally formatted PDF

## Requirements

- OpenAI API key (GPT-4 recommended)
- Stirling PDF service (self-hosted or cloud)
- n8n version 1.0+

## Setup Instructions

1. Configure OpenAI credentials
2. Set up the Stirling PDF API endpoint
3. Update the API key in the HTTP Request nodes
4. Activate the workflow
5. Test with a sample CV

## Usage

POST to the webhook endpoint with the CV file in the UploadCV field (see the sketch below).

## Use Cases

- Recruitment agencies (GDPR compliance)
- HR departments (bias-free screening)
- Job boards (candidate privacy)
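A minimal sketch of that webhook call from Node.js 18+, assuming a placeholder webhook URL and sample file path; only the UploadCV field name comes from the template itself.

```javascript
// Hypothetical client call: send a CV to the anonymization webhook
// (Node.js 18+, run as an ES module).
import { readFile } from 'node:fs/promises';

const WEBHOOK_URL = 'https://your-n8n-host/webhook/anonymize-cv'; // placeholder

const pdfBytes = await readFile('./sample-cv.pdf'); // placeholder file
const form = new FormData();
form.append(
  'UploadCV',
  new Blob([pdfBytes], { type: 'application/pdf' }),
  'sample-cv.pdf'
);

const response = await fetch(WEBHOOK_URL, { method: 'POST', body: form });
console.log(response.status, await response.text());
```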
by Oneclick AI Squad
This is an advanced n8n workflow for transforming product concepts into 3D showcase videos with AI packaging design and auto-rotation rendering.

## Workflow Features

🎯 **Core Capabilities:**
- **AI Product Concept Generation** - Uses Claude Sonnet 4 to analyze product prompts and generate comprehensive 3D specifications
- **Automated Packaging Design** - DALL-E 3 generates professional packaging visuals
- **Texture Map Generation** - Creates PBR-ready texture maps for realistic materials
- **3D Scene Script Generation** - Produces complete Blender Python scripts with:
  - Product geometry based on shape
  - Professional 3-point lighting (key, fill, rim)
  - 360° rotation animation (8 seconds)
  - Camera setup and render settings
- **Preview Rendering** - Generates photorealistic 3D preview images
- **Video Processing** - Handles encoding and upload to video hosting services
- **Database Storage** - Saves all showcase data for tracking
- **Status Monitoring** - Checks render progress with automatic retry logic (see the polling sketch below)

## 📋 Required Setup

API credentials needed:
- Anthropic API (for Claude AI)
- OpenAI API (for DALL-E image generation)
- Replicate API (optional, for additional rendering)
- Video hosting service (Cloudflare Stream or similar)
- PostgreSQL database

## 🔧 How to Use

1. **Import the JSON** - Copy the artifact content and import it directly into n8n
2. **Configure credentials** - Add your API keys in the n8n credentials manager
3. **Activate the workflow** - Enable the webhook trigger
4. **Send a request** to the webhook endpoint:

POST /product-showcase
```json
{
  "productPrompt": "A premium organic energy drink in a sleek aluminum can with nature-inspired graphics"
}
```

## 📤 Output Includes

- Product specifications (dimensions, materials, colors)
- Packaging design image URL
- Texture map URLs
- Downloadable Blender script
- 3D preview render
- Video showcase URL
- Rendering metadata
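To make the status-monitoring capability concrete, a polling sketch with retry logic might look like the following; the endpoint, response field names, and intervals are assumptions for illustration, not the workflow's actual implementation.

```javascript
// Hypothetical render-status polling with bounded retries.
async function waitForRender(statusUrl, { attempts = 20, intervalMs = 15000 } = {}) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(statusUrl);
    const { status, videoUrl } = await res.json(); // assumed response shape
    if (status === 'complete') return videoUrl;
    if (status === 'failed') throw new Error('Render failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Render timed out');
}

// Usage (placeholder URL):
// const url = await waitForRender('https://your-host/render-status/123');
```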